Machine Learning Systems Engineer

Who We Are

At RelationalAI, we are building the future of intelligent data systems through our cloud-native relational knowledge graph management system—a platform designed for learning, reasoning, and prediction.

We are a remote-first, globally distributed team with colleagues across six continents. From day one, we’ve embraced asynchronous collaboration and flexible schedules, recognizing that innovation doesn’t follow a 9-to-5.

We are committed to an open, transparent, and inclusive workplace. We value the unique backgrounds of every team member and believe in fostering a culture of respect, curiosity, and innovation. We support each other’s growth and success—and take the well-being of our colleagues seriously. We encourage everyone to find a healthy balance that affords them a productive, happy life, wherever they choose to live.

We bring together engineers who love building core infrastructure, obsess over developer experience, and want to make complex systems scalable, observable, and reliable.

Machine Learning Systems Engineer 

Location: Remote (San Francisco Bay Area / North America / South America)

Experience Level: 3+ years of experience in machine learning engineering or research

About ScalarLM

This role involves working extensively with the ScalarLM framework and its team.

ScalarLM unifies vLLM, Megatron-LM, and HuggingFace for fast LLM training, inference, and self-improving agents—all via an OpenAI-compatible interface. ScalarLM builds on top of the vLLM inference engine, the Megatron-LM training framework, and the HuggingFace model hub. It unifies the capabilities of these tools into a single platform, enabling users to easily perform LLM inference and training and to build higher-level applications such as agents with a twist: they can teach themselves new abilities via backpropagation.
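To make the OpenAI-compatible interface concrete, here is a minimal sketch of querying a ScalarLM deployment with the standard OpenAI Python client. The base URL, API key handling, and model name below are illustrative placeholders, not documented ScalarLM defaults.

  # Minimal sketch: call an OpenAI-compatible endpoint with the openai Python client.
  # The base_url, api_key, and model name are assumed placeholders, not ScalarLM defaults.
  from openai import OpenAI

  client = OpenAI(
      base_url="http://localhost:8000/v1",   # assumed address of a local ScalarLM server
      api_key="not-needed-locally",          # many local OpenAI-compatible servers ignore the key
  )

  response = client.chat.completions.create(
      model="my-finetuned-model",            # placeholder model name
      messages=[{"role": "user", "content": "Summarize what ScalarLM does."}],
  )
  print(response.choices[0].message.content)

Because the interface is OpenAI-compatible, the same client code can point at a hosted deployment by changing only the base URL and credentials.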

ScalarLM is inspired by the work of Seymour Roger Cray (September 28, 1925 – October 5, 1996), an American electrical engineer and supercomputer architect who designed a series of computers that were the fastest in the world for decades, and founded Cray Research, which built many of these machines. Called "the father of supercomputing", Cray has been credited with creating the supercomputer industry.

It is a fully open source project (CC0-licensed) focused on democratizing access to cutting-edge LLM infrastructure that combines training and inference in a unified platform, enabling the development of self-improving AI agents similar to DeepSeek R1.

ScalarLM is supported and maintained by TensorWave in addition to RelationalAI.

The Role:

As a Machine Learning Systems Engineer, you will contribute directly to our machine learning infrastructure and to the ScalarLM open source codebase, and you will build large-scale language model applications on top of it. You’ll operate at the intersection of high-performance computing, distributed systems, and cutting-edge machine learning research, developing the fundamental infrastructure that enables researchers and organizations worldwide to train and deploy large language models at scale.

This is an opportunity to take on technically demanding projects, contribute to foundational systems, and help shape the next generation of intelligent computing.

You Will: 

  • Contribute code and performance improvements to the open source project.
  • Develop and optimize distributed training algorithms for large language models.
  • Implement high-performance inference engines and optimization techniques.
  • Work on integration between vLLM, Megatron-LM, and HuggingFace ecosystems.
  • Build tools for seamless model training, fine-tuning, and deployment.
  • Optimize performance on advanced GPU architectures.
  • Collaborate with the open source community on feature development and bug fixes.
  • Research and implement new techniques for self-improving AI agents.

Who You Are

Technical Skills:

  • Programming Languages: Proficiency in both C/C++ and Python
  • High Performance Computing: Deep understanding of HPC concepts, including:
    • MPI (Message Passing Interface) programming and optimization
    • Bulk Synchronous Parallel (BSP) computing models
    • Multi-GPU and multi-node distributed computing
    • CUDA/ROCm programming experience preferred
  • Machine Learning Foundations:
    • Solid understanding of gradient descent and backpropagation algorithms
    • Experience with transformer architectures and the ability to explain their mechanics
    • Knowledge of deep learning training and its applications
    • Understanding of distributed training techniques (data parallelism, model parallelism, pipeline parallelism, large batch training, optimization)
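As a rough illustration of the first technique in the list above, data parallelism, here is a minimal sketch using PyTorch DistributedDataParallel. The toy model, random data, and launch configuration are illustrative placeholders and are not drawn from ScalarLM or Megatron-LM.

  # Minimal data-parallel training sketch with PyTorch DDP (illustrative placeholders only).
  # Launch with, e.g.: torchrun --nproc_per_node=<num_gpus> ddp_sketch.py
  import os
  import torch
  import torch.distributed as dist
  from torch.nn.parallel import DistributedDataParallel as DDP

  def main():
      # torchrun sets RANK, WORLD_SIZE, LOCAL_RANK, MASTER_ADDR, and MASTER_PORT.
      dist.init_process_group(backend="nccl" if torch.cuda.is_available() else "gloo")
      local_rank = int(os.environ.get("LOCAL_RANK", "0"))
      device = torch.device(f"cuda:{local_rank}" if torch.cuda.is_available() else "cpu")

      # Toy model standing in for a transformer; each rank holds a full replica.
      model = torch.nn.Linear(1024, 1024).to(device)
      ddp_model = DDP(model, device_ids=[local_rank] if torch.cuda.is_available() else None)
      optimizer = torch.optim.AdamW(ddp_model.parameters(), lr=1e-4)

      for _ in range(10):
          x = torch.randn(32, 1024, device=device)  # each rank would load its own data shard
          loss = ddp_model(x).pow(2).mean()
          loss.backward()                           # DDP all-reduces gradients across ranks
          optimizer.step()
          optimizer.zero_grad()

      dist.destroy_process_group()

  if __name__ == "__main__":
      main()

Model, pipeline, and tensor parallelism go further by sharding the model itself across devices, which is the kind of capability Megatron-LM provides and ScalarLM builds on.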

Research and Development 

  • Publications: Experience with machine learning research and publications preferred
  • Research Skills: Ability to read, understand, and implement techniques from recent ML research papers
  • Open Source: Demonstrated commitment to open source development and community collaboration

Experience

  • 3+ years of experience in machine learning engineering or research.
  • Experience with large-scale distributed training frameworks (Megatron-LM, DeepSpeed, FairScale, etc.).
  • Familiarity with inference optimization frameworks (vLLM, TensorRT, etc.).
  • Experience with containerization (Docker, Kubernetes) and cluster management.
  • Background in systems programming and performance optimization.

Bonus points for:

  • PhD or MS in Computer Science, Computer Engineering, Machine Learning, or related field.
  • Experience with SLURM, Kubernetes, or other cluster orchestration systems.
  • Knowledge of mixed precision training, data parallel training, and scaling laws.
  • Experience with transformer architectures, PyTorch, and decoding algorithms.
  • Familiarity with the high-performance GPU programming ecosystem.
  • Previous contributions to major open source ML projects.
  • Experience with MLOps and model deployment at scale.
  • Understanding of modern attention mechanisms (multi-head attention, grouped query attention, etc.).

Why RelationalAI

RelationalAI is committed to an open, transparent, and inclusive workplace. We value the unique backgrounds of our team. We are driven by curiosity, value innovation, and help each other to succeed and to grow. We take the well-being of our colleagues seriously, and offer flexible working hours so each individual can find a healthy balance that affords them a productive, happy life wherever they choose to live.

🌎 Global Benefits at RelationalAI

At RelationalAI, we believe that people do their best work when they feel supported, empowered, and balanced. Our benefits prioritize well-being, flexibility, and growth, ensuring you have the resources to thrive both professionally and personally.

  • We are all owners in the company and reward you with a competitive salary and equity.
  • Work from anywhere in the world.
  • Comprehensive benefits coverage, including global mental health support.
  • Open PTO – Take the time you need, when you need it.
  • Company Holidays, Your Regional Holidays, and RAI Holidays—where we take one Monday off each month, followed by a week without recurring meetings, giving you the time and space to recharge.
  • Paid parental leave – Supporting new parents as they grow their families.
  • We invest in your learning & development.
  • Regular team offsites and global events – building strong connections while working remotely through gatherings that bring everyone together.
  • A culture of transparency & knowledge-sharing – Open communication through team standups, fireside chats, and open meetings.

Country Hiring Guidelines:

RelationalAI hires around the world. All of our roles are remote; however, some locations might carry specific eligibility requirements.

Because of this, understanding location and visa requirements helps us better prepare to onboard our colleagues.

Our People Operations team can help answer any questions about location once you have started the recruitment process.


Privacy Policy: EU residents applying for positions at RelationalAI can see our Privacy Policy here.

California residents applying for positions at RelationalAI can see our Privacy Policy here.


RelationalAI is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, color, gender identity or expression, marital status, national origin, disability, protected veteran status, race, religion, pregnancy, sexual orientation, or any other characteristic protected by applicable laws, regulations and ordinances.

Top Skills

C/C++
CUDA
DeepSpeed
Docker
FairScale
Kubernetes
Megatron-LM
MPI
Python
TensorRT
vLLM

The Company
HQ: Berkeley, CA
148 Employees
Year Founded: 2017

What We Do

Knowledge Graphs without Compromises

At RelationalAI, we are building the world’s fastest, most scalable, most expressive, most open relational knowledge graph management system (RKGMS), built on top of the world’s only complete relational reasoning engine that uses the knowledge and data captured in enterprise databases to learn and reason.

The results are stunning.

World-class talent driving knowledge graph innovation

We believe that future enterprise systems will be built with relational knowledge graphs as a foundation and that each component of the system will either be learned (via machine learning) or declared (via a reasoner).

The days of instructing the computer, step-by-step, on how to perform a task will be behind us.

Systems built this way, with less compute and fewer human resources, will increase margins, accelerate growth, and strengthen defensive moats.

At RelationalAI, we have brought together a group of leading researchers, data scientists, computer scientists and software engineers with extensive experience applying novel technologies to a wide range of complex problems in multiple industries.

Our team benefits from this unique combination of in-house expertise and active collaboration with the world’s foremost research institutions in areas ranging from machine learning and operations research to databases and programming languages. This collaboration regularly yields award-winning publications at the most respected academic conferences and journals.

Similar Jobs

Stream Logo Stream

Staff Software Engineer

Cloud • Machine Learning • Other • Software
In-Office or Remote
3 Locations

Dandy Logo Dandy

Senior Software Engineer

Computer Vision • Healthtech • Information Technology • Logistics • Machine Learning • Software • Manufacturing
Remote
EU

Form3 Logo Form3

Senior Full-stack Engineer

Fintech • Payments • Financial Services
Remote
2 Locations

P2P.org Logo P2P.org

Product Analyst

Information Technology
In-Office or Remote
35 Locations

Similar Companies Hiring

Scrunch AI Thumbnail
Software • SEO • Marketing Tech • Information Technology • Artificial Intelligence
Salt Lake City, Utah
Credal.ai Thumbnail
Software • Security • Productivity • Machine Learning • Artificial Intelligence
Brooklyn, NY
Standard Template Labs Thumbnail
Software • Information Technology • Artificial Intelligence
New York, NY
10 Employees

Sign up now Access later

Create Free Account

Please log in or sign up to report this job.

Create Free Account