Research Engineer - RL Infrastructure

Reposted 23 Days Ago
Hiring Remotely in San Francisco, CA, USA
In-Office or Remote
Mid level
Artificial Intelligence • Software
The Role
The Research Engineer will optimize large-scale RL training systems by enhancing performance in networking, memory, and computation. Responsibilities include designing low-level optimizations, collaborating with various teams to improve infrastructure, and contributing to open-source projects.
Summary Generated by Built In
Building Open Superintelligence Infrastructure

Prime Intellect is building the open superintelligence stack: from frontier agentic models to the infrastructure that enables anyone to train, adapt, and deploy them.

We unify globally distributed compute into a single control plane and pair it with the full reinforcement learning post-training stack: environments, secure sandboxes, verifiable evaluations, and our async RL trainer. We enable researchers, startups, and enterprises to run end-to-end RL at frontier scale, adapting models to real tools, workflows, and deployment environments.

We are looking for a Research Engineer to work on the systems layer behind large-scale RL training. This role is for someone who enjoys going deep on performance: optimizing kernels, improving memory and communication efficiency, scaling distributed workloads, and pushing the throughput and reliability of training systems closer to hardware limits.

If you care about making large-scale model training faster, cheaper, and more robust, we’d love to talk.

What You’ll Work On
  • Build and optimize the systems infrastructure behind large-scale RL and distributed training workloads.

  • Improve end-to-end training efficiency across compute, memory, networking, and scheduling layers.

  • Design and implement low-level performance optimizations, including kernels, communication paths, and runtime improvements.

  • Work on distributed training systems spanning data, tensor, and pipeline parallel workloads.

  • Help shape the architecture of our RL training stack, including async rollout and post-training systems.

  • Contribute to open-source libraries and internal infrastructure used for frontier-scale model training.

  • Collaborate closely with researchers and infrastructure engineers to translate bottlenecks into concrete systems improvements.

  • Stay at the frontier of training systems, inference systems, compiler/runtime tooling, and hardware-aware optimization techniques.

You May Be a Fit If You Have
  • Strong systems engineering experience in AI/ML infrastructure, especially around large-scale model training or inference.

  • Deep familiarity with PyTorch and distributed training frameworks such as PyTorch Distributed, DeepSpeed, FSDP, Megatron, vLLM, Ray, or related tooling.

  • Experience optimizing training performance across kernels, memory movement, communication overhead, or parallelization strategy.

  • Hands-on experience with large-scale training techniques including data parallelism, tensor parallelism, and pipeline parallelism.

  • Strong understanding of GPU architecture, profiling, and performance debugging.

  • Ability to identify bottlenecks across the stack and drive improvements from first principles.

  • Comfort working in a fast-moving environment with ambiguous problems and high ownership.

Especially Exciting
  • Experience writing or optimizing CUDA / Triton kernels.

  • Experience with compiler or runtime optimization for ML systems.

  • Experience working on RL training infrastructure, rollout systems, or asynchronous training pipelines.

  • Experience with multi-node GPU clusters and high-performance networking.

  • Contributions to open-source ML systems or infrastructure projects.

  • Interest in publishing technical work or sharing insights through engineering blogs and technical writing.

Why This Role Matters

The next frontier in AI will not be unlocked by models alone. It will be unlocked by systems that let those models train faster, adapt continuously, and operate across real environments at scale.

That infrastructure does not exist yet in the form the world needs.

We’re building it.

Benefits & Perks
  • Cash compensation range of $150K-$300K, plus equity.

  • Flexible work arrangements, with the option to work remotely or in person from our San Francisco office.

  • Visa sponsorship and relocation support for international candidates.

  • Quarterly team offsites, hackathons, conferences, and learning opportunities.

  • A deeply technical, high-agency team working on infrastructure for open superintelligence.

If you’re excited about building the systems foundation for frontier-scale RL and open superintelligence, we’d love to hear from you.

The Company
HQ: San Francisco, CA
16 Employees

What We Do

Prime Intellect democratizes AI development at scale. Our platform makes it easy to find global compute resources and train state-of-the-art models through distributed training across clusters, with users collectively owning the resulting open AI innovations, from language models to scientific breakthroughs.
