Machine Learning Engineer - Training & Infrastructure

Reposted 18 Days Ago
San Francisco, CA
In-Office
Mid level
Artificial Intelligence • Software

About P-1 AI:

We are building an engineering AGI. We founded P-1 AI with the conviction that the greatest impact of artificial intelligence will be on the built world—helping mankind conquer nature and bend it to our will. Our first product is Archie, an AI engineer capable of quantitative and spatial reasoning over physical product domains that performs at the level of an entry-level design engineer. We aim to put an Archie on every engineering team at every industrial company on earth.

Our founding team includes top minds in deep learning, model-based engineering, and the industries we serve. We just closed a $23 million seed round led by Radical Ventures that includes a number of other AI and industrial luminaries (from OpenAI, DeepMind, and elsewhere).

About the Role:

We’re looking for an experienced engineer to take ownership of LLM training operations across our applied research team. Your focus will be on making large-scale GPU training run reliably, efficiently, and fast on a dedicated mid-size GPU cluster and possibly on cloud platforms as well.

You’ll work closely with researchers and ML engineers developing new models and agentic systems, ensuring their experiments scale smoothly across multi-node GPU clusters. From debugging NCCL deadlocks to optimizing FSDP configs, you’ll be the go-to person for training infrastructure and performance.
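For context on the kind of debugging this describes: diagnosing a hung NCCL collective usually starts by turning up NCCL's own logging and making failures surface instead of hanging. A minimal sketch of the environment variables involved (values here are illustrative, not a prescribed setup):

```shell
# Surface NCCL's internal logging to diagnose deadlocks and transport issues
export NCCL_DEBUG=INFO             # logs ring/tree setup, chosen transports, errors
export NCCL_DEBUG_SUBSYS=INIT,NET  # restrict output to init and network subsystems

# Ask PyTorch to fail fast on collective errors instead of hanging indefinitely
export TORCH_NCCL_BLOCKING_WAIT=1
export TORCH_NCCL_ASYNC_ERROR_HANDLING=1
```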

What You’ll Do:

  • Own the training pipeline for large-scale LLM fine-tuning and post-training workflows

  • Configure, launch, monitor, and debug multi-node distributed training jobs using FSDP, DeepSpeed, or custom wrappers

  • Contribute to upstream and internal forks of training frameworks like TorchTune, TRL, and Hugging Face Transformers

  • Tune training parameters, memory footprints, and sharding strategies for optimal throughput

  • Work closely with infra and systems teams to maintain the health and utilization of our GPU clusters (e.g., InfiniBand, NCCL, Slurm, Kubernetes)

  • Implement features or fixes to unblock novel use cases in our LLM training stack
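As a rough illustration of the launch work these bullets describe, a multi-node training job under Slurm is often a thin sbatch wrapper around `torchrun`; the node counts, port, script, and config path below are placeholders, not this team's actual setup:

```shell
#!/bin/bash
#SBATCH --job-name=llm-finetune
#SBATCH --nodes=4
#SBATCH --gpus-per-node=8
#SBATCH --ntasks-per-node=1

# First node in the allocation serves as the rendezvous host
head_node=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n 1)

# One torchrun per node; torchrun spawns one worker process per GPU
srun torchrun \
  --nnodes "$SLURM_NNODES" \
  --nproc_per_node 8 \
  --rdzv_backend c10d \
  --rdzv_endpoint "${head_node}:29500" \
  train.py --config configs/fsdp_finetune.yaml  # placeholder script and config
```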

About you:

  • 3+ years working with large-scale ML systems or training pipelines

  • Deep familiarity with PyTorch, especially distributed training via FSDP, DeepSpeed, or DDP

  • Comfortable navigating training libraries like TorchTune, Accelerate, or Trainer APIs

  • Practical experience with multi-node GPU training, including profiling, debugging, and optimizing jobs

  • Understanding of low-level components like NCCL, InfiniBand, CUDA memory, and model partitioning strategies

  • You enjoy bridging research and engineering—making messy ideas actually run on hardware

Nice to Have:

  • Experience maintaining Slurm, Ray, or Kubernetes clusters

  • Past contributions to open-source ML training frameworks

  • Exposure to model scaling laws, checkpointing formats (e.g., HF sharded safetensors vs. distcp), or mixed precision training

  • Familiarity with on-policy reinforcement learning setups with inference (policy rollouts) as part of the training loop, such as GRPO, PPO, or A2C

  • Experience working at a startup

Interview process:

  • Initial screening - Head of Talent (30 mins)

  • Hiring manager interview - Head of AI (45 mins)

  • Technical interview - AI Chief Scientist and/or Head of AI (45 mins)

  • Culture fit / Q&A (possibly in person) - with co-founder & CEO (45 mins)

Top Skills

Accelerate
CUDA
DeepSpeed
FSDP
Hugging Face Transformers
InfiniBand
Kubernetes
NCCL
PyTorch
Slurm
TorchTune
The Company
11 Employees
Year Founded: 2024

What We Do

Building engineering AGI for the physical world.
