Post-Training Research Engineer

Posted 18 Days Ago
San Francisco, CA, USA
Hybrid
$200K-$275K Annually
Mid level
Software
The Role
Build in-house tools for post-training models, focusing on improving model efficiency using various ML techniques across the system stack.
Summary Generated by Built In

ABOUT BASETEN

Baseten powers mission-critical inference for the world's most dynamic AI companies, like Cursor, Notion, OpenEvidence, Abridge, Clay, Gamma, and Writer. By uniting applied AI research, flexible infrastructure, and seamless developer tooling, we enable companies operating at the frontier of AI to bring cutting-edge models into production. We're growing quickly and recently raised our $300M Series E, backed by investors including BOND, IVP, Spark Capital, Greylock, and Conviction. Join us and help build the platform engineers turn to when shipping AI products.

We are looking for an engineer with strong experience in machine learning and solid foundations in mathematics and computer science to join our growing Post-Training team at Baseten.

Custom models are instrumental to the success of Baseten customers. By inference volume, the overwhelming majority of traffic at Baseten is to and from models that have been post-trained in some way, whether through reinforcement learning, supervised fine-tuning, a recent technique from the literature, or an in-house research technique from Baseten. The Post-Training team is responsible for the success of our customers' post-trained models, and we employ a wide array of techniques to produce models that are more efficient and higher quality for the customer's specific needs than even the biggest closed-source models.

Your role as a research engineer is to build the in-house tooling that supports all of this. We care about training a wide spectrum of model architectures with a variety of techniques, efficiently and at scale. At times this involves zooming deep into a particular technical topic, but more often it involves working across the stack as a whole: systems-level concepts like Kubernetes, cgroups, storage systems, and networking topologies, as well as PyTorch distributed tensor computation and GPU kernels.
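To give a flavor of the distributed tensor computation this role touches, here is a toy, framework-free sketch of column-wise tensor parallelism: each "rank" holds one column shard of the weight matrix, multiplies the replicated input by its shard, and concatenating the per-rank outputs recovers the full matmul. Plain Python lists stand in for GPU shards; everything here is illustrative, not Baseten tooling.

```python
# Toy illustration of column-wise tensor parallelism in plain Python.
# Each "rank" holds one column shard of the weight matrix W; the input x
# is replicated, and the full output is the concatenation of shard outputs.

def matmul(x, w):
    """Multiply a row vector x (list) by a matrix w (list of rows)."""
    cols = len(w[0])
    return [sum(x[i] * w[i][j] for i in range(len(x))) for j in range(cols)]

def split_columns(w, num_ranks):
    """Shard a matrix column-wise across num_ranks 'devices'."""
    per_rank = len(w[0]) // num_ranks
    return [
        [row[r * per_rank:(r + 1) * per_rank] for row in w]
        for r in range(num_ranks)
    ]

x = [1.0, 2.0]                       # replicated input
W = [[1.0, 2.0, 3.0, 4.0],           # full 2x4 weight matrix
     [5.0, 6.0, 7.0, 8.0]]

shards = split_columns(W, num_ranks=2)
partial_outputs = [matmul(x, shard) for shard in shards]  # one per rank
output = [v for part in partial_outputs for v in part]    # "all-gather"

assert output == matmul(x, W)  # sharded result matches the full matmul
```

In a real training stack the same idea is expressed with PyTorch's distributed tensor machinery, with collective communication (an all-gather or all-reduce) replacing the list concatenation above.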

RECENT RESEARCH

  • Dense, on-policy or both?

  • Repeated kv cache for long-running agents

  • Distillation without the dark – replicating black-box on-policy distillation on Baseten

We don’t have a rigid set of skills, but here’s some of what we’re looking for:

  • A deep understanding of modern ML techniques and tools for training transformers

  • Advanced experience in a tensor/array computation library like PyTorch, TensorFlow, JAX, or similar

  • A detailed understanding of transformer training parallelism strategies like data parallelism, sharded data parallelism, tensor parallelism, pipeline parallelism, and context parallelism

  • The experience and knowledge to profile and improve the performance of a distributed GPU program in PyTorch or a similar library

  • The ability to perform roofline analysis on a transformer training setup

  • A willingness to dive into messy problems, work with researchers, derive specifications by asking important questions, and execute

  • Familiarity with HPC and distributed computing platforms like Slurm, Ray, Kubernetes, and Dask

  • Familiarity with cluster networking technology like InfiniBand, RoCE, and GPUDirect

  • Solid fundamentals in operating systems concepts like processes, files, kernel drivers, containerization, and networking protocols

  • A sense of creativity and willingness to ask difficult questions about our approach, assumptions, and tooling choices
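The roofline analysis mentioned above can be done on the back of an envelope. The sketch below compares a matmul's arithmetic intensity (FLOPs per byte of HBM traffic) against a GPU's ridge point; the hardware numbers are illustrative (roughly H100 SXM class with dense BF16), and the model assumes each operand crosses HBM exactly once, so treat it as a first-order estimate, not a profiler.

```python
# Back-of-the-envelope roofline check for a single bf16 matmul.
# Hardware numbers are illustrative (roughly H100 SXM class); swap in
# the specs of your actual accelerator.

PEAK_FLOPS = 989e12          # peak dense bf16 throughput, FLOP/s (illustrative)
PEAK_BW = 3.35e12            # HBM bandwidth, bytes/s (illustrative)
RIDGE = PEAK_FLOPS / PEAK_BW # arithmetic intensity where the roofline bends

def matmul_intensity(m, k, n, bytes_per_elem=2):
    """FLOPs per byte moved for an (m,k) @ (k,n) matmul, assuming each
    input and the output cross HBM exactly once."""
    flops = 2 * m * k * n
    bytes_moved = bytes_per_elem * (m * k + k * n + m * n)
    return flops / bytes_moved

def bound(m, k, n):
    return "compute-bound" if matmul_intensity(m, k, n) >= RIDGE else "memory-bound"

# A large square training matmul sits on the compute side of the roofline...
print(bound(8192, 8192, 8192))   # compute-bound
# ...while a batch-1 decode step (matrix-vector) is firmly memory-bound.
print(bound(1, 8192, 8192))      # memory-bound
```

The same arithmetic explains why post-trained models are often judged on both training throughput (compute-bound regime) and serving latency (memory-bound regime).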

BENEFITS

  • Competitive compensation, including meaningful equity

  • 100% coverage of medical, dental, and vision insurance for employee and dependents

  • Generous PTO policy, including a company-wide Winter Break (our offices are closed from Christmas Eve to New Year's Day!)

  • Paid parental leave

  • Company-facilitated 401(k)

  • Exposure to a variety of ML startups, offering unparalleled learning and networking opportunities

Apply now to embark on a rewarding journey in shaping the future of AI! If you are a motivated individual with a passion for machine learning and a desire to be part of a collaborative and forward-thinking team, we would love to hear from you.

At Baseten, we are committed to fostering a diverse and inclusive workplace. We provide equal employment opportunities to all employees and applicants without regard to race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, genetic information, disability, or veteran status.

We are an Equal Opportunity Employer and will consider qualified applicants with criminal histories in a manner consistent with applicable law (by example, the requirements of the San Francisco Fair Chance Ordinance, where applicable).

Top Skills

cgroups
Dask
GPUDirect
InfiniBand
JAX
Kubernetes
PyTorch
Ray
RoCE
Slurm
TensorFlow

The Company
59 Employees

What We Do

At Baseten we provide all the infrastructure you need to deploy and serve ML models performantly, scalably, and cost-efficiently. Get started in minutes, and avoid getting tangled in complex deployment processes. You can deploy best-in-class open-source models and take advantage of optimized serving for your own models. We also provide horizontally scalable services that take you from prototype to production, with light-speed inference on infra that autoscales with your traffic. Best in class doesn't mean breaking the bank: run your models on the best infrastructure without running up costs by taking advantage of our scale-to-zero feature.


