Research, Post-Training Data

San Francisco, CA, USA
In-Office
350K-475K Annually
Mid level

Thinking Machines Lab's mission is to empower humanity through advancing collaborative general intelligence. We're building a future where everyone has access to the knowledge and tools to make AI work for their unique needs and goals. 

We are scientists, engineers, and builders who’ve created some of the most widely used AI products, including ChatGPT and Character.ai, open-weights models like Mistral, as well as popular open source projects like PyTorch, OpenAI Gym, Fairseq, and Segment Anything.

About the Role

Post-training research sits at the core of our roadmap: it is the critical bridge between raw model intelligence and a system that is actually useful, safe, and collaborative for humans.

Post-training data research sits at the intersection of human insight and machine learning. Our work combines human and synthetic data techniques, along with other innovative approaches, to capture the nuances of human behavior and use them to steer models. We research and model the mechanisms that create value for people, so that we can explain, predict, and optimize for human preferences, behaviors, and satisfaction. Our goal is to turn research ideas into data by scoping well-run data labeling or collection campaigns and understanding the science behind what makes the data high quality and useful for training our models. We also develop and evaluate quantitative metrics that measure the success and impact of our data and training interventions.

Beyond execution, we explore new paradigms for human-AI interaction and scalable oversight, experimenting with how humans can best supervise, guide, and collaborate with models. It’s interdisciplinary work that blends research, data operations, and technical implementation to advance the frontier of aligned, human-centered AI systems.

This role blends fundamental research and practical engineering; we do not distinguish between the two internally. You will be expected to write high-performance code and read technical reports. It’s an excellent fit for someone who enjoys both deep theoretical exploration and hands-on experimentation, and who wants to shape the foundations of how AI learns.

Note: This is an "evergreen role" that we keep open on an ongoing basis to express interest in this research area. We receive many applications, and there may not always be an immediate role that aligns perfectly with your experience and skills. Still, we encourage you to apply. We continuously review applications and reach out to applicants as new opportunities open. You are welcome to reapply as you gain more experience, but please avoid applying more than once every six months. You may also find that we put up postings for individual roles to meet separate, project- or team-specific needs. In those cases, you're welcome to apply to those directly in addition to an evergreen role.

What You’ll Do
  • Design and execute data collection and synthesis strategies for post-training by combining human feedback, preference data, and synthetic examples to guide model behavior.
  • Develop pipelines and frameworks for scalable, high-quality human labeling, model-assisted labeling, and synthetic data generation.
  • Research and model human preferences and behavior, creating data-driven methods to improve reasoning, truthfulness, and helpfulness.
  • Iterate on evals: post-training involves a never-ending loop of defining a set of evaluations, optimizing them, and then realizing your existing evals don’t capture what matters. You’ll be responsible both for making numbers go up and for making sure the numbers are meaningful.
  • Design and evaluate metrics and benchmarks that measure data quality, alignment, and the real-world impact of post-training interventions.
  • Scale and explore: post-training will involve a combination of scaling the existing methodologies and developing new ones.
  • Publish and present research that moves the entire community forward. Share code, datasets, and insights that accelerate progress across industry and academia.
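To give a flavor of the preference-modeling work described above, here is a minimal sketch (for intuition only; it is not the lab's actual method, and the scores are hypothetical numbers, not real model outputs). Reward models trained on human preference data commonly use a pairwise Bradley-Terry objective, which pushes the model to score the human-preferred response above the rejected one:

```python
import math

def bradley_terry_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Negative log-likelihood of the human's choice under the Bradley-Terry
    model: P(chosen beats rejected) = sigmoid(reward_chosen - reward_rejected).
    Minimizing this pushes the reward model to rank preferred responses higher."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Illustrative reward scores (hypothetical, for intuition only):
confident_correct = bradley_terry_loss(2.0, -1.0)  # small loss: ranking agrees with the human
confident_wrong = bradley_terry_loss(-1.0, 2.0)    # large loss: ranking contradicts the human
indifferent = bradley_terry_loss(0.0, 0.0)         # log(2): the model can't tell them apart
```

In practice the two rewards come from a learned model scoring full responses, and the loss is averaged over batches of labeled comparison pairs; much of the research challenge is in the data itself, i.e., collecting comparisons that actually reflect the preferences you want the model to learn.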
Skills and Qualifications

Minimum qualifications:

  • Strong engineering skills, with the ability to contribute code and debug issues in complex codebases.
  • Experience with data curation, human feedback, or synthetic data generation for large language models or similar systems.
  • Ability to design, run, and interpret experiments with scientific rigor and clarity.
  • Proficiency in Python and familiarity with at least one deep learning framework (e.g., PyTorch, TensorFlow, or JAX). Comfortable with debugging distributed training and writing code that scales.
  • Bachelor’s degree or equivalent experience in Computer Science, Machine Learning, Physics, Mathematics, or a related discipline with strong theoretical and empirical grounding.
  • Clarity in communication and an ability to explain complex technical concepts in writing.

Preferred qualifications (we encourage you to apply even if you meet only some of these):

  • A strong grasp of probability, statistics, and ML fundamentals. You can look at experimental data and distinguish between real effects, noise, and bugs.
  • Prior experience with RLHF, RLAIF, preference modeling, or reward learning for large models.
  • Experience managing or analyzing human data collection campaigns or large-scale annotation workflows.
  • Research or engineering contributions in alignment, data-centric AI, or human-AI collaboration.
  • Familiarity with synthetic data pipelines, active learning, or model-assisted labeling.
  • PhD in Computer Science, Machine Learning, Physics, Mathematics, or a related discipline with strong theoretical and empirical grounding; or, equivalent industry research experience.
Logistics
  • Location: This role is based in San Francisco, California. 
  • Compensation: Depending on background, skills and experience, the expected annual salary range for this position is $350,000 - $475,000 USD.
  • Visa sponsorship: We sponsor visas. While we can't guarantee success for every candidate or role, if you're the right fit, we're committed to working through the visa process together.
  • Benefits: Thinking Machines offers generous health, dental, and vision benefits, unlimited PTO, paid parental leave, and relocation support as needed.

As set forth in Thinking Machines' Equal Employment Opportunity policy, we do not discriminate on the basis of any protected group status under any applicable law.

Top Skills

JAX
Python
PyTorch
TensorFlow
The Company
HQ: San Francisco, CA
91 Employees

What We Do

Thinking Machines Lab is an artificial intelligence research and product company. We're building a future where everyone has access to the knowledge and tools to make AI work for their unique needs and goals.

While AI capabilities have advanced dramatically, key gaps remain. The scientific community's understanding of frontier AI systems lags behind rapidly advancing capabilities. Knowledge of how these systems are trained is concentrated within the top research labs, limiting both the public discourse on AI and people's ability to use AI effectively. And, despite their potential, these systems remain difficult for people to customize to their specific needs and values. To bridge these gaps, we're building Thinking Machines Lab to make AI systems more widely understood, customizable, and generally capable.
