Research Engineer - Data

Menlo Park, CA, USA
In-Office
$350K-$400K Annually
Mid-Level
Artificial Intelligence • Hardware • Information Technology • Robotics
From bits to atoms.
About Periodic Labs

The most important scientific discoveries of our time won’t happen in a traditional lab. We’re an AI and physical sciences company building state-of-the-art models to accelerate breakthroughs across materials, energy, and beyond. Backed by world-class investors and growing rapidly, we operate at the pace the frontier requires. Our team brings deep expertise, genuine ownership, and an insatiable drive to push the boundaries of what’s scientifically possible.

About the Role

You will build and drive the data foundation for our research efforts. This means owning data strategy end-to-end: sourcing and procuring external datasets, integrating internally generated experimental data into the training stack, and ensuring the team always has the right data — in the right shape — to train and improve frontier models.

This role sits at the intersection of data engineering, research infrastructure, and strategy. You will work closely with pretraining, midtraining, and RL researchers to understand what data the models need, then build the pipelines and systems to get it there. The work spans collecting and organizing diverse data sources, improving data quality through deduplication and preprocessing, and ensuring that new experimental results are incorporated in a structured, repeatable way that makes them useful for model development.

What You’ll Do
  • Own data strategy across the training stack — identifying gaps, evaluating new sources, and shaping the overall data roadmap in collaboration with research leads

  • Source, evaluate, and procure external datasets across scientific domains including chemistry, physics, materials science, mathematics, and lab instrumentation

  • Build and maintain robust pipelines for ingesting, processing, and versioning large-scale datasets from heterogeneous sources

  • Design and implement data quality systems including deduplication, domain classification, quality filtering, and format normalization at scale

  • Integrate internally generated experimental data — from lab instrumentation, simulations, and model outputs — into the training stack in a structured and repeatable way

  • Build tooling that makes it easy for researchers to inspect, query, and understand the data that goes into training runs

  • Instrument data pipelines with metadata, lineage tracking, and versioning so experiments are reproducible and data decisions are auditable

  • Collaborate with pretraining and midtraining engineers on token budget management, data mixing ratios, and curriculum design

  • Stay current with research on data-efficient training, synthetic data generation, and data selection methods — and bring relevant ideas into production
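To make the token-budget and data-mixing work above concrete: a common first step is converting per-domain mixture weights and a total token budget into per-domain token allocations and epoch counts (how many times each corpus is repeated). The function and numbers below are an illustrative sketch, not the company's actual tooling.

```python
# Minimal sketch: turn target mixture weights plus a total token budget
# into per-domain token allocations and epoch counts. All names and
# numbers here are hypothetical.

def plan_mixture(domain_tokens: dict[str, int],
                 weights: dict[str, float],
                 budget: int) -> dict[str, tuple[int, float]]:
    """Return {domain: (allocated_tokens, epochs_over_corpus)}."""
    total_w = sum(weights.values())
    plan = {}
    for name, avail in domain_tokens.items():
        share = weights[name] / total_w
        alloc = int(budget * share)
        plan[name] = (alloc, alloc / avail)  # epochs > 1 means the corpus repeats
    return plan

plan = plan_mixture(
    domain_tokens={"web": 500_000_000, "papers": 50_000_000},
    weights={"web": 0.7, "papers": 0.3},
    budget=100_000_000,
)
# "papers" gets 30M of its 50M tokens: 0.6 epochs, no repetition needed
```

Epoch counts above 1.0 flag domains that would be repeated to hit their target weight, which is exactly the kind of signal curriculum and mixing discussions tend to revolve around.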

You Will Thrive in This Role If You Have
  • Experience building large-scale data pipelines for LLM pretraining or midtraining, including web-scale or scientific corpora

  • Expertise in data quality techniques such as exact and fuzzy deduplication (MinHash, SimHash), perplexity filtering, classifier-based quality scoring, and PII scrubbing

  • Experience working with diverse scientific data formats — papers, patents, structured databases, simulation outputs, lab instrument exports — and normalizing them for model consumption

  • Experience with distributed data processing frameworks such as Apache Spark, Ray, or Dask at multi-terabyte to petabyte scale

  • Familiarity with dataset versioning, lineage tracking, and reproducibility tooling such as DVC, Delta Lake, or custom solutions

  • Experience sourcing and evaluating third-party datasets, including licensing considerations and quality assessment

  • Strong Python engineering skills and comfort building production-quality tooling in a research environment

  • Experience collaborating directly with ML researchers to translate data needs into pipeline requirements and back again

  • A research-oriented mindset — you run experiments on data, measure outcomes, and iterate with rigor
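As a toy illustration of the fuzzy-deduplication techniques named above, here is a minimal MinHash signature in pure Python. A production pipeline would add LSH banding and run distributed (e.g. on Spark or Ray); everything here is a self-contained sketch, not a reference implementation.

```python
# Toy MinHash for fuzzy deduplication (stdlib only). Signature agreement
# between two documents estimates the Jaccard similarity of their
# word-shingle sets; near-duplicates score high.
import hashlib

def shingles(text: str, k: int = 3) -> set[str]:
    toks = text.lower().split()
    return {" ".join(toks[i:i + k]) for i in range(max(1, len(toks) - k + 1))}

def minhash(text: str, num_perm: int = 64) -> list[int]:
    sigs = []
    for seed in range(num_perm):  # each seed acts as one hash "permutation"
        sigs.append(min(
            int.from_bytes(hashlib.blake2b(f"{seed}:{s}".encode(),
                                           digest_size=8).digest(), "big")
            for s in shingles(text)
        ))
    return sigs

def est_jaccard(a: list[int], b: list[int]) -> float:
    return sum(x == y for x, y in zip(a, b)) / len(a)

doc1 = "the quick brown fox jumps over the lazy dog"
doc2 = "the quick brown fox jumped over the lazy dog"
sim = est_jaccard(minhash(doc1), minhash(doc2))
# one changed word: signatures still agree on a large fraction of slots
```

At corpus scale, the signatures are bucketed with locality-sensitive hashing so that only candidate near-duplicate pairs are compared, rather than all pairs.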

Especially Strong Candidates May Also Have
  • Experience curating scientific datasets specifically for domain-adaptive continued pretraining or instruction tuning

  • Familiarity with synthetic data generation methods, including model-generated data pipelines and quality verification

  • A background in a physical science or engineering discipline that informs how you think about scientific data quality and structure

  • Experience with multimodal data — integrating text, structured numerical data, molecular representations, or spectral data into unified training pipelines
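As a hedged sketch of what a unified pipeline for multimodal scientific data can look like: serialize free text plus structured numerical fields (compositions, spectra) into one flat training document. The schema and tag names below are hypothetical, not an established format.

```python
# Illustrative only: flatten a heterogeneous scientific record into a
# single text document for model consumption. The "<composition>" and
# "<spectrum>" markers are made-up conventions for this sketch.

def to_training_doc(record: dict) -> str:
    parts = [record["abstract"].strip()]
    if "composition" in record:  # e.g. {"Fe": 0.7, "Ni": 0.3}
        comp = " ".join(f"{el}:{frac}" for el, frac in
                        sorted(record["composition"].items()))
        parts.append(f"<composition> {comp}")
    if "spectrum" in record:  # list of (wavelength_nm, intensity) pairs
        pts = " ".join(f"{w:.0f},{i:.3f}" for w, i in record["spectrum"])
        parts.append(f"<spectrum> {pts}")
    return "\n".join(parts)

doc = to_training_doc({
    "abstract": "XRD study of an FeNi alloy.",
    "composition": {"Ni": 0.3, "Fe": 0.7},
    "spectrum": [(400.0, 0.121), (401.0, 0.134)],
})
```

The design choice worth noting is determinism: sorting keys and fixing numeric precision means the same record always serializes identically, which keeps deduplication and lineage tracking honest.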

Mechanics

Minimum education: Bachelor’s degree, or an equivalent combination of education, training, or experience

Location: Our lab is in Menlo Park. We prefer candidates based in Menlo Park or San Francisco, but can be flexible depending on the role.

Compensation: The annual base compensation range for this role is $350,000-$400,000, commensurate with experience.

Visa sponsorship: Yes. We sponsor visas and will do everything we can to assist with the process, with support from our legal team.

We’re building a team of the world’s best — the scientists, engineers, and problem-solvers who don’t just follow the frontier, they define it. If you’re driven to bring AI to life in the physical world and make discoveries that have never been made before, you belong here.


The Company
32 Employees
Year Founded: 2025

What We Do

We're building AI scientists and the autonomous laboratories for them to operate.
