Research Engineer – Benchmarking, Evals & Failure Analysis

San Francisco, CA, USA
In-Office
Mid level
Artificial Intelligence • Software
We use AI to understand human ability and match talent with the opportunities they're best suited for.
The Role
As a Research Engineer, you will focus on benchmarking, evaluations, and failure analysis of AI models, enhancing model performance through systematic analysis and collaboration with teams.
Summary Generated by Built In
About Mercor

Mercor's mission is to organize human intelligence to power the AI economy. We partner with leading AI labs and enterprises to provide the human intelligence essential to AI development. Our vast talent network trains frontier AI models in the same way teachers teach students: by sharing knowledge, experience, and context that can't be captured in code alone. Today, more than 30,000 experts in our network collectively earn over $2 million a day.

Mercor is creating a new category of work where expertise powers AI advancement. Achieving this requires an ambitious, fast-paced, and deeply committed team. You'll work alongside researchers, operators, and AI companies shaping the systems that are redefining society. Mercor is a profitable Series C company valued at $10 billion. We work in person five days a week in our San Francisco, NYC, or London offices.

About the Role

As a Research Engineer at Mercor, you’ll work at the intersection of engineering and applied AI research. You’ll own benchmarking pipelines, evaluation systems, and failure analysis workflows that directly inform how we train and improve frontier language models.
Your work will define how we measure tool use, agentic behavior, and real-world reasoning. You’ll design and run evals, build rubrics and scorers, and turn failure analysis into actionable improvements for post-training, RLVR, and data pipelines.

What You’ll Do
  • Benchmarking: Design, implement, and maintain benchmarks and metrics for tool use, agentic behavior, and real-world reasoning; ensure benchmarks scale with training and stay aligned with product and research goals.

  • Evaluation systems: Build and operate LLM evaluation systems end to end (runs, scoring, dashboards, and reporting) so that researchers and applied AI teams can track model performance and compare runs at scale.

  • Failure analysis: Run systematic failure analysis on model outputs (e.g., wrong tool use, reasoning errors, safety/alignment issues); categorize failure modes, quantify prevalence, and feed findings into reward design, data curation, and benchmark design.

  • Rubrics and evaluators: Create and refine rubrics, automated evaluators, and scoring frameworks that drive training and evaluation decisions; balance rigor with scalability (human vs. model-as-judge, calibration, agreement).

  • Data quality and usability: Quantify data usability, quality, and impact on key benchmarks; use evals and failure analysis to guide data generation, augmentation, and curation.

  • Cross-team collaboration: Work with AI researchers, applied AI teams, and data producers to align evals with training objectives and to prioritize benchmarks and failure analyses that matter most.

  • Ownership in a fast-paced environment: Operate in a high-iteration research setting with strong ownership of benchmarks, evals, and failure-analysis workflows.
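To make the workflow above concrete, here is a minimal Python sketch of a rubric-based eval harness with failure-mode tagging and prevalence counts. All names, the tool-use rubric, and the data shapes are illustrative assumptions, not Mercor's actual stack; a real system would call a model API and likely a model-as-judge rather than the hard-coded checks shown here.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class EvalCase:
    """One benchmark item: a prompt and the tool the model is expected to call."""
    prompt: str
    expected_tool: str

@dataclass
class ModelOutput:
    """A (stubbed) model response: which tool it called and its final answer."""
    tool_called: str
    answer: str

def score_case(case: EvalCase, output: ModelOutput) -> dict:
    """Score one case against a simple rubric and tag any failure modes."""
    failure_modes = []
    if output.tool_called != case.expected_tool:
        failure_modes.append("wrong_tool")
    if not output.answer.strip():
        failure_modes.append("empty_answer")
    # A case passes only if no failure mode was tagged.
    return {"passed": not failure_modes, "failure_modes": failure_modes}

def run_eval(cases: list, outputs: list) -> tuple:
    """Aggregate pass rate and failure-mode prevalence across a run."""
    results = [score_case(c, o) for c, o in zip(cases, outputs)]
    pass_rate = sum(r["passed"] for r in results) / len(results)
    prevalence = Counter(m for r in results for m in r["failure_modes"])
    return pass_rate, prevalence
```

In practice the aggregate pass rate feeds dashboards and run-to-run comparisons, while the per-mode prevalence counts are what get fed back into reward design, data curation, and benchmark revisions.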

What We’re Looking For
  • Strong applied research background, with a focus on model evaluation, benchmarking, and/or failure analysis.

  • Strong coding skills and hands-on experience with ML models and evaluation code.

  • Solid grasp of data structures, algorithms, and backend systems.

  • Comfort with APIs, SQL/NoSQL, and cloud platforms for running and storing eval results.

  • Ability to reason about model behavior, experimental results, and data quality from evals and failure analyses.

  • Excitement to work in person in San Francisco five days a week in a high-intensity, high-ownership environment.

Nice To Have
  • Industry experience on a post-training or evaluation/benchmarking team (highest priority).

  • Publications at top-tier venues (NeurIPS, ICML, ACL), especially in evaluation or benchmarking.

  • Experience building or running LLM evaluations, benchmarks, or failure-analysis pipelines.

  • Experience with synthetic data generation, rubric design, or RL-style workflows that use evals for reward shaping.

  • Work samples or code (e.g., eval frameworks, benchmark suites, failure-analysis reports or tooling) that demonstrate relevant skills.

Benefits
  • Generous equity grant vested over 4 years

  • A $10K housing bonus (if you live within 0.5 miles of our office)

  • A $1.5K monthly stipend for meals

  • Free Equinox membership

  • Health insurance

 




The Company
HQ: San Francisco, California
2,217 Employees
Year Founded: 2023
