Research Engineer, Agentic AI Evals

About HUD

HUD (YC W25) is developing agentic evals for Computer Use Agents (CUAs) that browse the web. Our CUA Evals framework is the first comprehensive evaluation tool for CUAs.

Our Mission: People don't actually know if AI agents are working. To make AI agents work in the real world, we need detailed evals for a huge range of tasks.

We're backed by Y Combinator and work closely with frontier AI labs to provide agent evaluation infrastructure at scale.

About the role

We're looking for a research engineer to help build out task configs and environments for evaluation datasets on HUD's CUA evaluation framework.

Responsibilities
  • Build out environments for HUD's CUA evaluation datasets, including evals for safety red-teaming, general business tasks, long-horizon agentic tasks, etc. (a rough sketch of a single task follows this list)

  • Deliver custom CUA datasets and evaluation pipelines requested by clients

  • Contribute to improving the HUD evaluation harness, depending on your interests, skills, and current organizational priorities. (Optional, but highly valued!)
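
To make the first responsibility concrete: in practice, an evaluation task pairs an environment setup, an instruction for the agent, and a programmatic success check that a harness can score automatically. The sketch below is purely illustrative and does not use HUD's actual SDK; the class, field, and function names are hypothetical.

```python
# Illustrative only: NOT HUD's actual API. This shows the general shape of a
# single CUA evaluation task: environment setup + agent instruction + a
# success check that a harness can score automatically.
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class CUATask:
    """One hypothetical evaluation task for a computer-use agent."""
    task_id: str
    instruction: str                           # what the agent is asked to do
    check: Callable[[dict], bool]              # success predicate over the final environment state
    setup: dict = field(default_factory=dict)  # e.g. docker image, seed data, start URL


def invoice_total_matches(state: dict) -> bool:
    # Success if the agent filled the invoice form with the expected total.
    return state.get("invoice_total") == "1,250.00"


task = CUATask(
    task_id="business/invoice-entry-001",
    instruction="Open the billing app and create an invoice for 1,250.00 USD for Acme Corp.",
    check=invoice_total_matches,
    setup={"image": "mock-billing-app:latest", "start_url": "http://localhost:8080"},
)

if __name__ == "__main__":
    # In a real harness the agent would drive the environment; here we only
    # show that the success check is an ordinary, unit-testable function.
    final_state = {"invoice_total": "1,250.00"}
    print(task.task_id, "passed" if task.check(final_state) else "failed")
```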

Experience

Technical Skills

  • Proficiency in Python, Docker, and Linux environments

  • React experience for frontend development

  • Production-level software development experience preferred

  • Strong technical aptitude and demonstrated problem-solving ability

Strong candidates may have:

  • Startup experience at early-stage technology companies and the ability to work independently in fast-paced environments

  • Strong communication skills for remote collaboration across time zones

  • Familiarity with current AI tools and LLM capabilities

  • Understanding of safety and alignment considerations in AI systems

  • Evidence of rapid learning and adaptability in technical environments (e.g. programming competitions)

  • Hands-on experience with or contributions to LLM evaluation frameworks (EleutherAI's lm-evaluation-harness, Inspect, or similar)

  • Built custom evaluation pipelines or datasets

  • Worked with agentic or multimodal AI evaluation systems

We prioritize technical aptitude and learning potential over years of experience. Motivated candidates are encouraged to apply even if they don't meet all criteria.

Representative projects:
  • Creating and solving challenging competitive programming problem sets

  • Curating large, high-quality datasets, especially for research and evaluation of multimodal AI agents

  • Designing complex, functional full-stack applications. Bonus points if they have users or adopters.

We prioritize contributions that demonstrate both quality and quantity, such as building out large, high-quality datasets. Imagine making roughly ten small puzzles in mock web environments per day (a toy example of one such puzzle is sketched below).
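
For a concrete, if purely illustrative, sense of what one such puzzle could look like, here is a minimal sketch: a tiny mock web page built with Flask plus a programmatic success check. None of this is HUD code; the page content, routes, and expected code are hypothetical.

```python
# Purely illustrative: a tiny mock web environment for one puzzle.
# The agent must read the confirmation code shown on the page and submit it
# in the form; the puzzle counts as solved when the submitted code matches.
from flask import Flask, request

app = Flask(__name__)
EXPECTED_CODE = "ZX-4421"     # hypothetical ground truth for this puzzle
solved = {"done": False}      # flag a harness could inspect after the agent runs


@app.route("/")
def puzzle():
    # The code is embedded in the page text; a CUA has to locate and copy it.
    return (
        f"<p>Your shipment confirmation code is <b>{EXPECTED_CODE}</b>.</p>"
        '<form method="post" action="/submit">'
        '<input name="code"><button type="submit">Confirm</button></form>'
    )


@app.route("/submit", methods=["POST"])
def submit():
    solved["done"] = request.form.get("code") == EXPECTED_CODE
    return "Thanks!" if solved["done"] else "Wrong code."


def check_success() -> bool:
    # An evaluation harness would call a check like this once the agent finishes.
    return solved["done"]


if __name__ == "__main__":
    app.run(port=8080)
```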

Team & Company Details
  • Team Size: ~15 people currently, mostly full-time and in person, with some remote.

  • Our team: Includes 4 international Olympiad medalists (IOI, ILO, IPhO), serial AI startup founders, and researchers with publications at ICLR, NeurIPS, and other venues.

  • Company stage: We have raised $2 million in seed funding and are seeing strong demand and revenue growth beyond that. We are scaling quickly and profitably to meet demand.

Logistics
  • Employment: Full-time preferred, but we are willing to consider part-time or internship arrangements for exceptional candidates.

  • Location: Fully remote-friendly; we already have several full-time, 100% remote hires. If you're in the San Francisco Bay Area or Singapore, we have an office you can work from. We prefer applicants who can attend meetings during Pacific Time (UTC-7/-8) or China/Singapore Time (UTC+8) working hours.

  • Visa Sponsorship: We provide relocation and visa support for strong full-time candidates moving to the USA or Singapore. Part-time, contract, and internship arrangements are fully remote (which keeps things simpler anyway).

  • Timeline: Applications are accepted on a rolling basis. The process involves one initial call, one five-hour take-home assignment, and one paid, week-long work trial before a final offer.

Due to high volume, we may not respond to every application, but feel free to contact us at [email protected] or through another channel if you think we missed your application!

Top Skills

Docker
Linux
Python
React

The Company
HQ: San Francisco, CA
10 Employees
Year Founded: 2025

What We Do

The all-in-one platform for evaluating computer-use and browser-use AI agents.
