Exa is building a search engine from scratch to serve every AI application. We build massive-scale infrastructure to crawl the web, train state-of-the-art embedding models to index it, and develop highly performant vector databases in Rust to search over it. We also own a $5M H200 GPU cluster and routinely run batch jobs across tens of thousands of machines. This isn't your average startup :)
On the ML team, we train foundational models for search. Our goal is to build systems that can instantly filter the world's knowledge to exactly what you want, no matter how complex your query. Basically, put the web into an extremely powerful database.
We're looking for an ML research engineer to train embedding models for perfect search over the web. The role involves dreaming up novel transformer-based search architectures, creating datasets, creating evals, beating our internal SOTA, and repeating.
Desired Experience
You have graduate-level ML experience (or are an exceptionally strong undergrad)
You can code up a transformer from scratch in PyTorch
You like creating large-scale datasets and diving deeply into the data
You care about the problem of finding high quality knowledge and recognize how important this is for the world
Example Projects
Pre-training -- Train a hundred-billion-parameter model
Finetuning -- Build an RLAIF pipeline for search
Dream up a novel architecture for search in the shower, then code it up and beat our best model's top score
Build an eval system that answers the question "how do we know we're advancing our search quality?" (this is an incredibly difficult question to answer)
This is an in-person opportunity in San Francisco. We're happy to sponsor international candidates (e.g., STEM OPT, OPT, H-1B, O-1, E-3).
What We Do
Exa was built with a simple goal — to organize all knowledge. After several years of heads-down research, we developed novel representation learning techniques and crawling infrastructure so that LLMs can intelligently find relevant information.