Exa is building a search engine from scratch to serve every AI agent. We build massive-scale infrastructure to crawl the web, train state-of-the-art embedding models to process it, and design highly performant vector databases in Rust to search over it. If you like compute, we also own a $5M H200 GPU cluster (soon to be 5x larger) and regularly spin up batch jobs across tens of thousands of machines.
We recently raised an $85M Series B from Benchmark, and we are rapidly building the most intelligent search engine in history. We're high-agency, low-ego, and united by the feeling that this is one of the last problems worth getting right.
As a backend engineer, you'd play a critical role in building our backend systems. We're flexible on which projects people take on, based on their skills and interests.
Desired Experience
- You have experience writing and maintaining high-throughput, low-latency systems
- You can build data processing pipelines that handle millions of documents per day
- You're comfortable optimizing a system to an exceptional degree
- You care about the problem of finding high-quality information and recognize how important this is for the world
- Plus: experience in a high-performance language (C++, Rust, etc.)
Example Projects
- Recreate Google-level keyword search over 10 billion pages in 1 month
- Build a state-of-the-art crawling system that works optimally for any website
- Build a custom vector database that searches over a billion vectors in under 100ms
This is an in-person opportunity in San Francisco. We're happy to sponsor international candidates (e.g., STEM OPT, OPT, H-1B, O-1, E-3). In addition to premium healthcare benefits (medical, dental, vision), we also offer fertility benefits and a monthly wellness stipend to all of our employees.
Skills Required
- Experience with a high-performance language (e.g., C++, Rust)
- Ability to optimize systems
- Interest in the problem of finding high-quality information
What We Do
Exa was built with a simple goal: to organize all knowledge. After several years of heads-down research, we developed novel representation learning techniques and crawling infrastructure so that LLMs can intelligently find relevant information.