Exa is building a search engine from scratch to serve every AI application. We build massive-scale infrastructure to crawl the web, train state-of-the-art embedding models to index it, and develop high-performance vector databases in Rust to search over it. We also own a $5m H200 GPU cluster and routinely run batch jobs across tens of thousands of machines. This isn't your average startup :)
Our Infrastructure Team builds the underlying tooling and infrastructure that powers all of Exa's systems. Basically, we need more infra engineers to build the machine that builds the machine, so that we can move as fast as possible as an engineering org. That could mean building GPU cluster orchestration on Kubernetes, map-reduce batch jobs on Ray, or the best observability tooling in the world.
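For a flavor of the Ray side of that work, here is a minimal, hypothetical sketch of a map-reduce style batch job. The shard data and token-counting workload are illustrative only, not Exa's actual pipeline, and real jobs fan out across thousands of workers rather than two shards.

import ray

ray.init()  # connect to the local or cluster Ray runtime

@ray.remote
def map_shard(docs):
    # Map step: count tokens in one shard of documents (toy workload, not Exa's real job).
    counts = {}
    for doc in docs:
        for token in doc.split():
            counts[token] = counts.get(token, 0) + 1
    return counts

def reduce_counts(partials):
    # Reduce step: merge the per-shard counts on the driver.
    total = {}
    for partial in partials:
        for token, n in partial.items():
            total[token] = total.get(token, 0) + n
    return total

shards = [["exa builds web search"], ["search infrastructure for ai applications"]]
futures = [map_shard.remote(shard) for shard in shards]  # fan out across workers
print(reduce_counts(ray.get(futures)))                   # gather results and merge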
Desired Experience
You have experience designing and operating large-scale infrastructure: GPU clusters, large Kubernetes clusters, or cloud batch job systems
You bring an obsessive mindset — always thinking about reliability, observability, and optimization across the entire stack.
Example Projects
Build the Kubernetes orchestration layer for a $20m GPU cluster
Scale our AWS batch job system to handle map-reduce jobs across tens of thousands of machines
Design GPU scheduling software to maximize cluster utilization (see the toy sketch after this list)
Build observability into our production systems
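As a toy illustration of the GPU scheduling project above, here is a hypothetical sketch of a greedy best-fit packing pass that places jobs onto nodes by free GPU count. Node names, job names, and sizes are made up, and a real scheduler also has to handle topology, preemption, and fairness.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    free_gpus: int
    jobs: list = field(default_factory=list)

def schedule(jobs, nodes):
    # Greedy best-fit: place each job (largest first) on the node with the fewest
    # free GPUs that can still hold it, leaving bigger gaps open for bigger jobs.
    pending = []
    for job_name, gpus_needed in sorted(jobs, key=lambda j: -j[1]):
        candidates = [n for n in nodes if n.free_gpus >= gpus_needed]
        if not candidates:
            pending.append(job_name)
            continue
        target = min(candidates, key=lambda n: n.free_gpus)
        target.free_gpus -= gpus_needed
        target.jobs.append(job_name)
    return pending

nodes = [Node("h200-node-0", 8), Node("h200-node-1", 8)]
jobs = [("embed-train", 8), ("crawl-index", 4), ("eval", 2), ("ad-hoc", 3)]
unplaced = schedule(jobs, nodes)
for node in nodes:
    print(node.name, node.jobs, f"{node.free_gpus} GPUs free")
print("unplaced:", unplaced)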
This is an in-person opportunity in San Francisco. We are open to sponsoring international candidates (e.g., STEM OPT, OPT, H1B, O1, E3).
What We Do
Exa was built with a simple goal — to organize all knowledge. After several years of heads-down research, we developed novel representation learning techniques and crawling infrastructure so that LLMs can intelligently find relevant information.


