Tavily is building the search engine for AI agents. We believe the future of work will be led by AI agents, and that requires restructuring how the web is accessed. Our Search API provides web access for AI agents, enabling real-time internet search optimized for LLMs and Retrieval-Augmented Generation (RAG).
We are backed by leading investors and serve developers and enterprises worldwide. Our team is small, fast-moving, and ambitious — we ship quickly, iterate constantly, and care deeply about impact.
What You’ll Do
Be the expert in designing and scaling Tavily’s core web intelligence systems. You’ll:
Design and operate distributed, high-throughput data acquisition pipelines that power Tavily’s products.
Build and maintain backend services (Node.js / Python) and automation frameworks.
Implement intelligent orchestration and browser fingerprinting / identity-simulation techniques so automation adapts and behaves in a human-like way in dynamic environments (a flavor of this work is sketched after this list).
Drive reliability, performance, observability, and incident response across production systems.
Partner with product/AI teams to wire data workflows into downstream applications and developer experiences.
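To make the responsibilities above concrete, here is a minimal sketch of concurrent page acquisition with Playwright: a fresh browser context per task as a simple identity profile, a semaphore to cap concurrency, and randomized pacing so the automation reads less mechanically. All names, user agents, and parameters here are illustrative assumptions, not Tavily internals.

```python
import asyncio
import random

from playwright.async_api import async_playwright

UA_POOL = [  # hypothetical identity profiles
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36",
]

async def fetch(browser, sem: asyncio.Semaphore, url: str) -> str:
    async with sem:  # cap global concurrency
        # Fresh context per task: isolated cookies, user agent, viewport.
        context = await browser.new_context(
            user_agent=random.choice(UA_POOL),
            viewport={"width": random.randint(1280, 1920), "height": 900},
        )
        page = await context.new_page()
        await page.goto(url, wait_until="domcontentloaded")
        # Small randomized dwell time instead of hammering the target.
        await asyncio.sleep(random.uniform(0.5, 2.0))
        html = await page.content()
        await context.close()
        return html

async def crawl(urls: list[str], concurrency: int = 5) -> list[str]:
    sem = asyncio.Semaphore(concurrency)
    async with async_playwright() as p:
        browser = await p.chromium.launch(headless=True)
        try:
            return await asyncio.gather(*(fetch(browser, sem, u) for u in urls))
        finally:
            await browser.close()

if __name__ == "__main__":
    pages = asyncio.run(crawl(["https://example.com"]))
    print(len(pages[0]))
```

A production pipeline would layer retries, per-domain rate limits, and observability on top of this skeleton; the sketch only shows the concurrency and identity-isolation pattern.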
What You’ll Bring
2–5 years of professional software engineering experience.
Comfortable in a fast-paced startup environment with high ownership.
Solid backend fundamentals in Node.js, Python (FastAPI), SQL, MongoDB (or similar).
Hands-on experience with browser automation and scraping tools (e.g., Playwright, Puppeteer, Cheerio) and with concurrency at scale.
Working knowledge of browser internals, fingerprinting/identity profiles, and resilient, human-like automation patterns.
Strong coding practices: testing, code review, secure-by-default design, and production-first mindset.
Familiarity with cloud infra, containerization (Docker), Kubernetes, CI/CD.
Experience integrating AI/ML components or retrieval systems into production pipelines (a minimal example follows this list).
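As a small illustration of that last point, here is a sketch of wiring a retrieval component behind a FastAPI endpoint. `retrieve_documents` is a hypothetical stand-in for a real retrieval backend (a vector store or search index client), not an actual Tavily service.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class SearchRequest(BaseModel):
    query: str
    max_results: int = 5

async def retrieve_documents(query: str, limit: int) -> list[dict]:
    # Placeholder: swap in a real vector store or search index client here.
    return [{"title": f"stub result for {query!r}", "rank": i} for i in range(limit)]

@app.post("/search")
async def search(req: SearchRequest) -> dict:
    docs = await retrieve_documents(req.query, limit=req.max_results)
    return {"query": req.query, "results": docs}
```

Run it with `uvicorn app:app` and POST a JSON body like `{"query": "agents", "max_results": 3}` to `/search`.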
What We Do
Search. Extract. Crawl. The web access stack built for builders, by builders.
Tavily powers the next generation of agents with a suite of tools for real-time Search, structured data Extraction, and fully rendered Crawling — everything agents need to access and reason over the live web.
Purpose-built for RAG, autonomy, and production-grade agent systems.
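For a feel of the developer experience, here is a short sketch using the public tavily-python SDK. Method names follow its published client; treat the exact parameters as assumptions and check the current docs.

```python
from tavily import TavilyClient

client = TavilyClient(api_key="tvly-YOUR-KEY")

# Real-time Search, with results shaped for LLM / RAG consumption.
search = client.search(query="open-source agent frameworks", max_results=5)
for r in search["results"]:
    print(r["title"], "->", r["url"])

# Structured Extraction of page content from the URLs we just found.
extracted = client.extract(urls=[r["url"] for r in search["results"][:2]])
```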