The Deployed Engineering team works directly with companies building and running AI agents in production, helping turn ideas and prototypes into systems teams can rely on.
This is a hands-on, highly technical team that partners closely with customer engineers across the full lifecycle, from pre-sales evaluations to post-deployment advisory work. The focus is on achieving the technical win, co-designing agent architectures, and helping customers operate agents reliably at scale using the LangChain suite.
Deployed Engineers sit at the intersection of engineering, product, and go-to-market, shaping how LangChain is adopted in the field and feeding real-world insights back into the platform.
About the Role
You'll work on some of the hardest problems in applied AI: not demos, not research, but systems that real teams depend on in production. The feedback loop is fast, the impact is visible, and the work you do directly shapes how AI agents are built in the real world.
What You'll Do
Co-architect and co-build production AI agents with customer engineering teams
Own the technical win in pre-sales by designing POCs, answering deep technical questions, and guiding evaluations
Help customers deploy and operate agent-based applications such as conversational agents, research agents, and multi-step workflows
Advise customers post-sale on architecture, best practices, and roadmap-level decisions
Run technical demos, trainings, and workshops for developer audiences
Surface field feedback and contribute reusable patterns, cookbooks, and example code that scale across customers
Occasionally contribute code upstream when it meaningfully improves customer outcomes
What We're Looking For
3+ years in a relevant technical role (software engineering, customer engineering, solutions engineering, founding/product engineering), ideally at a startup or scale-up
Strong Python, JavaScript, and systems fundamentals
Have designed agent-based or LLM-powered applications beyond simple API calls, including multi-step workflows, orchestration, and failure handling
Are comfortable working directly with customers during POCs, architecture reviews, and technical evaluations
Can explain technical tradeoffs clearly and build trust with developer audiences
Take responsibility for outcomes, not just recommendations
Have a bias toward action and enjoy figuring things out as you go
Are excited about operating AI agents in production, not just building demos
Nice to Have
Have deployed AI agents in production, especially using LangChain, LangGraph, or similar frameworks
Have worked with LLM evaluation, observability, or guardrails
Have experience with cloud environments (AWS, GCP, Azure), containers, and basic Kubernetes concepts
Have shipped and operated production software and are comfortable owning systems under real-world constraints
Annual OTE range: $150,000–$250,000 USD
What We Do
LangChain is the platform for building reliable agents. Our products power top engineering teams — from fast-growing startups like Lovable, Mercor, and Clay to global brands including AT&T, Home Depot, and Klarna. LangGraph is a low-level orchestration framework for building controllable agents and long-running workflows. It’s used in production by teams at Replit, Uber, LinkedIn, GitLab, and more. LangSmith offers unified evaluation and monitoring to help developers debug, evaluate, and improve their agents at scale. LangChain provides hundreds of integrations and composable components, making it easy to connect with the latest models, tools, and databases — with minimal engineering overhead. Together, these tools help teams build, deploy, and manage enterprise-grade agents, faster.