Senior Forward Deployed Engineer

Posted 11 Days Ago
6 Locations
Hybrid
Senior level
Artificial Intelligence • Software
Open Source LLM Engineering Platform
The Role
Serve as the primary technical partner for 10–20 strategic accounts: onboard, drive production readiness, provide architecture guidance for LLM systems, lead escalations, build prototypes and reusable templates, and convert customer signals into product, docs, and GTM enablement.
Summary Generated by Built In
About Langfuse

Langfuse is an open-source LLM engineering platform that helps teams build useful AI applications via tracing, evaluation, and prompt management (mission, product). We are now part of ClickHouse.

We're building the "Datadog" of this category: model capabilities continue to improve, but building useful applications remains genuinely hard, in startups and enterprises alike.

We are the largest open-source solution in this category: trusted by 19 of the Fortune 50, with >2k customers, >26M monthly SDK downloads, and >6M Docker pulls.

We joined ClickHouse in January 2026 because LLM observability is fundamentally a data problem and Langfuse already ran on ClickHouse. Together we can move faster on product while staying true to open source and self-hosting, and join forces on GTM and sales to accelerate revenue.

Previously backed by Y Combinator, Lightspeed, and General Catalyst.

We're a small, engineering-heavy, and experienced team in Berlin and San Francisco. We are also hiring for engineering in EU timezones and expect one week per month in our Berlin office (how we work).

Your impact
  • Make our best customers successful in production and help them expand over time.

  • Improve net revenue retention via adoption, outcomes, and proactive risk management.

  • Scale your impact to our large user base and OSS community by contributing to documentation, guides, and other public content.

  • Create a tight loop from “what customers do” (your deep understanding of top customers) → “what we should build” (feedback to the product engineering team) → “how the GTM org explains it.” (GTM enablement).

What you’ll do

1) Own strategic customer relationships (portfolio ownership)
  • Be the primary technical partner for 10–20 strategic accounts (large, highly engaged, or aligned with our roadmap).

  • Run onboarding, success planning, and regular deep dives into the customer’s AI architecture and workflows.

  • Drive adoption of key product capabilities across the lifecycle: initial setup, team workflows, scaling, and expansion.

2) Production readiness + architectural guidance
  • Lead customers through production readiness: instrumentation strategy, data modeling choices, evaluation setup, alerting/monitoring expectations, security & privacy considerations, and operational playbooks.

  • Provide pragmatic architecture guidance for real LLM systems (agents, tool use, RAG, evals, prompt iteration, dataset curation, feedback loops).

  • Build small prototypes, reference implementations, and demos when it unblocks a customer. Turn them into reusable templates that can be published.

3) Escalation leadership
  • Own the technical leadership during high-severity customer moments: triage, root-cause coordination, and crisp communication.

  • Be the customer’s point of contact, partner closely with Engineering, and be proactive in how you resolve issues.

  • Establish escalation paths, runbooks, and prevention mechanisms for repeat issues.

4) Turn customer signal into product + docs + enablement
  • Aggregate patterns across your portfolio and translate them into actionable product feedback (clear problem statements, impact, and recommended solutions).

  • Create customer-facing assets (docs, guides, best practices, demos) that start as one customer’s question and become durable collateral.

  • Enable the broader ClickHouse GTM org: training, playbooks, crisp messaging, and “how to win” narratives for AI engineering teams.

What we’re looking for

Must-haves
  • Senior experience in a customer-facing technical role: TAM, Solutions Engineer, Solutions Architect, Forward Deployed Engineer, Customer Success Engineer, or similar where you owned outcomes.

  • Strong technical foundation: you can debug integrations and reason about distributed systems, APIs/SDKs, and cloud infrastructure.

  • Demonstrated work in applied AI / AI engineering: building, operating, or enabling LLM applications (agents, RAG, eval pipelines, prompt tooling, experimentation).

  • Excellent communication: you can lead technical meetings, drive decisions, and write docs engineers actually follow.

  • High ownership: you ship artifacts, close loops, and create repeatable systems rather than bespoke one-offs.

Nice-to-haves
  • Experience with devtools / OSS ecosystems and developer-centric GTM.

  • Familiarity with observability concepts (tracing/metrics/logs), data pipelines, and evaluation frameworks.

  • Track record of technical writing or enablement (workshops, reference architectures, public docs).

Process

We can run the full process to your offer letter in less than 7 days (hiring process).

Tech Stack

We run a TypeScript monorepo: Next.js on the frontend, Express workers for background jobs, PostgreSQL for transactional data, ClickHouse for tracing at scale, S3 for file storage, and Redis for queues and caching. You should be familiar with a good chunk of this, but we trust you'll pick up the rest quickly (Stack, Architecture).

How we ship

Link to handbook

  • We trust you to take ownership (ownership overview) for your area. You identify what to build, propose solutions (RFCs), and ship them. Everyone here thinks about the user experience and the technical implementation at the same time. Everyone manages their own Linear.

  • You're never alone. Anyone on the team is happy to jump into a whiteboard session with you; 15 minutes of shared discussion can meaningfully improve the output.

  • We protect maker schedules in how we communicate. There are two recurring meetings a week: a Monday check-in on priorities (15 min) and a demo session on Fridays (60 min).

  • Code reviews are mentorship. New joiners get all PRs reviewed to learn the codebase, patterns, and how the systems work (onboarding guide).

  • We use AI as much as possible in our workflows to make our users happy. We encourage everyone to experiment with new tooling and AI workflows.

Why Langfuse (now part of ClickHouse)
  • This role puts you at the forefront of the AI revolution, partnering with engineering teams who are building the technology that will define the next decade(s).

  • This is an open-source devtools company. We ship daily, talk to customers constantly, and fight for great DX. Reliability and performance are central requirements.

  • Your work ships under your name. You'll appear on changelog posts for the features you build, and during launch weeks, you'll produce videos to announce what you've shipped to the community. You’ll own the full delivery end to end.

  • We're solving hard engineering problems: figuring out which features actually help users improve AI product performance, building SDKs developers love, visualizing data-rich traces, rendering massive LLM prompts and completions efficiently in the UI, and processing terabytes of data per day through our ingestion pipeline.

  • You'll work closely with the ClickHouse team and learn how they build a world-class infrastructure company. We're in a period of strong growth: Langfuse is growing organically and accelerating through ClickHouse's GTM. (Why we joined ClickHouse)

  • If you wonder what to build next, our users are a Slack message or a GitHub Discussions post away.

  • You’re on a continuous learning journey. The AI space develops at breakneck speed and our customers are at the forefront. We need to be ready to meet them where they are and deliver the tools they need just-in-time.

Top Skills

TypeScript, Next.js, Express, PostgreSQL, ClickHouse, Amazon S3, Redis, Docker, APIs, SDKs, LLMs, RAG, Agents, Prompt Tooling, Evaluation Frameworks

The Company
HQ: San Francisco, California
15 Employees
Year Founded: 2022

What We Do

Langfuse is the most popular open-source LLMOps platform. It helps teams collaboratively develop, monitor, evaluate, and debug AI applications.

Langfuse can be self-hosted in minutes and is battle-tested in production by thousands of users, from YC startups to large companies like Khan Academy and Twilio. Langfuse builds on a proven track record of reliability and performance.

Developers can trace any large language model or framework using our SDKs for Python and JS/TS, our open API, or our native integrations (OpenAI, LangChain, LlamaIndex, Vercel AI SDK). Beyond tracing, developers use Langfuse Prompt Management, its open APIs, and testing and evaluation pipelines to improve the quality of their applications.

Product managers can analyze, evaluate, and debug AI products by accessing detailed metrics on costs, latencies, and user feedback in the Langfuse Dashboard. They can bring humans in the loop by setting up annotation workflows for human labelers to score their application. Langfuse can also be used to monitor security risks through security frameworks and evaluation pipelines.

Langfuse enables non-technical team members to iterate on prompts and model configurations directly within the Langfuse UI, or to use the Langfuse Playground for fast prompt testing.

Langfuse is open source, and we are proud to have a fantastic community on GitHub and Discord that provides help and feedback. Do get in touch with us!
