Open-source LLM engineering platform that helps teams build useful AI applications via tracing, evaluation, and prompt management
We have the chance to build the "Datadog" of this category: model capabilities continue to improve, yet building useful applications remains hard, in both startups and enterprises
Largest open-source solution in this category: >2k customers, >14M monthly SDK downloads, >6M Docker pulls
We raised money from world-class investors (Y Combinator, Lightspeed, General Catalyst) and are default alive: we make more money than we spend
We're a small, engineering-heavy, and experienced team in Berlin and San Francisco
We have strong traction, are default alive, and are growing fast. We’re now looking for a Forward Deployed Engineer to join our team in person in San Francisco.
As the first Forward Deployed Engineer, you’ll shape how Langfuse partners with users and customers to ensure successful onboarding, adoption, and long-term growth. You’ll work closely with founders, engineering, and GTM teams to help technical users get the most out of Langfuse.
Solution Design and Success
Act as a trusted advisor to technical stakeholders at cutting-edge AI companies using and scaling Langfuse.
Lead technical onboarding with Langfuse and advise on architecture and implementation, such as tracing setup, evals, and the broader AI engineering lifecycle.
Provide hands-on guidance and best practices for LLM observability and evaluation.
Help customers debug issues as the main point of contact for technical and operational questions.
Track product adoption and proactively reach out to help customers get more value from Langfuse.
Translate customer needs into actionable product feedback.
Build and maintain feedback loops between users, sales, and product teams.
Help shape documentation and learning resources to reduce friction.
Build and maintain workflows for tracking adoption, success metrics, and customer health.
Document playbooks, best practices, and internal tooling for the success function.
Partner with sales and product to align priorities across the user lifecycle.
Experience as a solutions engineer, solutions architect, or forward deployed engineer at a SaaS/devtool company.
Strong technical aptitude: comfortable debugging APIs, code, and logs, with a solid understanding of developer workflows.
Excellent communication and problem-solving skills.
Comfortable working cross-functionally in a fast-moving startup.
Willingness to work in-person in SF ~4 days/week.
You’ve used Langfuse (you can sign up for a free account).
Familiarity with open-source or developer-focused products.
Experience supporting or building with LLM/AI tools.
Entrepreneurial mindset - you enjoy building new processes and solving ambiguous problems.
Work with sophisticated technical users and top engineering teams.
Competitive salary & equity.
Rapid customer and revenue growth.
Join early and shape the company’s culture and processes.
Extreme autonomy and focus on outcomes.
Team culture described by everyone as “the best place I’ve ever worked.”
Only two short scheduled meetings per week.
You’ll join our early core team, working closely with Marc (CEO, product/GTM) and Akio (Founding GTM). We prioritize in-office collaboration, learning, and ownership.
Learn more about us at langfuse.com/handbook.
What We Do
Langfuse is the most popular open-source LLMOps platform. It helps teams collaboratively develop, monitor, evaluate, and debug AI applications.
Langfuse can be self-hosted in minutes and is battle-tested: it is used in production by thousands of users, from YC startups to large companies like Khan Academy and Twilio, and builds on a proven track record of reliability and performance.
Developers can trace any large language model or framework using our SDKs for Python and JS/TS, our open API, or our native integrations (OpenAI, LangChain, LlamaIndex, Vercel AI SDK). Beyond tracing, developers use Langfuse Prompt Management, its open APIs, and testing and evaluation pipelines to improve the quality of their applications.
Product managers can analyze, evaluate, and debug AI products by accessing detailed metrics on costs, latencies, and user feedback in the Langfuse dashboard. They can bring humans into the loop by setting up annotation workflows for human labelers to score their application. Langfuse can also be used to monitor security risks through security frameworks and evaluation pipelines.
Langfuse enables non-technical team members to iterate on prompts and model configurations directly within the Langfuse UI or use the Langfuse Playground for fast prompt testing.
Langfuse is open source, and we are proud to have a fantastic community on GitHub and Discord that provides help and feedback. Do get in touch with us!