Langfuse is an open-source LLM engineering platform that helps teams build useful AI applications via tracing, evaluation, and prompt management (mission, product). We are now part of ClickHouse.
We're building the "Datadog" of this category: model capabilities keep improving, but building useful applications remains genuinely hard, in startups and enterprises alike.
Largest open source solution in this category: trusted by 19 of the Fortune 50, >2k customers, >26M monthly SDK downloads, >6M Docker pulls.
We joined ClickHouse in January 2026 because LLM observability is fundamentally a data problem and Langfuse already ran on ClickHouse. Together we can move faster on product while staying true to open source and self-hosting, and join forces on GTM and sales to accelerate revenue.
Previously backed by Y Combinator, Lightspeed, and General Catalyst.
We're a small, engineering-heavy, and experienced team in Berlin and San Francisco. We are also hiring for engineering in EU timezones and expect one week per month in our Berlin office (how we work).
Why Growth Engineering at Langfuse
Your work will have an outsized impact. Langfuse is growing on an exponential curve: we keep adding new users, and existing users increase their token usage and build more LLM-based applications and features.
We sit at the center and enable our users to build LLM applications faster, more reliably, and more safely.
It’s your chance to jump on the curve and increase its slope directly through any wild ideas you have. Be at the forefront of agentic development and show the world how to use Langfuse to develop even faster.
Maybe you believe the future users of Langfuse will be agents and most growth stems from them? Amazing - run an experiment to validate this hypothesis.
You have real analytics data to work with, and our users are among the most agile, up-to-date developers building AI applications: the perfect environment to try big bets and shape them into successes, all while learning how the best agentic applications are built.
Your impact
Your work directly impacts our most important growth metrics:
Top of Funnel: More customers want to try Langfuse
Activation: Customers see faster time to value
Adoption: More customers adopt all features across our product suite
Customers require less handholding because the product explains itself
Expansion: Customers expand from one product area to our full offering
You build and maintain our data stack (PostHog, BigQuery, dbt, Metabase) to generate insights into user behavior
Map our core funnel for each feature and make it easy to monitor (we use PostHog for this)
You are the advocate within the team for how customers actually adopt/use our product
Ship constant improvements that reduce onboarding friction for users
Experiment with changes to signup flows
Collaborate with feature-focused Product Engineers and Designers
Build loops that help customers expand into all of our product areas
Experiment with loops that make people tweet about Langfuse (like Langfuse Wrapped, but better)
Experiment with features that turn our customers' accounts into user-generated content (e.g., let customers share their Langfuse prompts in a public prompt library)
Jump on hot topics to integrate into Langfuse (e.g., ship an OpenClaw integration) and write about it
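As a toy illustration of the funnel work above (the event names, schema, and step order are hypothetical, not our actual PostHog setup), an ordered activation funnel can be computed like this:

```typescript
// Hypothetical event shape; in reality these would come from PostHog/BigQuery.
type AnalyticsEvent = { userId: string; name: string };

// Assumed activation funnel: sign up -> ingest a first trace -> run a first eval.
const FUNNEL_STEPS = ["signup", "first_trace_ingested", "first_eval_run"];

// For each step, count users who completed it AND every previous step.
function funnelCounts(events: AnalyticsEvent[]): number[] {
  let cohort = new Set(
    events.filter((e) => e.name === FUNNEL_STEPS[0]).map((e) => e.userId),
  );
  const counts = [cohort.size];
  for (const step of FUNNEL_STEPS.slice(1)) {
    cohort = new Set(
      events
        .filter((e) => e.name === step && cohort.has(e.userId))
        .map((e) => e.userId),
    );
    counts.push(cohort.size);
  }
  return counts;
}
```

In practice this logic lives in PostHog or SQL on BigQuery; the sketch only shows the ordered-cohort idea behind a funnel chart.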
Drive to learn new things: you experiment across many different topics and come up with your own ideas
A knack for business: you envision how Langfuse succeeds through product-led growth
Experience in full-stack development: you can confidently ship changes that touch both frontend and backend in our product (frontend-heavy, with collaboration on backend)
SQL skills (for analytics)
You are excited to talk to and think like our users
Experience with devtools / OSS ecosystems and developer-centric go-to-market
A taste for UX / design is a big plus
Familiarity with AI/LLM Engineering
Former founders are very welcome in this role!
We are big fans of PostHog; check out how engineers on their Growth team work
See how we work in our Handbook, especially “Who are our customers?”
We can run the full process to your offer letter in less than 7 days (hiring process).
Tech Stack
We run a TypeScript monorepo: Next.js on the frontend, Express workers for background jobs, PostgreSQL for transactional data, ClickHouse for tracing at scale, S3 for file storage, and Redis for queues and caching. You should be familiar with a good chunk of this, but we trust you'll pick up the rest quickly (Stack, Architecture).
How we ship (handbook)
We trust you to take ownership (ownership overview) for your area. You identify what to build, propose solutions (RFCs), and ship them. Everyone here thinks about the user experience and the technical implementation at the same time. Everyone manages their own Linear.
You're never alone: anyone on the team is happy to jump into a whiteboard session with you, and 15 minutes of shared discussion can greatly improve the output.
We follow maker-schedule principles in how we meet and communicate. There are two recurring meetings a week: a Monday priorities check-in (15 min) and a Friday demo session (60 min).
Code reviews are mentorship. New joiners get all PRs reviewed to learn the codebase, patterns, and how the systems work (onboarding guide).
We use AI as much as possible in our workflows to make our users happy. We encourage everyone to experiment with new tooling and AI workflows.
This role puts you at the forefront of the AI revolution, partnering with engineering teams who are building the technology that will define the next decade(s).
This is an open-source devtools company. We ship daily, talk to customers constantly, and fight for great DX. Reliability and performance are central requirements.
Your work ships under your name. You'll appear on changelog posts for the features you build, and during launch weeks, you'll produce videos to announce what you've shipped to the community. You’ll own the full delivery end to end.
We're solving hard engineering problems: figuring out which features actually help users improve AI product performance, building SDKs developers love, visualizing data-rich traces, rendering massive LLM prompts and completions efficiently in the UI, and processing terabytes of data per day through our ingestion pipeline.
You'll work closely with the ClickHouse team and learn how they build a world-class infrastructure company. We're in a period of strong growth: Langfuse is growing organically and accelerating through ClickHouse's GTM. (Why we joined ClickHouse)
If you wonder what to build next, our users are a Slack message or a GitHub Discussions post away.
You’re on a continuous learning journey. The AI space develops at breakneck speed and our customers are at the forefront. We need to be ready to meet them where they are and deliver the tools they need just-in-time.
What We Do
Langfuse is the most popular open-source LLMOps platform. It helps teams collaboratively develop, monitor, evaluate, and debug AI applications. Langfuse can be self-hosted in minutes and is battle-tested, used in production by thousands of users from YC startups to large companies like Khan Academy and Twilio. Langfuse builds on a proven track record of reliability and performance. Developers can trace any large language model or framework using our SDKs for Python and JS/TS, our open API, or our native integrations (OpenAI, LangChain, LlamaIndex, Vercel AI SDK). Beyond tracing, developers use Langfuse Prompt Management, its open APIs, and testing and evaluation pipelines to improve the quality of their applications. Product managers can analyze, evaluate, and debug AI products by accessing detailed metrics on costs, latencies, and user feedback in the Langfuse Dashboard. They can bring humans into the loop by setting up annotation workflows for human labelers to score their application. Langfuse can also be used to monitor security risks through security frameworks and evaluation pipelines. Langfuse enables non-technical team members to iterate on prompts and model configurations directly within the Langfuse UI, or to use the Langfuse Playground for fast prompt testing. Langfuse is open source, and we are proud to have a fantastic community on GitHub and Discord that provides help and feedback. Do get in touch with us!









