Open Source LLM Engineering Platform that helps teams build useful AI applications via tracing, evaluation, and prompt management (mission, product)
We have the chance to build the "Datadog" of this category; model capabilities continue to improve but building useful applications is really hard, both in startups and enterprises
Largest open source solution in this category: >2k customers, >14M monthly SDK downloads, >6M docker pulls
We raised a $4M seed from world-class investors (Y Combinator, Lightspeed, General Catalyst) and are default alive: we make more money than we spend
We're a small, engineering-heavy, and experienced team in Berlin and San Francisco (how we work)
We are hiring an Engineer who wants to create content and documentation and engage with our global user base. You will power our developer relations and expand Langfuse's thought leadership in the LLMOps space. You will own this function and might eventually build a team around it.
Make our documentation the best out there
Write blog & social posts that define how everyone should think about LLM engineering
Write Cookbooks / Code examples that provide real value for users
Produce Videos and Memes
Build applications that help us grow in unconventional ways
Deeply engage with the OSS community
Must
You are an Engineer who wants to educate through content
Technical background (Engineering or Data Science)
You have taste in good content
Extras
You already create content
You are deep in AI engineering
You are equipped to redefine what DevRel means in the age of LLMs
You have original opinions that experienced Devs value
You contribute to OSS projects
Software Engineering work experience is a big plus
Building for very sophisticated technical users
Competitive salary & equity compensation
Strong customer & revenue growth
You are joining really early
Extreme autonomy
Performance and learning focused culture
Everyone on the team says: "best place I've ever worked at"
2 short scheduled meetings per week
We can run this process in less than 7 days.
Fill out application
We screen your application
First Call: Quick intro & chat about logistics
Second Call: Founder Deep Dive (40 min), remote
Third Call: Technical Deep Dive interview (60 min)
Super Day (half or full day in the office; remote in some cases)
Meet the other founders (short calls)
Decision & Offer
All repos: github.com/langfuse
Company handbook: langfuse.com/handbook
Team: langfuse.com/handbook/chapters/team
How we hire: langfuse.com/handbook/how-we-hire
Blog: langfuse.com/blog
Lee Robinson - built DevRel at Vercel, now at Cursor
Swyx - runs the Latent Space podcast and writes about DevRel
Lars Grammel - building and writing about the AI SDK
Max Tkacz - built the n8n community & content machine
Ishan of Exa - shipped twitterwrapped.exa
Aymeric and previously David - creating content & courses at Hugging Face
What We Do
Langfuse is the most popular open source LLMOps platform. It helps teams collaboratively develop, monitor, evaluate, and debug AI applications.
Langfuse can be self-hosted in minutes and is battle-tested: it is used in production by thousands of users, from YC startups to large companies like Khan Academy or Twilio. Langfuse builds on a proven track record of reliability and performance.
Developers can trace any large language model or framework using our SDKs for Python and JS/TS, our open API, or our native integrations (OpenAI, LangChain, LlamaIndex, Vercel AI SDK). Beyond tracing, developers use Langfuse Prompt Management, its open APIs, and testing and evaluation pipelines to improve the quality of their applications.
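For illustration, tracing a model call with the Python SDK can look roughly like the minimal sketch below. It uses the documented OpenAI drop-in integration; exact import paths vary by SDK version, and the model and function names are placeholders.

```python
# Minimal tracing sketch (assumes `pip install langfuse openai` and the env vars
# LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, LANGFUSE_HOST, and OPENAI_API_KEY are set).
from langfuse.decorators import observe   # import path as in SDK v2; newer versions may differ
from langfuse.openai import openai        # drop-in wrapper that logs OpenAI calls to Langfuse

@observe()  # wraps the function in a Langfuse trace
def answer(question: str) -> str:
    completion = openai.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": question}],
    )
    return completion.choices[0].message.content

if __name__ == "__main__":
    print(answer("What does Langfuse do?"))
```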
Product managers can analyze, evaluate, and debug AI products by accessing detailed metrics on costs, latencies, and user feedback in the Langfuse Dashboard. They can bring humans into the loop by setting up annotation workflows for human labelers to score their application. Langfuse can also be used to monitor security risks through security frameworks and evaluation pipelines.
Langfuse enables non-technical team members to iterate on prompts and model configurations directly within the Langfuse UI, or to use the Langfuse Playground for fast prompt testing.
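On the code side, an application typically fetches the managed prompt at runtime. A minimal sketch, assuming Langfuse credentials are set in the environment; the prompt name "movie-critic" and the "movie" variable are illustrative and would need to exist in your Langfuse project:

```python
# Fetch and fill a prompt that was created and versioned in the Langfuse UI.
from langfuse import Langfuse

langfuse = Langfuse()  # reads LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY / LANGFUSE_HOST from env

# "movie-critic" is a hypothetical prompt with a {{movie}} placeholder in its template.
prompt = langfuse.get_prompt("movie-critic")
compiled = prompt.compile(movie="Dune 2")  # fills template variables into the prompt text
print(compiled)
```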
Langfuse is open source and we are proud to have a fantastic community on GitHub and Discord that provides help and feedback. Do get in touch with us!