Product Engineer (Integrations)

Posted 14 Days Ago
5 Locations
Hybrid
100K-150K Annually
Mid level
Artificial Intelligence • Software
Open Source LLM Engineering Platform
The Role
As a Product Engineer, you'll build and maintain integrations with various AI frameworks, design new patterns, contribute to core SDKs, and engage with the developer community to enhance the LLM application experience.
Summary Generated by Built In
About Langfuse

Langfuse is an open source LLM engineering platform that helps teams build useful AI applications via tracing, evaluation, and prompt management (mission, product). We are now part of ClickHouse.

We're building the "Datadog" of this category: model capabilities keep improving, but building useful applications remains hard, in startups and enterprises alike.

We are the largest open source solution in this category: trusted by 19 of the Fortune 50, with >2k customers, >26M monthly SDK downloads, and >6M Docker pulls.

We joined ClickHouse in January 2026 because LLM observability is fundamentally a data problem and Langfuse already ran on ClickHouse. Together we can move faster on product while staying true to open source and self-hosting, and join forces on GTM and sales to accelerate revenue.

Previously backed by Y Combinator, Lightspeed, and General Catalyst.

We're a small, engineering-heavy, and experienced team in Berlin and San Francisco. We are also hiring for engineering in EU timezones and expect one week per month in our Berlin office (how we work).

Why Integrations Engineering at Langfuse

Your work puts Langfuse into developers’ hands.

Our SDKs are downloaded 26M+ times per month, and for many developers the first thing they touch is an integration: a few lines of code that connect their favorite framework to Langfuse. When that experience is seamless, they're wowed. When it's not, we may have lost them. You'll own that critical first impression across 40+ framework integrations.

You’ll live at the frontier of LLM application development.

The AI framework ecosystem moves fast — new agent frameworks, orchestration libraries, and model providers emerge every week. You’ll be among the first to instrument them, giving you unmatched exposure to how cutting-edge AI applications are built. The developers you serve are some of the most ambitious software engineers in the world, and working closely with them will make you an expert on LLM engineering yourself.

Everything you build is open source and immediately visible.

All Langfuse integrations are MIT-licensed. When you ship a new integration or improve an existing one, thousands of developers benefit the same day — and they’ll tell you about it in GitHub issues, on Twitter, and in our community channels.

What You’ll Do
  • Build and maintain framework integrations. Langfuse integrates with 40+ frameworks and model providers: OpenAI SDK, Vercel AI SDK, LangChain, LlamaIndex, Pydantic AI, OpenAI Agents, CrewAI, Amazon Bedrock AgentCore, LiveKit, and many more. You’ll own these integrations end-to-end — from initial implementation to ongoing maintenance as frameworks evolve. When a new framework gains traction, you’ll be among the first to instrument it.

  • Design new integration patterns for emerging frameworks. When a new framework or agent orchestrator appears, you'll evaluate it, design the right instrumentation approach (callback handler, decorator, OTEL auto-instrumentation, or a combination), build the integration, write the docs, and ship it. You'll develop strong opinions on what makes a great integration experience.

  • Contribute to the core SDKs. While integrations are your primary focus, your work will surface needs in our Python and TypeScript SDKs. You’ll contribute improvements to the core SDK when your integration work demands it — whether that’s a new hook point, better context propagation, or performance optimizations.

  • Write documentation and integration guides. At Langfuse, docs are part of our core product. When you ship a new integration, the guide ships with it. You’ll own the integration docs, quickstart tutorials, cookbooks, and migration paths that help developers get started in minutes.

  • Be a voice in the developer community. You’ll engage with framework communities, respond to integration-related GitHub issues, write blog posts about new integrations, and be present in our Slack/Discord channels. You’ll build relationships with framework maintainers and represent Langfuse in the broader AI developer ecosystem.
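Of the instrumentation approaches mentioned in the list above, the decorator pattern is the simplest to illustrate. Here is a minimal, self-contained sketch; the `traced` decorator and `SPANS` store are invented for illustration and are not the Langfuse SDK API:

```python
import functools
import time

# Hypothetical in-memory span store standing in for a tracing backend.
SPANS = []

def traced(name=None):
    """Decorator-style instrumentation: wrap a function so every call is
    recorded as a span with its name, duration, and outcome."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            span = {"name": name or fn.__name__, "status": "ok"}
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            except Exception:
                span["status"] = "error"
                raise
            finally:
                span["duration_s"] = time.perf_counter() - start
                SPANS.append(span)
        return wrapper
    return decorator

@traced(name="generate-answer")
def generate_answer(prompt):
    # Stand-in for an LLM or framework call.
    return f"echo: {prompt}"

generate_answer("hello")
```

A callback-handler integration inverts this relationship: the framework invokes your handler at lifecycle events instead of you wrapping its functions, while OTEL auto-instrumentation patches libraries at import time. Which pattern fits depends on the hooks a given framework exposes.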
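The context-propagation need mentioned in the list above usually comes down to carrying the active trace ID across nested calls without threading it through every function signature. A minimal sketch using Python's `contextvars`; all names here are hypothetical, not the Langfuse SDK:

```python
import contextvars
import uuid
from contextlib import contextmanager

# Hypothetical context variable holding the active trace ID.
_current_trace = contextvars.ContextVar("current_trace", default=None)

@contextmanager
def trace(name):
    """Start a trace, or join the active one, so nested spans share
    the same trace ID without it being passed explicitly."""
    token = None
    trace_id = _current_trace.get()
    if trace_id is None:
        # No active trace: start one and remember the token for cleanup.
        trace_id = uuid.uuid4().hex
        token = _current_trace.set(trace_id)
    try:
        yield trace_id
    finally:
        if token is not None:
            _current_trace.reset(token)

def current_trace_id():
    return _current_trace.get()

with trace("outer") as outer_id:
    with trace("inner") as inner_id:
        assert inner_id == outer_id  # nested call joins the same trace
```

Using `contextvars` rather than a module-level global keeps the trace ID correct across threads and async tasks, which matters when the instrumented code runs inside someone else's production system.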

What We’re Looking For
  • Passionate about the LLM ecosystem. You've built real applications with frameworks like LangChain, Pydantic AI, Vercel AI SDK, LlamaIndex, or similar, or you're willing to go deep and get your hands dirty with every major framework in the space. You're excited about this ecosystem, not just familiar with it.

  • Strong in Python and/or TypeScript. You write clean, reliable code. You don’t need to be a systems-level performance expert, but you care about code quality and understand that integrations run inside other people’s production systems.

  • Product-minded engineer. You think about the developer who's going to use your integration at 11pm while trying to ship a feature. You obsess over the getting-started experience: how many lines of code does it take? Is the error message helpful? Do the docs examples actually work?

  • Self-directed and motivated. You know how to develop conviction about what to build and how to ship it. You don’t wait for detailed specs — you investigate the framework, talk to users, and propose the right approach.

  • Excited about open source and developer community. You genuinely enjoy talking to developers about their integration challenges, writing clear documentation, and contributing to open source projects.

  • Thrives in a small, accountable team. Your output is visible and matters. You’re comfortable owning outcomes, not just tasks.

CS or quantitative degree preferred, but not required. We care far more about what you’ve built and your hunger to learn.

Bonus Points
  • Experience with OpenTelemetry internals or observability instrumentation

  • Contributions to popular open source projects, SDKs, or developer tools

  • Experience building developer tooling, CLIs, or client libraries

  • Former founder or early startup experience

  • Active presence in AI/ML developer communities (blog posts, talks, open source)

No candidate checks all boxes. If you feel you are a good fit for this role, please go ahead and apply.

Projects You Could Own
  • Build and ship a Langfuse integration for an emerging agent framework (e.g., OpenClaw, or new OTEL-based instrumentation)

  • Design the integration pattern for a new category of AI tools (e.g., voice agents via LiveKit/Pipecat)

  • Create comprehensive quickstart cookbooks that get developers from zero to traced in under 5 minutes

  • Work with the OpenTelemetry community to improve GenAI semantic conventions

  • Maintain and upgrade our most popular integrations (OpenAI, LangChain, Vercel AI SDK) as those frameworks ship breaking changes

Process

We can run the full process to your offer letter in less than 7 days (hiring process).

Tech Stack

We run a TypeScript monorepo: Next.js on the frontend, Express workers for background jobs, PostgreSQL for transactional data, ClickHouse for tracing at scale, S3 for file storage, and Redis for queues and caching. You should be familiar with a good chunk of this, but we trust you'll pick up the rest quickly (Stack, Architecture).

How we ship

Link to handbook

  • We trust you to take ownership (ownership overview) for your area. You identify what to build, propose solutions (RFCs), and ship them. Everyone here thinks about the user experience and the technical implementation at the same time. Everyone manages their own Linear.

  • You're never alone. Anyone on the team is happy to jump into a whiteboard session with you; 15 minutes of shared discussion can meaningfully improve the output.

  • We protect a maker schedule with lightweight communication. There are two recurring meetings a week: a Monday check-in on priorities (15 min) and a Friday demo session (60 min).

  • Code reviews are mentorship. New joiners get all PRs reviewed to learn the codebase, patterns, and how the systems work (onboarding guide).

  • We use AI extensively in our workflows to serve our users, and we encourage everyone to experiment with new tooling and AI workflows.

Why Langfuse (now part of ClickHouse)
  • This role puts you at the forefront of the AI revolution, partnering with engineering teams who are building the technology that will define the next decade(s).

  • This is an open-source devtools company. We ship daily, talk to customers constantly, and fight for great DX. Reliability and performance are central requirements.

  • Your work ships under your name. You'll appear on changelog posts for the features you build, and during launch weeks, you'll produce videos to announce what you've shipped to the community. You’ll own the full delivery end to end.

  • We're solving hard engineering problems: figuring out which features actually help users improve AI product performance, building SDKs developers love, visualizing data-rich traces, rendering massive LLM prompts and completions efficiently in the UI, and processing terabytes of data per day through our ingestion pipeline.

  • You'll work closely with the ClickHouse team and learn how they build a world-class infrastructure company. We're in a period of strong growth: Langfuse is growing organically and accelerating through ClickHouse's GTM. (Why we joined ClickHouse)

  • If you wonder what to build next, our users are a Slack message or a GitHub Discussions post away.

  • You’re on a continuous learning journey. The AI space develops at breakneck speed and our customers are at the forefront. We need to be ready to meet them where they are and deliver the tools they need just-in-time.


The Company
HQ: San Francisco, California
15 Employees
Year Founded: 2022

What We Do

Langfuse is the most popular open source LLMOps platform. It helps teams collaboratively develop, monitor, evaluate, and debug AI applications. Langfuse can be self-hosted in minutes and is battle-tested, used in production by thousands of users from YC startups to large companies like Khan Academy and Twilio. Langfuse builds on a proven track record of reliability and performance.

Developers can trace any large language model or framework using our SDKs for Python and JS/TS, our open API, or our native integrations (OpenAI, LangChain, LlamaIndex, Vercel AI SDK). Beyond tracing, developers use Langfuse Prompt Management, its open APIs, and testing and evaluation pipelines to improve the quality of their applications.

Product managers can analyze, evaluate, and debug AI products through detailed metrics on costs, latencies, and user feedback in the Langfuse Dashboard. They can bring humans into the loop by setting up annotation workflows for human labelers to score their application. Langfuse can also be used to monitor security risks through security frameworks and evaluation pipelines.

Langfuse enables non-technical team members to iterate on prompts and model configurations directly in the Langfuse UI, or to use the Langfuse Playground for fast prompt testing. Langfuse is open source, and we are proud to have a fantastic community on GitHub and Discord that provides help and feedback. Do get in touch with us!
