Join Voyfai, a fast-growing Series A startup shaping the future of logistics and technology across Europe.
Headquartered in Berlin, we operate at the center of a broader group of companies active across several European markets. This is a unique opportunity to take ownership from day one, drive meaningful impact and grow within a dynamic, fast-paced environment. Whether you're shaping processes, building strategies or leading initiatives, your contributions will directly influence the success of Voyfai and the companies we support. If you're passionate about building from scratch and excited to be part of a company on the rise, we'd love to hear from you.
Your Role
As a Senior AI Product Engineer at Voyfai, you will own LLM pipelines that process real logistics documents and data in production, in multiple languages and formats. You will work closely with Product and Engineering to turn complex, unstructured problems into robust, production-ready AI solutions.
This role is hands-on and impact-driven. You will take end-to-end ownership from use case discovery and prompt design to deployment, monitoring and continuous improvement in production, with a strong focus on real-world reliability and business impact.
Key Responsibilities
Design and own LLM pipelines that process real logistics documents and data in production: prompt design, structured output validation, eval harnesses, retries, fallbacks, monitoring, alerting and cost tracking
Decide when to use an LLM and when not to; much of the good engineering here is knowing when a regex, a lookup table or deterministic code is the right answer
Build the evaluation infrastructure: test datasets, regression checks, quality metrics tied to business outcomes
Pick models pragmatically across providers (Claude, GPT, Gemini, open-weight models) based on cost, latency and quality tradeoffs for each use case
Stay ahead of the curve on prompting techniques and apply them pragmatically to solve hard extraction problems
Collaborate with Product to scope ambiguous problems into something an AI pipeline can actually solve reliably
Raise the bar on code quality, testing and documentation across the team
What You Bring
Several years of experience as a software engineer shipping production systems, with at least 3 years focused on LLM-powered features in production
Strong backend fundamentals: PostgreSQL, async workflows (we use Temporal), queues, observability, CI/CD
Comfort working in both Python (AI pipelines) and TypeScript/Node.js (our product stack)
Deep practical understanding of LLM behavior: prompting, structured outputs, context window management, common failure modes, handling non-determinism and writing evals that catch regressions before users do
Pragmatic judgment about architecture and tradeoffs. You have seen complex systems get simpler over time
What We Offer
Competitive salary and equity options
A dynamic, fast-paced work environment with a mission-driven team
Flexible working arrangements (Hybrid)
30 days of PTO
Regular team events and offsites
Monthly mobility budget via Navit
Skills Required
Several years of experience as an AI Engineer, Machine Learning Engineer, or similar role, with hands-on focus on LLMs and applied GenAI in production
Strong understanding of LLM behavior, prompting techniques, evaluation strategies, and common failure modes
Solid Python skills and experience with GenAI frameworks or tooling such as PyTorch, LangChain, LlamaIndex, or similar
Experience building production-grade GenAI systems such as RAG pipelines, agents or tool-using models
Experience deploying and operating AI systems in cloud environments, including cost and performance optimization
Familiarity with LLMOps or MLOps practices such as observability, testing and lifecycle management
Pragmatic, impact-driven mindset with the ability to communicate complex AI concepts clearly
What We Do
Putting the power of scale into every freight forwarder’s hands.

