Your Contributions
- Research, design, and prototype novel applications of Large Language Models (LLMs) and Vision Language Models (VLMs) to support autonomous industrial inspection workflows (e.g., anomaly explanation, report generation, or multi-modal data interpretation).
- Collaborate with cross-functional teams to integrate LLM-based image analysis and interaction capabilities into our ANYmal robots for enhanced inspection intelligence.
- Support the development and evaluation of LLM-powered tools through simulation, testing, and user validation.
Your Profile
- Pursuing a Bachelor’s or Master’s degree in computer science, machine learning, or a related field.
- Programming experience (preferably in Python and/or C++) and an interest in real-time systems.
- Familiarity with modern LLM frameworks (e.g., OpenAI, Hugging Face Transformers), Vision Transformers, and prompt engineering principles.
- Basic understanding of robotics systems and multi-modal sensor data is a plus.
- Ability to work independently in an applied research-driven team environment, with a proactive and analytical mindset.
- Strong communication skills in English, both written and verbal.
Bonus Points
- Experience with cloud-based solutions.
- Experience in robotics or hardware products.
- Experience with RESTful APIs.
What We Do
ANYbotics is a Swiss robotics company pioneering the development of autonomous mobile robots. Our walking robots move beyond conventional, purpose-built environments and solve customer problems in challenging infrastructure so far accessible only to humans. Founded in 2016 as a spin-off from the world-leading robotics labs at ETH Zurich, we invite you to join our highly talented and motivated team of more than 100 people and work on cutting-edge robot technology. Our customers include leading international energy, industrial processing, and construction companies. In 2020, ANYbotics raised CHF 20 million in a Series A financing round and won several prizes, including the Swiss Economic Forum 2020 award.