Responsibilities
- Work at the intersection of ML and engineering to deliver robust, scalable implementations of LLM safety techniques and evaluations.
- Rigorously test product features and ensure seamless integration with customers’ AI workflows.
- Push the envelope by implementing novel techniques that deliver the world’s most harmless and helpful models. Your work will directly enable our customers to deploy safe and responsible LLMs.
- Work closely with our policy, product, and engineering teams to ship features to customers.
Qualifications
- Deep domain knowledge in LLM safety and/or evaluation techniques.
- Extensive experience designing, implementing, and maintaining production-ready ML code for LLMs, including integrating LLM APIs in production environments.
- Comfort leading end-to-end projects and collaborating with researchers and engineers.
- Adaptability and flexibility. In both academia and the startup world, a new finding in the community may necessitate an abrupt shift in focus. You must be able to learn, implement, and extend state-of-the-art research on short timelines.
- Preferred: past engineering experience paired with strong LLM knowledge.
What We Do
Dynamo AI is pioneering the first end-to-end secure and compliant generative AI infrastructure that runs in any on-premises or cloud environment.
With a holistic approach to GenAI compliance, we help enterprises accelerate adoption and deploy secure, reliable, and compliant AI applications at scale.
Our platform includes three products:
- DynamoEval evaluates GenAI models for security, hallucination, privacy, and compliance risks.
- DynamoEnhance remediates identified risks, ensuring more reliable operations.
- DynamoGuard offers real-time guardrailing, customizable in natural language, with minimal latency.
Our client base and partnerships include Fortune 1000 companies across all industries, underscoring our proven success in securing GenAI in highly regulated environments.