ML Research Engineer — LLM Safety

Posted 14 Days Ago
Hiring Remotely in San Francisco, CA
Remote
Mid level
Artificial Intelligence • Software
The Role
The ML Research Engineer will focus on LLM safety, generating synthetic data, training and benchmarking models, and delivering scalable production code. They will lead research initiatives, co-author academic papers, and contribute to the development of safe and responsible LLMs for business applications.
Summary Generated by Built In

At Dynamo AI, we believe that LLMs must be developed with safety, privacy, and real-world responsibility in mind. Our ML team comes from a culture of academic research and is driven to democratize AI advancements responsibly. By operating at the intersection of ML research and industry applications, we help Fortune 500 companies adopt frontier research for their next generation of LLM products. Join us if you:

• Wish to work on the premier platform for private and personalized LLMs. We provide the fastest end-to-end solution for deploying research in the real world, with a fast-paced team of ML Ph.D.s and builders free of the bureaucracy and constraints of Big Tech and academia.

• Are excited at the idea of democratizing state-of-the-art research on safe and responsible AI.

• Are motivated to work at a 2023 CB Insights Top 100 AI Startup and see your impact on end customers within weeks, not years.

• Care about building a platform to empower fair, unbiased, and responsible development of LLMs and don’t accept the status quo of sacrificing user privacy for the sake of ML advancement.

Responsibilities

  • Own an LLM vertical focused on a specific safety domain, technique, or use case (from either a defense or a red-team attack perspective).
  • Generate high quality synthetic data, train LLMs, and conduct rigorous benchmarking.
  • Deliver robust, scalable, and reproducible production code.
  • Push the envelope by developing novel techniques and research that deliver the world’s most harmless and helpful models. Your research will directly help our customers deploy safe and responsible LLMs.
  • Co-author papers, patents, and presentations with our research team by integrating other members’ work with your vertical.

Qualifications

  • Deep domain knowledge in LLM safety techniques.
  • Extensive experience designing, training, and deploying multiple types of LLM architectures in real-world settings. Comfort leading end-to-end projects.
  • Adaptability and flexibility. In both the academic and startup worlds, a new finding from the community may necessitate an abrupt shift in focus. You must be able to learn, implement, and extend state-of-the-art research.
  • Preferred: past research or projects in either attacking or defending LLMs.

Dynamo AI is committed to maintaining compliance with all applicable local and state laws regarding job listings and salary transparency. This includes adhering to specific regulations that mandate the disclosure of salary ranges in job postings or upon request during the hiring process. We strive to ensure our practices promote fairness, equity, and transparency for all candidates.


Salary for this position may vary based on several factors, including the candidate's experience, expertise, and the geographic location of the role. Compensation is determined to ensure competitiveness and equity, reflecting the cost of living in different regions and the specific skills and qualifications of the candidate.

Top Skills

LLM
The Company
San Francisco, CA
58 Employees
On-site Workplace
Year Founded: 2021

What We Do

Dynamo AI is pioneering the first end-to-end secure and compliant generative AI infrastructure that runs in any on-premise or cloud environment.

With a holistic approach to GenAI compliance, we help accelerate enterprise adoption to deploy secure, reliable, and compliant AI applications at scale.

Our platform includes three products:
- DynamoEval evaluates GenAI models for security, hallucination, privacy, and compliance risks.
- DynamoEnhance remediates identified risks, ensuring more reliable operations.
- DynamoGuard offers real-time guardrailing, customizable in natural language and with minimal latency.

Our client base and partnerships include Fortune 1000 companies across all industries, underscoring our proven success in securing GenAI in highly regulated environments.

