Machine Learning Operations Lead

Posted 3 Days Ago
San Francisco, CA
In-Office
$160K-$280K Annually
Senior level
Artificial Intelligence • Information Technology
The Role
Lead MLOps engineering to optimize ML infrastructure, manage teams, ensure service reliability, and enhance operational processes for AI model deployment.
About the Role

Together AI is building the AI Inference & Model Shaping Platform that brings the most advanced generative AI models to the world. Our platform powers multi-tenant serverless workloads and dedicated endpoints, enabling developers, enterprises, and researchers to harness the latest LLMs, multimodal models, image, audio, video, and reasoning models at scale.

We are looking for an exceptional MLOps Engineering Lead to partner closely with our cross-functional engineering, infrastructure, research, and sales teams to ensure the excellence of our ML API offerings. Your primary focus will be delivering world-class inference and fine-tuning through our public APIs and customer deployments by building automation and operations processes.

This role is ideal for a highly motivated and technically adept individual who excels in fast-paced, dynamic environments. You will be responsible for designing and scaling our ML processes and tooling at production scale – optimizing operations to ensure availability and reliability for our services across differing tenants, user loads, and multi-cluster deployments. You will serve as a passionate advocate for internal and external customers, providing feedback to the wider engineering and infrastructure teams to improve our systems and core business metrics. If you thrive in a collaborative, problem-solving environment and are driven to deliver operational excellence, we encourage you to apply.

Responsibilities
  • Own availability and performance SLAs for production inference and fine-tuning services across serverless and dedicated deployments
  • Own & improve testing, deployment, configuration management, and monitoring practices for multi-cluster ML infrastructure – partnering closely with Infra SREs
  • Build self-serve tooling and automation to reduce operational toil and enable internal users (MLOps, customer experience) and self-serve offerings
  • Define and enforce configuration best practices for inference engines (vLLM, tvLLM, Pulsar) to prevent runtime issues
  • Lead incident response, conduct postmortems, and drive reliability improvements
  • Hire, mentor, and grow an MLOps engineering team
  • Partner with infrastructure and ML engineering teams to improve system reliability and cost efficiency
Requirements
  • 5+ years operating production ML inference or training systems at scale
  • 2+ years leading engineering teams, with experience building teams from scratch
  • Deep expertise with Kubernetes, multi-cluster orchestration, and ML serving frameworks
  • Strong track record owning production SLAs (e.g., availability, time to first token (TTFT), tokens per second (TPS))
  • Experience with LLM inference serving systems (vLLM, TRT-LLM, or similar)
  • Ability to influence cross-functional teams and make deployment/architecture decisions
Nice to Have
  • Experience building internal developer platforms or self-serve tooling
  • Background in cost optimization for GPU infrastructure
  • Contributions to open-source ML infrastructure projects
Compensation

We offer competitive compensation, startup equity, health insurance, and other benefits. The US base salary range for this full-time position is $160,000-$280,000, plus equity and benefits. Our salary ranges are determined by location, level, and role; individual compensation will be determined by experience, skills, and job-related knowledge.

Equal Opportunity

Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more.

Please see our privacy policy at https://www.together.ai/privacy  

Top Skills

Kubernetes
ML serving frameworks
Pulsar
tvLLM
vLLM
The Company
San Francisco, California
84 Employees
Year Founded: 2022

What We Do

Together AI is a research-driven artificial intelligence company. We contribute leading open-source research, models, and datasets to advance the frontier of AI. Our decentralized cloud services empower developers and researchers at organizations of all sizes to train, fine-tune, and deploy generative AI models. We believe open and transparent AI systems will drive innovation and create the best outcomes for society.


