Senior Backend Engineer, Inference Platform

San Francisco, CA
In-Office
160K-250K Annually
Senior level
Artificial Intelligence • Information Technology
The Role
The Senior Backend Engineer will build and optimize systems for the Inference Platform, focusing on low latency, scalability, and performance for generative AI models. Responsibilities include enhancing request routing, developing auto-scaling systems, and collaborating with ML researchers to bring new model architectures into production.
About the Team

Together AI is building the Inference Platform that brings the most advanced generative AI models to the world. Our platform powers multi-tenant serverless workloads and dedicated endpoints, enabling developers, enterprises, and researchers to harness the latest LLMs and multimodal, image, audio, video, and speech models at scale.

If you get a thrill from optimizing latency down to the last millisecond, this is your playground. You’ll work hands-on with tens of thousands of GPUs (H100s, H200s, GB200s, and beyond), figuring out how to fully utilize every FLOP and every gigabyte of memory.

You’ll collaborate directly with research teams to bring frontier models into production, making breakthroughs usable in the real world. Our team also works closely with the open source community, contributing to and leveraging projects like SGLang, vLLM, and NVIDIA Dynamo to push the boundaries of inference performance and efficiency.

Some of what you’ll work on
  • Build and optimize global and local request routing, ensuring low-latency load balancing across data centers and model engine pods.
  • Develop auto-scaling systems to dynamically allocate resources and meet strict SLOs across dozens of data centers.
  • Design systems for multi-tenant traffic shaping, tuning both resource allocation and request handling — including smart rate limiting and regulation — to ensure fairness and consistent experience across all users.
  • Engineer trade-offs between latency and throughput to serve diverse workloads efficiently.
  • Optimize prefix caching to reduce model compute and speed up responses.
  • Collaborate with ML researchers to bring new model architectures into production at scale.
  • Continuously profile and analyze system-level performance to identify bottlenecks and implement optimizations.
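To make the routing and prefix-caching bullets above concrete, here is a minimal sketch of one way those ideas combine: route a request to a pod that likely already holds the prompt's prefix in cache, falling back to the least-loaded pod when the cached pod is overloaded. This is an illustration only, not Together AI's actual implementation; the pod names, prefix length, and overload threshold are hypothetical.

```python
import hashlib


class PrefixAwareRouter:
    """Toy prefix-cache-aware load balancer.

    Requests sharing a prompt prefix are steered to the same pod so the
    model engine can reuse its KV cache; otherwise pick the least-loaded pod.
    """

    def __init__(self, pods, prefix_words=32, overload_factor=2):
        self.pods = list(pods)
        self.prefix_words = prefix_words
        self.overload_factor = overload_factor
        self.load = {p: 0 for p in self.pods}  # in-flight requests per pod
        self.cache_owner = {}                  # prefix hash -> pod

    def _prefix_key(self, prompt):
        # Hash only the leading words: that is the part a prefix cache reuses.
        head = " ".join(prompt.split()[: self.prefix_words])
        return hashlib.sha256(head.encode()).hexdigest()

    def route(self, prompt):
        key = self._prefix_key(prompt)
        pod = self.cache_owner.get(key)
        min_load = min(self.load.values())
        # Reuse the cache-owning pod unless it is far busier than the idlest pod.
        if pod is None or self.load[pod] > self.overload_factor * (min_load + 1):
            pod = min(self.pods, key=lambda p: self.load[p])
            self.cache_owner[key] = pod
        self.load[pod] += 1
        return pod

    def complete(self, pod):
        self.load[pod] -= 1
```

In a real system the cache-owner map would be replaced by feedback from the engines themselves (e.g. radix-tree hit statistics in SGLang or vLLM's automatic prefix caching), and the overload check by real SLO signals, but the latency/locality trade-off it encodes is the same one described above.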
What We’re Looking For
  • 5+ years of demonstrated experience building large-scale, fault-tolerant, distributed systems and API microservices.
  • Strong background in designing, analyzing, and improving efficiency, scalability, and stability of complex systems.
  • Excellent understanding of low-level OS concepts: multi-threading, memory management, networking, and storage performance.
  • Expert-level programming in one or more of: Rust, Go, Python, or TypeScript.
  • Knowledge of modern LLMs and generative models and how they are served in production is a plus.
  • Experience working with the open source ecosystem around inference is highly valuable; familiarity with SGLang, vLLM, or NVIDIA Dynamo will be especially handy.
  • Experience with Kubernetes or container orchestration is a strong plus.
  • Familiarity with GPU software stacks (CUDA, Triton, NCCL) and HPC technologies (InfiniBand, NVLink, MPI) is a plus.
  • Bachelor’s or Master’s degree in Computer Science, Computer Engineering, or related field, or equivalent practical experience.
Why Join Us?
  • Shape the core inference backbone that powers Together AI’s frontier models.
  • Solve performance-critical challenges in global request routing, load balancing, and large-scale resource allocation.
  • Work with state-of-the-art accelerators (H100s, H200s, GB200s) at global scale.
  • Partner with world-class researchers to bring new model architectures into production.
  • Collaborate with and contribute to the open source community, shaping the tools that advance the industry.
  • A culture of deep technical ownership and high impact — where your work makes models faster, cheaper, and more accessible.
  • Competitive compensation, equity, and benefits.
About Together AI

Together AI is a research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co-designing software, hardware, algorithms, and models. We have contributed leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancements such as FlashAttention, Hyena, FlexGen, and RedPajama. We invite you to join a passionate group of researchers and engineers on our journey to build the next generation of AI infrastructure.

Compensation

We offer competitive compensation, startup equity, health insurance, and other benefits. The US base salary range for this full-time position is $160,000 - $250,000 + equity + benefits. Our salary ranges are determined by location, level, and role. Individual compensation will be determined by experience, skills, and job-related knowledge.

Equal Opportunity

Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more.

Please see our privacy policy at https://www.together.ai/privacy  

Top Skills

CUDA
Go
NCCL
Python
Rust
Triton
TypeScript
