Forward Deployed Engineer (Inference & Post-Training)

San Francisco, CA, USA
In-Office
270K-300K Annually
Senior level
Artificial Intelligence • Information Technology
About the role

As a Forward Deployed Engineer (FDE) focused on Inference & Post-Training, you will be a hands-on technical partner to our most strategic customers: production AI teams looking to leverage high-quality models and run inference at scale. For us, FDE is not a replacement for a Solutions Architect; you will partner with our SAs as a deep-domain specialist in inference optimization, fine-tuning pipelines, and production deployment. As key contributors to the CX, Engineering, and Sales organizations, FDEs add tremendous value by ensuring we can meet the requirements of our most complex POCs, facilitating successful platform adoption, and guiding tailored optimization efforts, directly impacting customer success, company growth, and the hardening of our core platform.

Responsibilities
  • Inference Engine Optimization: Select, configure, and optimize the inference engine best suited to the hardware, model architecture, and workload profile.
  • Configuration & Performance Tuning: Develop configuration updates to win critical POCs and benchmarks and to optimize customer deployments; tune the KV cache, apply speculative decoding, and select tensor parallelism and quantization strategies to hit throughput and latency targets.
  • Post-Training & Fine-Tuning: Drive hands-on RL training runs and optimize system design; guide customers through LoRA, SFT, DPO, RLHF, and GRPO pipelines from experimentation through production.
  • Strategic Customer Alignment: Act as the primary technical point of contact for aligned strategic accounts — monitoring and optimizing endpoint configurations, helping customers get the most out of the platform, and collaborating to ensure we hit critical milestones.
  • Opinionated Onboarding: Establish direct alignment with strategic customers at onboarding; ensure the right inference and post-training configurations are in place from day one to improve time-to-value.
  • Product Feedback Loop: Directly influence our software and model roadmap by surfacing insights from the field. Contribute back to the product where needed to support customer requirements or drive a better experience. Drive early feature and research adoption with strategic logos.
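The tuning levers named above interact through simple memory arithmetic. As a rough illustration (a generic sketch, not Together AI code, using assumed Llama-3-70B-style numbers), here is how KV cache size and tensor parallelism bound the number of tokens an engine can keep cached:

```python
def kv_cache_bytes_per_token(num_layers: int, num_kv_heads: int,
                             head_dim: int, bytes_per_elem: int = 2) -> int:
    """Bytes of KV cache one token occupies: a K and a V vector per layer."""
    return 2 * num_layers * num_kv_heads * head_dim * bytes_per_elem

def max_cacheable_tokens(free_bytes_per_gpu: int, per_token_bytes: int,
                         tensor_parallel: int = 1) -> int:
    """Tokens that fit in cache; KV heads are sharded across TP ranks."""
    return free_bytes_per_gpu * tensor_parallel // per_token_bytes

# Assumed Llama-3-70B-style config: 80 layers, 8 GQA KV heads,
# head_dim 128, fp16 (2-byte) cache entries
per_tok = kv_cache_bytes_per_token(80, 8, 128, bytes_per_elem=2)
print(per_tok)  # 327680 bytes (~320 KiB per token)

# 40 GiB of free VRAM per GPU, cache sharded over tensor_parallel=4
print(max_cacheable_tokens(40 * 2**30, per_tok, tensor_parallel=4))  # 524288
```

Halving `bytes_per_elem` (an fp8 KV cache) doubles the cacheable token budget, which is one reason KV-cache dtype is often an early tuning knob alongside parallelism degree.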
Qualifications
  • Experience: 5+ years in a technical role, with a strong focus on inference systems, open-source LLM deployment, or post-training workflows.
  • Inference Engine Depth: Expert-level, hands-on experience with inference engines (e.g., vLLM, TensorRT-LLM, SGLang); ability to diagnose and resolve performance issues at the engine level.
  • Inference Optimization: Deep knowledge of KV cache tuning, speculative decoding, tensor parallelism, pipeline parallelism, and quantization techniques.
  • Post-Training Knowledge: Hands-on experience with fine-tuning and post-training pipelines, including LoRA, SFT, DPO, RLHF, and GRPO; ability to advise on system design.
  • Model Landscape Awareness: Broad knowledge of state-of-the-art open-source models and strong judgment on model selection for specific customer use cases, hardware profiles, and performance targets.
  • Coding Proficiency: Strong Python skills; comfortable working in production environments.
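On the post-training side, the core idea behind LoRA (sketched generically here, not as a Together AI pipeline) is that the base weight W stays frozen while a low-rank update B·A, scaled by alpha/r, is trained; per layer this shrinks the trainable parameter count from d_out·d_in to r·(d_in + d_out):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16.0):
    """y = x W^T + (alpha/r) * x A^T B^T, with only A and B trainable.

    Shapes: W (d_out, d_in), A (r, d_in), B (d_out, r).
    B starts at zero, so training begins exactly at the base model.
    """
    r = A.shape[0]
    return x @ W.T + (x @ A.T @ B.T) * (alpha / r)

rng = np.random.default_rng(0)
d_in, d_out, r = 64, 32, 8
W = rng.standard_normal((d_out, d_in))   # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01  # small random init
B = np.zeros((d_out, r))                 # zero init: no change at step 0
x = rng.standard_normal((4, d_in))

# With B = 0 the adapter contributes nothing: identical to the base layer
assert np.allclose(lora_forward(x, W, A, B), x @ W.T)
```

At rank 8 with these assumed dimensions, the adapter trains 8 × (64 + 32) = 768 parameters per layer versus 2,048 for the full weight, which is what makes LoRA practical for customer fine-tuning on modest hardware.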
About Together AI

Together AI is a research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co-designing software, hardware, algorithms, and models. We have contributed to leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancements such as FlashAttention, Hyena, FlexGen, and RedPajama. We invite you to join a passionate group of researchers on our journey in building the next generation of AI infrastructure. 

Compensation

We offer competitive compensation, startup equity, health insurance, and other benefits, as well as flexibility in terms of remote work. The US base salary range for this full-time position is $270,000 - $300,000 OTE + equity + benefits. Our salary ranges are determined by location, level, and role. Individual compensation will be determined by experience, skills, and job-related knowledge.

Equal Opportunity

Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more.

Please see our Privacy Policy at https://www.together.ai/privacy

The Company
San Francisco, California
84 Employees
Year Founded: 2022

What We Do

Together AI is a research-driven artificial intelligence company. We contribute leading open-source research, models, and datasets to advance the frontier of AI. Our decentralized cloud services empower developers and researchers at organizations of all sizes to train, fine-tune, and deploy generative AI models. We believe open and transparent AI systems will drive innovation and create the best outcomes for society.
