Software Engineer, Inference Platform

San Francisco, CA
In-Office
Senior level
Artificial Intelligence • Software
About Fluidstack

At Fluidstack, we’re building the infrastructure for abundant intelligence. We partner with top AI labs, governments, and enterprises, including Mistral, Poolside, Black Forest Labs, Meta, and more, to unlock compute at the speed of light.

We’re working with urgency to make AGI a reality. As such, our team is highly motivated and committed to delivering world-class infrastructure. We treat our customers’ outcomes as our own, taking pride in the systems we build and the trust we earn. If you’re motivated by purpose, obsessed with excellence, and ready to work very hard to accelerate the future of intelligence, join us in building what's next.

About the Role

Inference is now the defining cost and latency bottleneck for frontier AI. Fluidstack’s Inference Platform team owns the serving layer that sits between our global accelerator supply and the production workloads our customers run on it: LLM serving frameworks, KV cache infrastructure, disaggregated prefill/decode pipelines, and Kubernetes-based orchestration across multi-datacenter footprints.

This is a hands-on IC role at the intersection of distributed systems, model optimization, and serving infrastructure. You’ll own end-to-end inference deployments for frontier AI labs and our inference product, drive measurable improvements in throughput, cost-per-token, and time-to-first-token, and contribute to the platform architecture choices that determine how Fluidstack deploys across tens of thousands of accelerators.
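For a concrete flavor of the serving layer this team owns, here is a minimal sketch of bringing a model up behind vLLM's offline Python API, one of the frameworks named above. The model name and every tuning value below are illustrative assumptions, not a Fluidstack configuration:

# Minimal vLLM bring-up sketch; all values here are placeholders.
from vllm import LLM, SamplingParams

llm = LLM(
    model="mistralai/Mistral-7B-Instruct-v0.3",  # illustrative model
    tensor_parallel_size=2,        # shard weights across two GPUs
    gpu_memory_utilization=0.90,   # fraction of VRAM the engine may claim
    max_num_seqs=256,              # cap on concurrently scheduled requests
)

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Explain KV caching in one sentence."], params)
print(outputs[0].outputs[0].text)

In production, knobs like the parallelism degree, memory fraction, and batching limits above are exactly what drive the throughput and cost-per-token outcomes this role is accountable for.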


You will:
  • Own inference deployments end-to-end: from initial configuration and performance tuning to production SLA maintenance and incident response.

  • Drive measurable improvements in throughput, TTFT, and cost-per-token across diverse model families (dense transformers, mixture-of-experts, multi-modal) and customer workload patterns (see the measurement sketch after this list).

  • Build and operate KV cache and scheduling infrastructure to maximize utilization across concurrent requests.

  • Implement and validate disaggregated prefill/decode pipelines and the Kubernetes orchestration that supports them at scale.

  • Profile and resolve bottlenecks at the compute, memory, and communication layers; instrument deployments for end-to-end observability.

  • Partner with customers to translate their model architectures, access patterns, and latency requirements into deployment configurations and upstream platform improvements.

  • Contribute to inference platform architecture and roadmap, with a focus on reducing deployment complexity, improving hardware utilization, and expanding support for new model classes and accelerators.

  • Participate in an on-call rotation (up to one week per month) to maintain the reliability and SLA commitments of production deployments.
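As referenced in the throughput bullet above, here is a rough sketch of how TTFT and decode throughput might be measured against an OpenAI-compatible endpoint (vLLM, SGLang, and TensorRT-LLM all expose one). The base URL and model name are placeholders, and counting one token per streamed chunk is an approximation rather than an exact token count:

import time
from openai import OpenAI

# Placeholder endpoint; any OpenAI-compatible serving frontend works.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

start = time.perf_counter()
first_token_at = None
chunks = 0
stream = client.chat.completions.create(
    model="served-model",  # placeholder name
    messages=[{"role": "user", "content": "Summarize KV caching."}],
    max_tokens=256,
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        if first_token_at is None:
            first_token_at = time.perf_counter()  # first visible token
        chunks += 1

end = time.perf_counter()
ttft = first_token_at - start
decode_tps = (chunks - 1) / (end - first_token_at) if chunks > 1 else 0.0
print(f"TTFT: {ttft * 1000:.1f} ms, decode: {decode_tps:.1f} tok/s")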


Basic Qualifications
  • 5+ years of professional software engineering experience with a track record of shipping production-quality systems.

  • Strong programming skills in Python and/or Go.

  • Hands-on production experience with at least one LLM serving framework (vLLM, SGLang, TensorRT-LLM, TGI, or equivalent).

  • Working knowledge of PyTorch or JAX and an understanding of how model architecture choices affect inference characteristics.

  • Experience deploying and operating GPU workloads on Kubernetes at production scale, including autoscaling and resource scheduling.

  • Solid understanding of GPU memory hierarchies, compute parallelism, and the tradeoffs across tensor, pipeline, and expert parallelism strategies (a back-of-envelope sizing sketch follows this list).

  • Ability to create structure from ambiguity and communicate technical tradeoffs clearly to both engineering peers and customers.

  • Great written and verbal communication skills in English.
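The parallelism bullet above deserves a worked example. Here is a back-of-envelope KV cache sizing, using Llama-3-70B-like shapes (80 layers, 8 KV heads under grouped-query attention, head dimension 128) in FP16 as illustrative assumptions:

# Bytes of KV cache per token = 2 (K and V) * layers * kv_heads
#                               * head_dim * bytes per element.
def kv_bytes_per_token(layers: int, kv_heads: int, head_dim: int,
                       dtype_bytes: int = 2) -> int:
    return 2 * layers * kv_heads * head_dim * dtype_bytes

per_tok = kv_bytes_per_token(layers=80, kv_heads=8, head_dim=128)
ctx = 8192  # tokens of context per request
print(f"{per_tok / 1024:.0f} KiB/token, "
      f"{per_tok * ctx / 2**30:.2f} GiB per 8K-token request")
# Prints: 320 KiB/token, 2.50 GiB per 8K-token request.

Under tensor parallelism the cache shards with the KV heads, so an 8-way deployment holds one eighth of this per GPU; that freed VRAM is what buys larger batches and better cost-per-token.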


Preferred Qualifications
  • Production experience with disaggregated prefill/decode architectures (NVIDIA Dynamo, llm-d, or equivalent), including scheduling policies and network fabric configuration.

  • Deep familiarity with KV cache strategies: RadixAttention, slab-based memory allocators, cross-request prefix sharing, and cache-aware scheduling.

  • Experience with multi-node GPU inference across InfiniBand or RoCE fabrics, including NCCL collective communication tuning.

  • Custom kernel or operator development experience (e.g., CUDA, Triton, torch.compile, Pallas, or equivalent).

  • Contributions to open-source inference engines (vLLM, SGLang, TGI, TensorRT-LLM, or similar).

  • Hands-on experience with quantization tooling: GPTQ, AWQ, FP8 via llm-compressor, or AutoGPTQ.

  • Knowledge of speculative decoding implementations (Medusa, EAGLE-3, draft-model approaches) and their performance/quality tradeoffs (see the sketch after this list).

  • Experience optimizing and adapting model implementations for non-NVIDIA accelerators and their ecosystems: AMD, TPU, Trainium/Inferentia, Cerebras, Groq, and other custom ASICs.
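To make the speculative decoding tradeoff mentioned above concrete, here is a toy expected-speedup model from the speculative sampling literature (Leviathan et al., 2023): with per-token acceptance rate alpha and draft length gamma, the expected tokens emitted per target-model verification step is (1 - alpha^(gamma+1)) / (1 - alpha). This ignores draft-model latency, so it is an upper bound rather than a prediction for any particular implementation:

def expected_tokens_per_step(alpha: float, gamma: int) -> float:
    # Expected tokens accepted (plus one corrected/bonus token) per
    # verification pass; alpha = 1.0 means every draft token accepted.
    if alpha >= 1.0:
        return gamma + 1.0
    return (1.0 - alpha ** (gamma + 1)) / (1.0 - alpha)

for alpha in (0.6, 0.8, 0.9):
    print(f"alpha={alpha}: {expected_tokens_per_step(alpha, 4):.2f} tok/step")
# alpha=0.8 with a 4-token draft yields about 3.36 tokens per step.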


Salary & Benefits
  • Competitive total compensation package (salary + equity).

  • Retirement or pension plan, in line with local norms.

  • Health, dental, and vision insurance.

  • Generous PTO policy, in line with local norms.

The base salary range for this position is $165,000 – $500,000 per year, depending on experience, skills, qualifications, and location. This range represents our good faith estimate of the compensation for this role at the time of posting. Total compensation may also include equity in the form of stock options.

We are committed to pay equity and transparency.

Fluidstack is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Fluidstack will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

You will receive a confirmation email once your application has been successfully received. If there is an error with your submission and you did not receive a confirmation email, please email [email protected] with your resume/CV, the role you've applied for, and the date you submitted your application; someone from our recruiting team will be in touch.

Top Skills

CUDA
Go
JAX
Kubernetes
NCCL
Python
PyTorch
Triton
The Company
HQ: London
30 Employees
Year Founded: 2017

What We Do

Instantly reserve dedicated clusters of NVIDIA H200s and GB200s for any scale to supercharge your training and inference workflows.
