VP, Software Engineering

Posted 2 Days Ago
San Francisco, CA
In-Office
$280K-$450K Annually
Senior level
Artificial Intelligence • Software
About Fluidstack

At Fluidstack, we’re building the infrastructure for abundant intelligence. We partner with top AI labs, governments, and enterprises (including Mistral, Poolside, Black Forest Labs, Meta, and more) to unlock compute at the speed of light.

We’re working with urgency to make AGI a reality. As such, our team is highly motivated and committed to delivering world-class infrastructure. We treat our customers’ outcomes as our own, taking pride in the systems we build and the trust we earn. If you’re motivated by purpose, obsessed with excellence, and ready to work very hard to accelerate the future of intelligence, join us in building what's next.

About the Role

As VP of Software Engineering, you will own the full software and SRE organizations responsible for our managed orchestration (Kubernetes and SLURM) offerings as well as our managed inference services. You will set the technical direction, build and scale the team, and personally drive architectural decisions that determine how the world's leading AI organizations train and serve their models.

You still ship production systems at scale and can go deep on a kernel scheduler, an NCCL collective, or a KV cache implementation when it matters. You think in terms of system boundaries, failure modes, and second-order effects. You know how to grow engineering organizations without losing velocity, and you ensure we strike the right balance between fast delivery and reliable operation.

You Will
  • Own and scale the engineering organization, including both Software Engineers and SREs, across all three product areas: managed Kubernetes, managed SLURM, and our managed inference product.

  • Set the technical and architectural roadmap for cluster orchestration and AI inference serving, from bare-metal provisioning through control-plane design and developer-facing APIs.

  • Drive reliability, performance, and scalability standards across the stack, owning SLAs for customers running production AI training and inference workloads on Fluidstack infrastructure.

  • Partner closely with Product, Sales, and Customer Success to translate customer needs from top AI labs and enterprises into concrete engineering investments and prioritization decisions.

  • Establish engineering culture, hiring bar, and operational practices that attract and retain exceptional talent in a competitive market.

  • Remain hands-on at the level of design reviews, architecture decisions, and critical incident response, maintaining deep technical credibility with the team.

  • Build and maintain a high-trust, high-accountability team environment where engineers own outcomes end-to-end, from design through production operations.

Basic Qualifications
  • 10+ years of software engineering or systems engineering experience, with at least 4 years managing engineering teams including both Software Engineers and SREs.

  • Deep hands-on experience with Kubernetes and SLURM in production environments, including scheduling internals, resource management, and multi-tenant cluster operations.

  • Strong background in bare-metal infrastructure and GPU/accelerator systems, including server imaging, networking (InfiniBand/RoCE), firmware, and hardware lifecycle management.

  • Demonstrated ability to build and scale AI inference serving infrastructure, including familiarity with inference optimization techniques (quantization, continuous batching, speculative decoding, KV cache management).

  • Track record of building and growing high-performing engineering organizations of 40+ engineers across complex, cross-functional domains.

  • Strong communicator who can represent technical strategy to executive leadership, customers, and board-level stakeholders.

Preferred Qualifications
  • Prior experience in an AI infrastructure neocloud, hyperscaler (AWS, GCP, Azure), or AI lab (OpenAI, Anthropic, DeepMind) in a senior technical or engineering leadership role.

  • Hands-on experience with large-scale GPU cluster operations: multi-node training job scheduling, collective communication tuning, topology-aware placement, and fault recovery.

  • Familiarity with frontier model inference serving frameworks (vLLM, TensorRT-LLM, SGLang) and the systems-level tradeoffs involved in latency, throughput, and cost optimization.

  • Experience with GPU NPI (new product introduction) processes, cluster bring-up, and hardware qualification at scale.

  • Exposure to agentic inference workloads and the distinct systems requirements they impose relative to batch or streaming inference.

  • Contributions to open-source infrastructure projects in the Kubernetes, SLURM, or MLOps ecosystems.

Salary and Benefits

The base salary range for this role is $280,000 to $450,000. Starting salary will be determined based on relevant experience, skills, and market location. In addition to base salary, this role includes a meaningful equity package, performance bonus, and the following benefits:

  • Competitive total compensation package (cash + equity)

  • Health, dental, and vision insurance

  • Retirement plan

  • Generous PTO policy

We are committed to pay equity and transparency.

Fluidstack is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Fluidstack will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

You will receive a confirmation email once your application has been successfully submitted. If there is an error with your submission and you did not receive a confirmation email, please email [email protected] with your resume/CV, the role you've applied for, and the date you submitted your application; someone from our recruiting team will be in touch.

Top Skills

GPU
InfiniBand
Kubernetes
NVMe
RoCE
SLURM
TensorRT-LLM
vLLM
The Company
HQ: London
30 Employees
Year Founded: 2017

What We Do

Instantly reserve dedicated clusters of NVIDIA H200s and GB200s for any scale to supercharge your training and inference workflows.


