At Fluidstack, we’re building the infrastructure for abundant intelligence. We partner with top AI labs, governments, and enterprises - including Mistral, Poolside, Black Forest Labs, Meta, and more - to unlock compute at the speed of light.
We’re working with urgency to make AGI a reality. As such, our team is highly motivated and committed to delivering world-class infrastructure. We treat our customers’ outcomes as our own, taking pride in the systems we build and the trust we earn. If you’re motivated by purpose, obsessed with excellence, and ready to work very hard to accelerate the future of intelligence, join us in building what's next.
About the Role

Forward Deployed Engineers (FDEs) serve as a customer's trusted advisor and technical counterpart throughout the lifecycle of their AI workloads.
FDEs work across multiple areas of the organization, including software engineering, SRE, infrastructure engineering, networking, solutions architecture, and technical support.
FDEs are expected to have strong technical and interpersonal communication skills. You should be able to concisely and accurately share knowledge, in both written and verbal form, with teammates, customers, and partners.
As an FDE, your responsibilities are aligned with the success of our customers, and you’ll work side-by-side with them to deeply understand their workloads and solve some of their most pressing challenges. A day’s work may include:
Deploying clusters of 1,000+ GPUs using custom-written playbooks, and modifying these tools as necessary to provide the right solution for each customer.
Validating correctness and performance of underlying compute, storage, and networking infrastructure, and working with providers to optimize these subsystems.
Migrating petabytes of data from public cloud platforms to local storage, as quickly and cost-effectively as possible.
Debugging issues anywhere in the stack, from “this server’s fan is blocked by a plastic bag” to “optimizing S3 dataloaders from buckets in different regions”.
Building internal tooling to decrease deployment time and increase cluster reliability, including automation where the customer benefits clearly outweigh the implementation overhead.
Supporting customers as part of an on-call rotation, up to two weeks per month.
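To give a flavor of the validation work described above, here is a minimal sketch (hypothetical, and not Fluidstack's actual tooling) of a health check that parses `nvidia-smi --query-gpu=index,temperature.gpu,power.draw --format=csv` output and flags GPUs outside expected thresholds; the threshold values and field names are illustrative assumptions.

```python
# Hypothetical sketch of a GPU fleet health check; not Fluidstack's actual tooling.
# Parses CSV output in the shape produced by:
#   nvidia-smi --query-gpu=index,temperature.gpu,power.draw --format=csv
import csv
import io


def flag_unhealthy_gpus(csv_text, max_temp_c=85, min_power_w=30):
    """Return a list of (gpu_index, reason) for GPUs outside thresholds."""
    problems = []
    reader = csv.DictReader(io.StringIO(csv_text), skipinitialspace=True)
    for row in reader:
        idx = row["index"]
        temp = float(row["temperature.gpu"])
        # Power values look like "68.50 W"; strip the unit suffix.
        power = float(row["power.draw [W]"].rstrip(" W"))
        if temp > max_temp_c:
            problems.append((idx, f"temperature {temp}C exceeds {max_temp_c}C"))
        if power < min_power_w:
            # Abnormally low draw can indicate a fallen-off-the-bus GPU.
            problems.append((idx, f"power draw {power}W below idle floor"))
    return problems


sample = """index, temperature.gpu, power.draw [W]
0, 41, 68.50 W
1, 92, 12.00 W
"""
print(flag_unhealthy_gpus(sample))
```

In practice a check like this would run across every node in a cluster (e.g. fanned out via Ansible or a Kubernetes DaemonSet) before handing hardware to a customer.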
What We Value

A customer-centric attitude, an accountability mindset, and a bias to action.
A track record of shipping clean, well-documented code in complex environments.
An ability to create structure from chaos, navigate ambiguity, and adapt to the dynamic nature of the AI ecosystem.
Strong technical and interpersonal communication skills, a low ego, and a positive mental attitude.
An ideal candidate meets at least the following requirements:
2+ years of SWE, SRE, DevOps, Sysadmin, and/or HPC engineering experience.
Great verbal and written communication skills in English.
Experience deploying and operating Kubernetes and/or SLURM clusters.
Experience writing Go, Python, and/or Bash.
Experience using Ansible, Terraform, and other automation or IaC tools.
Strong engineering background, preferably in Computer Science, Software Engineering, Math, Computer Engineering, or similar fields.
Exceptional candidates have one or more of the following experiences:
You have built and operated an AI workload at 1000+ GPU scale.
You have built multi-tenant, hyperscale Kubernetes-based services.
You have physically deployed infrastructure in a datacenter, managed bare metal hardware via MaaS or Netbox, etc.
You have deployed and managed multi-tenant InfiniBand or RoCE networks.
You have deployed and managed petabyte-scale all-flash storage systems, including DDN, VAST, and/or Weka, or open-source tools such as Ceph or Lustre.
Interview Process

If your application passes the screening stage, you will be invited to a 15-minute hiring manager call. If you clear this initial phone interview, you will enter the main process, which consists of three 45-minute interviews: a technical deep dive, a customer communications and debugging session, and a culture fit interview.
Our goal is to finish the main process within one week. All interviews will be conducted virtually.
Salary & Benefits

Competitive total compensation package (salary + equity).
Retirement or pension plan, in line with local norms.
Health, dental, and vision insurance.
Generous PTO policy, in line with local norms.
The base salary range for this position is $100,000 - $250,000 per year, depending on experience, skills, qualifications, and location. This range represents our good faith estimate of the compensation for this role at the time of posting. Total compensation may also include equity in the form of stock options.
We are committed to pay equity and transparency.
Fluidstack is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Fluidstack will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.