Software Engineer — GPU Networking & Distributed Systems

2 Locations
Hybrid
165K-330K Annually
Mid level
Software

ABOUT BASETEN

Baseten powers mission-critical inference for the world's most dynamic AI companies, like Cursor, Notion, OpenEvidence, Abridge, Clay, Gamma and Writer. By uniting applied AI research, flexible infrastructure, and seamless developer tooling, we enable companies operating at the frontier of AI to bring cutting-edge models into production. We're growing quickly and recently raised our $300M Series E, backed by investors including BOND, IVP, Spark Capital, Greylock, and Conviction. Join us and help build the platform engineers turn to when shipping AI products.

At Baseten, we are building the global operating system for distributed, heterogeneous AI hardware. We believe that as LLM and multi-modal workloads scale, the network is the computer. We are looking for foundational engineers to lead our GPU Networking efforts, making RDMA a first-class building block in our infrastructure and unlocking the next generation of distributed inference optimizations.

THE OPPORTUNITY

Networking and compute are no longer separate disciplines; they are converging. The massive throughput of H100, B200, and NVL72 architectures enables and demands a new approach where communication is co-optimized alongside computation. We are entering an era where the network is an active accelerator, leveraging smart hardware offloads and direct interconnects to ensure that data movement operates at wire-speed.

In this role, you will go beyond network configuration to architect the software fabric that unifies thousands of GPUs into a cohesive operating system. While you will leverage the best of the open-source ecosystem, you won't be limited by it. Where off-the-shelf solutions stop, you will build from scratch, engineering the primitives required to co-optimize communication and compute for Disaggregated Serving, Wide Expert Parallelism (WideEP), and lightning-fast cold starts.

WHAT YOU'LL DO

  • Make RDMA First-Class: You will work on integrating RDMA/RoCE/InfiniBand capabilities directly into our inference stack, helping us move beyond TCP/IP to unlock order-of-magnitude improvements in bandwidth and latency.

  • Optimize Distributed Inference: You will implement and tune the networking layers necessary for efficient Disaggregated KV Cache Offload and WideEP, ensuring seamless communication across NVLink and InfiniBand for our MoE models.

  • Enable Serverless-Grade Startup Speeds for LLMs: You will work deeply with checkpointing and storage mechanisms to enable sub-10-second startup for trillion-parameter models.

  • Deep-Dive into Hardware: You will characterize and validate networking performance on bleeding-edge clusters (H100/H200, B200/B300, GB200/GB300 NVL72), writing the acceptance tests that ensure our hardware delivers peak achievable throughput and minimal latency.

  • Build Observability: You will design the tools that let us visualize packet flow, congestion, and effective bandwidth across the GPU interconnects, helping us diagnose complex distributed system behaviors.

  • Optimize Kernels: You will work with communication libraries (NCCL, NVSHMEM) and potentially write custom communication kernels to overlap compute and data transfer.

WHO YOU ARE

  • You have deep experience with high-performance networking protocols (InfiniBand, RoCE v2) and understand the physics of data movement.

  • You are fluent in C++ or Python, with the ability to bridge the gap between high-level logic and hardware. You have a deep understanding of the memory hierarchy in modern NVIDIA architectures (H100/Blackwell) and know how to optimize for it.

  • You like going deep. You aren't afraid to dive into TensorRT-LLM source code, write custom C++ / Python bindings, or debug NVLink topology issues.

  • You know when to use an off-the-shelf solution and when we need to build a custom solution because the upstream tools (like standard Kubernetes networking) are too slow for our needs.

HIGHLY PREFERRED

  • Deep knowledge of NCCL, NVSHMEM, and UCX.

  • Experience with GPUDirect Storage (GDS) or high-performance filesystems like Weka or 3FS.

  • Familiarity with TensorRT-LLM, vLLM, or SGLang.

  • Experience running low-level benchmarks to "qualify" new hardware clusters.

WHY JOIN THE MODEL PERFORMANCE TEAM?

  • Bleeding Edge Hardware: We are preparing to bring Blackwell (B200/B300) and then Rubin architectures online. You will be one of the first engineers in the industry optimizing networking for NVL72/GB300 racks.

  • We go deep: We operate at every depth. Whether it’s tuning hardware interconnects, writing custom communication kernels, or designing distributed inference strategies, we work across the entire stack to deliver performance that goes above and beyond.

  • High Impact: The networking optimizations you build will directly enable features that no one else in the industry has fully mastered yet, like seamless multi-node WideEP and instant model hydration.

BENEFITS

  • Competitive compensation, including meaningful equity.

  • 100% coverage of medical, dental, and vision insurance for employees and dependents.

  • Generous PTO policy, including a company-wide Winter Break (our offices are closed from Christmas Eve to New Year's Day!).

  • Paid parental leave.

  • Company-facilitated 401(k).

  • Exposure to a variety of ML startups, offering unparalleled learning and networking opportunities.

Apply now to embark on a rewarding journey in shaping the future of AI! If you are a motivated individual with a passion for machine learning and a desire to be part of a collaborative and forward-thinking team, we would love to hear from you.

At Baseten, we are committed to fostering a diverse and inclusive workplace. We provide equal employment opportunities to all employees and applicants without regard to race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, genetic information, disability, or veteran status.

We are an Equal Opportunity Employer and will consider qualified applicants with criminal histories in a manner consistent with applicable law (by example, the requirements of the San Francisco Fair Chance Ordinance, where applicable).

Top Skills

3FS
C++
GPUDirect Storage (GDS)
InfiniBand
NCCL
NVLink
NVSHMEM
Python
RDMA
RoCE v2
SGLang
TCP/IP
TensorRT-LLM
UCX
vLLM
Weka

The Company
59 Employees

What We Do

At Baseten we provide all the infrastructure you need to deploy and serve ML models performantly, scalably, and cost-efficiently. Get started in minutes, and avoid getting tangled in complex deployment processes. You can deploy best-in-class open-source models and take advantage of optimized serving for your own models. We also offer horizontally scalable services that take you from prototype to production, with light-speed inference on infra that autoscales with your traffic. Best in class doesn't mean breaking the bank: run your models on the best infrastructure without running up costs by taking advantage of our scale-to-zero feature.
