Senior System Software Engineer - GPU Performance

Reposted 3 Days Ago
Hiring Remotely in Santa Clara, CA, USA
In-Office or Remote
148K-288K Annually
Mid level
Artificial Intelligence • Computer Vision • Hardware • Robotics • Metaverse

NVIDIA is leading the way in groundbreaking developments in Artificial Intelligence, High Performance Computing and Visualization. The GPU, our invention, serves as the visual cortex of modern computers and is at the heart of our products and services. Our work opens up new universes to explore, enables amazing creativity and discovery, and powers what were once science fiction inventions from artificial intelligence to autonomous cars.

We are the GPU Communications Libraries and Networking team at NVIDIA. We deliver libraries such as NCCL, NVSHMEM, and UCX for deep learning and HPC. We are looking for a motivated performance engineer to influence the roadmap of our communication libraries. Today's DL and HPC applications have enormous compute demands and run at scales of up to tens of thousands of GPUs. The GPUs are connected by high-speed interconnects (e.g., NVLink, PCIe) within a node and by high-speed networking (e.g., InfiniBand, Ethernet) across nodes. Communication performance between the GPUs has a direct impact on end-to-end application performance, and the stakes are even higher at large scale. This is an outstanding opportunity for someone with an HPC and performance background to advance the state of the art in this space. Are you ready to contribute to the development of innovative technologies and help realize NVIDIA's vision?
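
Why scale raises the stakes can be sketched with a toy latency-bandwidth (alpha-beta) cost model for a ring allreduce. The constants below are illustrative assumptions, not measured NVLink or InfiniBand figures:

```python
# Toy alpha-beta cost model for a ring allreduce: each rank participates in
# 2*(n-1) steps, each moving a chunk of size (bytes / n). alpha is the
# per-message latency in seconds; beta is seconds per byte (1 / bandwidth).
def ring_allreduce_time(n_ranks: int, bytes_total: int,
                        alpha: float, beta: float) -> float:
    steps = 2 * (n_ranks - 1)
    chunk = bytes_total / n_ranks
    return steps * (alpha + chunk * beta)

# Illustrative numbers only: 5 us per-message latency, 100 GB/s link bandwidth.
alpha, beta = 5e-6, 1 / 100e9
small = ring_allreduce_time(1024, 1 << 20, alpha, beta)  # 1 MiB payload
large = ring_allreduce_time(1024, 1 << 30, alpha, beta)  # 1 GiB payload
# At small message sizes the alpha (latency) term dominates; at large sizes
# the beta (bandwidth) term does -- which is why the bottleneck shifts with scale.
```

Even this crude model shows why the same collective can be latency-bound on one workload and bandwidth-bound on another.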

What you will be doing:
  • Conduct in-depth performance characterization and analysis on large multi-GPU and multi-node clusters.

  • Study the interaction of our libraries with all HW (GPU, CPU, Networking) and SW components in the stack

  • Evaluate proofs of concept and conduct trade-off analyses when multiple solutions are available

  • Triage and root-cause performance issues reported by our customers

  • Collect large volumes of performance data; build tools and infrastructure to visualize and analyze the information

  • Collaborate with a very dynamic team across multiple time zones
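
The data-collection and analysis duties above can be illustrated with a short sketch that converts a raw allreduce timing into bandwidth figures, following the algorithm-vs-bus-bandwidth convention documented for nccl-tests; the sample timing is invented:

```python
# Convert a measured allreduce time into algorithm and bus bandwidth.
# busBW = algBW * 2*(n-1)/n is the nccl-tests convention that makes
# bandwidth comparable across operations and rank counts.
def allreduce_bandwidths(bytes_total: int, seconds: float, n_ranks: int):
    alg_bw = bytes_total / seconds                  # bytes per second
    bus_bw = alg_bw * 2 * (n_ranks - 1) / n_ranks
    return alg_bw, bus_bw

# Invented sample point: a 1 GiB allreduce across 8 GPUs in 25 ms.
alg, bus = allreduce_bandwidths(1 << 30, 0.025, 8)
print(f"algBW {alg / 1e9:.1f} GB/s, busBW {bus / 1e9:.1f} GB/s")
# → algBW 42.9 GB/s, busBW 75.2 GB/s
```

In practice a script like this would ingest thousands of such samples across message sizes and rank counts before plotting.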

What we need to see:
  • M.S. (or equivalent experience) or PhD in Computer Science, or related field with relevant performance engineering and HPC experience

  • 3+ years of experience with parallel programming and at least one communication runtime (MPI, NCCL, UCX, NVSHMEM)

  • Experience conducting performance benchmarking and triage on large scale HPC clusters

  • Good understanding of computer system architecture, HW-SW interactions, and operating systems principles (i.e., systems software fundamentals)

  • Ability to implement micro-benchmarks in C/C++ and to read and modify the code base when required

  • Ability to debug performance issues across the entire HW/SW stack. Proficient in a scripting language, preferably Python

  • Familiar with containers, cloud provisioning and scheduling tools (Kubernetes, SLURM, Ansible, Docker)

  • Adaptability and passion for learning new areas and tools. Flexibility to work and communicate effectively across different teams and time zones
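
As a rough illustration of the micro-benchmarking skills listed above, here is a minimal timing harness in Python (the posting's preferred scripting language); real communication micro-benchmarks would be written in C/C++ against MPI or NCCL, so treat this purely as a methodology sketch:

```python
import time
import statistics

def benchmark(fn, warmup: int = 3, reps: int = 20) -> dict:
    """Time fn() with warm-up iterations; report min and median seconds.

    The minimum filters out scheduler and cache noise; the median shows
    typical behaviour. Both matter when triaging performance regressions.
    """
    for _ in range(warmup):          # warm caches, allocators, page tables
        fn()
    samples = []
    for _ in range(reps):
        t0 = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - t0)
    return {"min": min(samples), "median": statistics.median(samples)}

stats = benchmark(lambda: sum(range(100_000)))
print(f"min {stats['min'] * 1e6:.0f} us, median {stats['median'] * 1e6:.0f} us")
```

The same warm-up/repeat/summarize structure carries over directly to C/C++ harnesses around MPI or NCCL calls.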

Ways to stand out from the crowd:
  • Practical experience with InfiniBand/Ethernet networks in areas like RDMA, topologies, and congestion control

  • Experience debugging network issues in large scale deployments

  • Familiarity with CUDA programming and/or GPUs

  • Experience with deep learning frameworks such as PyTorch or TensorFlow

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 152,000 USD - 241,500 USD for Level 3, and 184,000 USD - 287,500 USD for Level 4.

You will also be eligible for equity and benefits.

Applications for this job will be accepted at least until May 7, 2026.

This posting is for an existing vacancy. 

NVIDIA uses AI tools in its recruiting processes.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.

The Company
HQ: Santa Clara, CA
21,960 Employees
Year Founded: 1993

What We Do

NVIDIA’s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing — with the GPU acting as the brain of computers, robots, and self-driving cars that can perceive and understand the world. Today, NVIDIA is increasingly known as “the AI computing company.”
