NVIDIA is leading the way in groundbreaking developments in Artificial Intelligence, High-Performance Computing, and Visualization. The GPU, our invention, serves as the visual cortex of modern computers and is at the heart of our products and services. Our work opens up new universes to explore, enables amazing creativity and discovery, and powers what were once science-fiction inventions, from artificial intelligence to autonomous cars.
We are the GPU Communications Libraries and Networking team at NVIDIA. We deliver libraries such as NCCL, NVSHMEM, and UCX for Deep Learning and HPC, and we are looking for a motivated performance engineer to influence the roadmap of our communication libraries. Today's DL and HPC applications have enormous compute demands and run at scales of up to tens of thousands of GPUs. The GPUs are connected by high-speed interconnects (e.g., NVLink, PCIe) within a node and by high-speed networking (e.g., InfiniBand, Ethernet) across nodes. Communication performance between GPUs has a direct impact on end-to-end application performance, and the stakes are even higher at large scale. This is an outstanding opportunity for someone with an HPC and performance background to advance the state of the art in this space. Are you ready to contribute to the development of innovative technologies and help realize NVIDIA's vision?
What you will be doing:
- Conduct in-depth performance characterization and analysis on large multi-GPU and multi-node clusters
- Study the interaction of our libraries with all HW (GPU, CPU, networking) and SW components in the stack
- Evaluate proof-of-concepts and conduct trade-off analyses when multiple solutions are available
- Triage and root-cause performance issues reported by our customers
- Collect large volumes of performance data; build tools and infrastructure to visualize and analyze it
- Collaborate with a dynamic team across multiple time zones
What we need to see:
- M.S. or Ph.D. (or equivalent experience) in Computer Science or a related field, with relevant performance engineering and HPC experience
- 3+ years of experience with parallel programming and at least one communication runtime (MPI, NCCL, UCX, NVSHMEM)
- Experience conducting performance benchmarking and triage on large-scale HPC clusters
- Good understanding of computer system architecture, HW-SW interactions, and operating systems principles (i.e., systems software fundamentals)
- Ability to implement micro-benchmarks in C/C++ and to read and modify the code base when required (a minimal sketch follows this list)
- Ability to debug performance issues across the entire HW/SW stack; proficiency in a scripting language, preferably Python
- Familiarity with containers and with cloud provisioning and scheduling tools (Kubernetes, SLURM, Ansible, Docker)
- Adaptability and passion for learning new areas and tools; flexibility to work and communicate effectively across different teams and time zones
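To illustrate the kind of micro-benchmark referenced above, here is a minimal sketch of an MPI ping-pong latency test in C++. The message size, iteration counts, and output format are illustrative assumptions for this sketch, not part of NVIDIA's libraries or tooling.

```cpp
// Minimal MPI ping-pong latency micro-benchmark (illustrative sketch).
// Ranks 0 and 1 exchange small messages; half the round-trip time
// approximates one-way latency.
#include <mpi.h>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0) std::fprintf(stderr, "run with at least 2 ranks\n");
        MPI_Finalize();
        return 1;
    }

    const int warmup = 100, iters = 1000;   // arbitrary choices for the sketch
    const int msg_size = 8;                 // bytes per message
    std::vector<char> buf(msg_size, 0);

    double start = 0.0;
    for (int i = 0; i < warmup + iters; ++i) {
        if (i == warmup) {                  // start timing after warm-up
            MPI_Barrier(MPI_COMM_WORLD);
            start = MPI_Wtime();
        }
        if (rank == 0) {
            MPI_Send(buf.data(), msg_size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf.data(), msg_size, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf.data(), msg_size, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf.data(), msg_size, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    if (rank == 0) {
        // Each iteration is one round trip; divide by 2 for one-way latency.
        double one_way_us = (MPI_Wtime() - start) / iters / 2.0 * 1e6;
        std::printf("one-way latency: %.2f us for %d-byte messages\n",
                    one_way_us, msg_size);
    }
    MPI_Finalize();
    return 0;
}
```

Build with an MPI compiler wrapper (e.g., mpicxx) and launch with two or more ranks (e.g., mpirun -np 2); production benchmarks would additionally sweep message sizes and report bandwidth as well as latency.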
Ways to stand out from the crowd:
- Practical experience with InfiniBand/Ethernet networks in areas such as RDMA, topologies, and congestion control
- Experience debugging network issues in large-scale deployments
- Familiarity with CUDA programming and/or GPUs
- Experience with Deep Learning frameworks such as PyTorch and TensorFlow
The base salary range is 148,000 USD - 276,000 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.
You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
What We Do
NVIDIA’s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing — with the GPU acting as the brain of computers, robots, and self-driving cars that can perceive and understand the world. Today, NVIDIA is increasingly known as “the AI computing company.”