As a member of the System Software team, you'll be responsible for building profiling solutions for large-scale, real-world applications running on GPU compute clusters, making them run efficiently and improving the experience for customers as well as the engineers supporting the clusters. Much of our software development focuses on profiling a varied set of applications running on different GPU clusters, and on accurately measuring and presenting the user experience on these clusters with actionable insights for customers and supporting engineers. Building a fault-tolerant distributed system that minimizes data loss and limits time spent on reactive operational work is key to both product quality and dynamic day-to-day work. We promote self-direction on meaningful projects, while also striving to build an environment that provides the support and mentorship needed to learn and grow.
What you'll be doing:
- Work in an agile, fast-paced global environment to gather requirements, architect, design, implement, test, deploy, release, and support large-scale distributed systems infrastructure with monitoring, logging, visualization, and alerting capabilities, delivered at committed uptime
- Build internal profiling tools for real-world ML/DL applications running on HPC GPU clusters, enabling failure and efficiency analysis that helps improve current and future generations of GPU clusters and associated hardware
- Stay on top of state-of-the-art advances in the ML/DL domain, and work with application owners and research teams to add or improve profiling support for current and future features
What we need to see:
- BS or higher in Computer Science or a related field (or equivalent experience) and 5+ years of software development experience (in Python)
- Experience with GitLab (or another source code management system): branching/release workflows, CI/CD pipelines, etc.
- Solid understanding of algorithms, data structures, and runtime/space complexity
- Experience working with distributed systems software architecture
- Basic understanding of HPC GPU clusters and Slurm
- Basic understanding of machine learning concepts and terminology
- Background with databases, both SQL and NoSQL (Prometheus, Elasticsearch, OpenSearch, Redis, etc.)
- Experience with distributed data pipelines, telemetry, visualization (Kibana, Grafana, etc.), and alerting (PagerDuty, etc.)
Ways to stand out from the crowd:
- Experience debugging functional and performance issues in HPC GPU clusters
- Background in running and instrumenting distributed LLM training on a multi-GPU HPC cluster
- Knowledge of LLM training features and libraries: checkpointing, parallelism, PyTorch, Megatron-LM, NCCL
- Experience with HPC schedulers such as Slurm
- Background with OpenTelemetry
The base salary range is 148,000 USD - 276,000 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.
You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
What We Do
NVIDIA’s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing — with the GPU acting as the brain of computers, robots, and self-driving cars that can perceive and understand the world. Today, NVIDIA is increasingly known as “the AI computing company.”