Help design and ship an Always-On, low-overhead GPU profiling service that runs in production, scales across cluster environments, and delivers actionable insights for ML workloads. You will lead the architecture and hands-on delivery across system software, drivers, and CUDA to make profiling continuously available and reliable.
What you’ll be doing:
Design the architecture for an Always-On profiling service, defining interfaces, data flows, and scalability guarantees for multi-process/GPU/node systems.
Drive low-overhead, high-reliability implementations in C/C++, including IPC/shared-memory transport, within bounded CPU and memory budgets.
Lead end-to-end feature delivery spanning user-mode components, driver/platform layers, and performance counter/trace providers.
Establish profiling models that integrate with existing ML/AI workflows (e.g., PyTorch/XLA) to turn low-level signals into actionable insights.
Set technical direction for an engineering team; mentor engineers, drive technical planning to mitigate architectural risks, and align roadmaps across internal and external partners.
What we need to see:
BS or MS in Computer Engineering, Computer Science, or a related field, or equivalent experience.
15+ years of system-level C/C++ development, including concurrency, memory management, and performance engineering.
Proven experience designing and shipping production-quality system software or drivers under strict reliability, observability, and performance constraints.
Demonstrated technical leadership: defining architecture and success metrics, and translating abstract product visions into actionable technical roadmaps for fast-paced, multidisciplinary teams.
Strong interpersonal, verbal, and written communication skills; able to influence across organizations and build trust with external collaborators.
Ways to stand out from the crowd:
Extensive experience with profiling/tracing stacks for CPU/GPU (e.g., CUPTI, Nsight, performance counters, event correlation) and debugging highly concurrent systems.
Deep hands-on knowledge of CUDA and GPU architecture, including runtime/driver APIs, CUDA streams/graphs, and kernel behavior.
Track record building continuous, always-on, or multi-client profiling systems designed for predictable overhead at scale.
Hands-on experience tuning ML training/inference loops through deep profiling analysis, with familiarity in ML ecosystems (e.g., PyTorch, JAX) and correlating application events with GPU metrics to produce actionable performance insights (e.g., bottleneck triage, determining whether a workload is compute- or memory-bound).
Experience with user-mode driver development and integration within platform security and permissions models.
You will also be eligible for equity and benefits.
This posting is for an existing vacancy.
NVIDIA uses AI tools in its recruiting processes.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
What We Do
NVIDIA’s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing — with the GPU acting as the brain of computers, robots, and self-driving cars that can perceive and understand the world. Today, NVIDIA is increasingly known as “the AI computing company.”