NVIDIA's invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing — with the GPU acting as the brain of computers, robots, and self-driving cars that can perceive and understand the world. Today, we are increasingly known as “the AI computing company”.
In this role you will work closely with deep learning compiler engineers to build the infrastructure and automation that power day-to-day development and releases. You will design and maintain sophisticated CI/CD systems that run ML workloads at scale across diverse GPU environments and produce actionable signals for compiler developers, testers, and release engineers, while continuously improving their stability and turnaround time. This includes building performance-aware pipelines and workload harnesses that support release confidence and the long-term quality of deep learning compiler stacks.
What you’ll be doing:
Drive CI and infrastructure capabilities that make deep learning compiler development fast, reliable, and scalable. This includes improving signal-to-noise (flake reduction, reproducibility, and richer diagnostics), accelerating iteration cycles, scaling capacity and coverage across models/hardware/software configurations, and building strong observability (metrics, logging, tracing, dashboards) so failures are easy to understand and fix.
Explore practical uses of AI to enhance CI workflows—such as smarter test selection, automated triage/summarization, and faster issue isolation—ultimately increasing the quality and speed of deep learning compiler development, testing, and release.
What we need to see:
BS, MS, or PhD (or equivalent experience) in Computer Science, Computer/Electrical Engineering, Mathematics, or related field
3+ years of professional experience designing and scaling CI/CD, build/release, or developer productivity infrastructure for DL/GPU software environments
Strong software engineering skills (Python required) with ability to architect, implement, and debug complex systems end-to-end
Hands-on experience building CI/MLOps platform capabilities—pipeline orchestration, artifact/package management, and production-grade observability (logs/metrics/dashboards)—with strong reliability and maintainability
Experience with deep learning frameworks/runtime stacks (e.g., PyTorch, JAX, vLLM, SGLang, TensorRT, NeMo) and running real workloads in production-like environments
Working knowledge of Linux-based development and debugging across complex software/hardware stacks (drivers, CUDA libraries, containers, cluster schedulers, etc.)
Ways to stand out from the crowd:
Experience applying AI/LLMs and agent-based workflows to improve CI and infrastructure (e.g., smarter triage/routing, automated failure summarization, intelligent test selection, regression isolation, or developer-assist tooling)
Experience with compiler-focused verification techniques (e.g., differential testing across backends/versions, IR-level checks, automated reduction/minimization, fuzzing/property-based testing, or translation-validation style approaches)
Compiler-adjacent knowledge, including familiarity with LLVM/MLIR-based toolchains and the ability to debug issues that span compilation/codegen, runtime execution, and hardware/software boundaries
With competitive salaries and a generous benefits package, we are widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-thinking and hardworking people in the world working for us and, due to unprecedented growth, our exclusive engineering teams are rapidly growing. If you're a creative and autonomous engineer with a real passion for technology, we want to hear from you.
Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 140,000 USD - 224,250 USD. You will also be eligible for equity and benefits.
This posting is for an existing vacancy.
NVIDIA uses AI tools in its recruiting processes.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.