Become a Senior System Software Engineer on NVIDIA's AI Inference Operations Team, focusing on DevOps and infrastructure automation. Join a company revolutionizing computer graphics, PC gaming, and accelerated computing. You will work alongside a team of passionate, skilled engineers who are continuously building better tools to deploy and manage our AI inference infrastructure. With your help, we will forge the next generation of compute infrastructure. If you thrive at the intersection of systems programming, cloud-native infrastructure, and developer productivity, this is your opportunity to make a lasting impact at a leading technology company.
What you'll be doing:
Design, build, and operate the infrastructure backbone powering AI inference products — reliable, performant, and scalable at every layer!
Own Kubernetes deployments end-to-end across cloud and on-prem: runbooks, canary checks, post-deploy validation, and rollbacks when needed.
Architect CI/CD pipelines for automated build, test, packaging, and release of inference libraries and their container-based software stacks.
Build observability that actually tells the truth about platform health — dashboards, logs, metrics, automated checks — and lead first-level incident triage with clean, actionable handoffs to engineering.
Manage cloud and on-prem environments with infrastructure-as-code (Terraform, Ansible, Helm, Crossplane), and chip away at toil using GitHub Actions, GitLab CI, and custom tooling.
Own the security posture for infrastructure components: vulnerability scans, CVE remediation, and compliance with internal policies.
Collaborate closely with deep learning framework engineers, compiler teams, and platform architects to streamline end-to-end deployment!
What we need to see:
BS/MS in CS/CE or equivalent experience, plus 7+ years operating production distributed systems (SRE / DevOps / Platform Ops).
Deep Kubernetes expertise — core components and subsystems, on-prem cluster setup, and hands-on debugging of telemetry-heavy microservices across AWS, Azure, GCP, and on-prem environments.
Strong CI/CD chops (GitLab CI, GitHub Actions), Git-based workflows, Linux systems programming, and scripting in Python and Bash.
IaC fluency (Terraform, Ansible, Helm, Crossplane) and containerization depth (Docker, containerd, OCI).
Proven reliability ownership — SLOs/SLIs, on-call, incident response, and post-incident reviews that drive measurable improvements — backed by hands-on experience with observability stacks like Prometheus, Grafana, and Loki.
A clear communicator who writes runbooks people actually use!
Ways to stand out from the crowd:
MLOps experience — crafting, deploying, and operating machine learning pipelines end to end.
Experience in open-source development workflows and community engagement on projects like Triton Inference Server or ONNX Runtime.
Familiarity with GPU software stacks — CUDA, cuDNN, TensorRT, and inference serving frameworks.
Experience building custom test automation frameworks and using data-driven metrics to improve platform health and developer efficiency.
Demonstrated ability to debug complex issues spanning kernel modules, container runtimes, and distributed networking.
You will also be eligible for equity and benefits.
This posting is for an existing vacancy.
NVIDIA uses AI tools in its recruiting processes.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
NVIDIA Insights
What We Do
NVIDIA’s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing — with the GPU acting as the brain of computers, robots, and self-driving cars that can perceive and understand the world. Today, NVIDIA is increasingly known as “the AI computing company.”