Principal Software Engineer - Rack Scale Systems Infrastructure

Posted 12 Hours Ago
4 Locations
In-Office or Remote
272K-431K Annually
Expert/Leader
Artificial Intelligence • Computer Vision • Hardware • Robotics • Metaverse
The Role
As a Principal Software Engineer, you'll develop software systems for NVIDIA's rack-scale infrastructure, defining software architecture, collaborating across teams, mentoring engineers, and ensuring high-quality technical decisions in complex environments.
Summary Generated by Built In

NVIDIA has been transforming computer graphics, PC gaming, and accelerated computing for more than 25 years. It’s a unique legacy of innovation that’s fueled by great technology—and amazing people. Today, we’re tapping into the unlimited potential of AI to define the next era of computing. An era in which our GPU acts as the brains of computers, robots, and self-driving cars that can understand the world. Doing what’s never been done before takes vision, innovation, and the world’s best talent. As an NVIDIAN, you’ll be immersed in a diverse, supportive environment where everyone is inspired to do their best work. Come join the team and see how you can make a lasting impact on the world.

At NVIDIA, as a Principal Rack Scale Systems Infrastructure Engineer, you will build and guide the development of the software systems behind our upcoming rack-scale infrastructure products and services. This exceptional role sits where software meets hardware: you will work on control planes, state machines, orchestration systems, firmware, OS lifecycle, and networking fabrics, building infrastructure-as-a-service control plane software that turns complex rack-scale hardware into dependable, manageable, and programmable infrastructure for NVIDIA, partners, and leading cloud and enterprise clients worldwide.

What You Will Be Doing:

  • Define the complete software architecture for rack-scale infrastructure products and services, covering control plane services, infrastructure management, firmware, operating systems, kernel drivers, networking fabrics, accelerator software, and user-mode manageability software.

  • Use Kubernetes and cloud-native primitives as an infrastructure fabric where appropriate, including controllers, operators, reconciliation loops, and open source components that can operate safely at rack and fleet scale. Build open source infrastructure software that can be adopted in different forms: libraries, services, controllers, operators, and integration APIs for internal deployments and CSP environments.

  • Bridge hardware and software teams across firmware, BMC, BIOS, boot flows, OS images, drivers, networking, NVLink domains, InfiniBand, GPUs, DPUs, CPUs, and system management interfaces. Translate forward-looking infrastructure roadmaps into formal software requirements, architecture specifications, and execution plans that align teams across the organization.

  • Partner directly with hyperscalers, CSPs, enterprise customers, internal component leads, vendors, and business partners to align infrastructure capabilities with real-world deployment and integration needs. Establish reliability, security, validation, and left-shift strategies that reduce risk before hardware reaches production environments.

  • Mentor senior engineers and technical leads, raising the engineering bar for large-scale networked systems, foundational software, and rack-scale control plane development.

  • Make high-quality technical decisions in ambiguous environments, balancing customer needs, schedule, hardware realities, software maintainability, open source adoption, and long-term infrastructure evolution.
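
The controller pattern named above (reconciliation loops operating on desired vs. observed state) can be sketched in Go, one of the role's stated languages. This is a minimal, hypothetical illustration, not NVIDIA's implementation and not the Kubernetes controller-runtime API; the `RackState` and `Reconcile` names are invented for the example.

```go
package main

import "fmt"

// RackState maps node names to the firmware version they run.
// A reconciler compares desired state against observed state and
// emits the actions needed to converge them.
type RackState map[string]string

// Reconcile returns one corrective action per node whose observed
// firmware differs from (or is missing from) the desired state. Real
// controllers run this in a loop, re-reading observed state each pass.
func Reconcile(desired, observed RackState) []string {
	var actions []string
	for node, want := range desired {
		if got, ok := observed[node]; !ok {
			actions = append(actions, fmt.Sprintf("provision %s at %s", node, want))
		} else if got != want {
			actions = append(actions, fmt.Sprintf("update %s: %s -> %s", node, got, want))
		}
	}
	return actions
}

func main() {
	desired := RackState{"node-0": "2.1", "node-1": "2.1"}
	observed := RackState{"node-0": "2.0"}
	for _, a := range Reconcile(desired, observed) {
		fmt.Println(a)
	}
}
```

The point of the pattern is that the controller is level-triggered: it acts on the current difference between states rather than on a stream of events, so a missed event is repaired on the next pass.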

What We Need To See:

  • BS or MS in Computer Engineering, Computer Science, Electrical Engineering, or a related field, or equivalent experience. Proven experience (15+ years) in systems architecture, system software, distributed systems, infrastructure control planes, or infrastructure engineering.

  • Solid architectural knowledge of coordination frameworks, state machines, declarative APIs, reconciliation loops, lifecycle orchestration, failure handling, upgrade and rollback workflows, and distributed systems tradeoffs.

  • Practical coding skills in Go, C++, or Rust, including the ability to write, review, and guide production-quality infrastructure software. Experience with Rust is highly valued.

  • Experience with Kubernetes or similar orchestration systems, especially as a fabric for managing infrastructure, hardware resources, or large-scale infrastructure services. Experience with Linux-based infrastructure software, OS rollout and image management, kernel or driver interactions, firmware lifecycle, and hardware bring-up workflows.

  • Strong understanding of data center networking technologies and protocols, such as Ethernet, InfiniBand, RDMA, and fabric-level manageability. Experience with complex accelerator-based systems, including GPUs, DPUs, FPGAs, custom silicon, or other high-performance computing systems.

  • Expertise in in-band and out-of-band management architectures, including BMCs, Redfish, IPMI, and related system management protocols. Ability to work with security experts to define practical tradeoffs across secure boot, attestation, access control, update safety, serviceability, and ease of operation.

  • Experience crafting software intended for open source release, including API stability, modularity, documentation, community usability, and clean separation between shared software and deployment-specific integrations.

  • Experience using AI-assisted development tools responsibly as an engineering multiplier for coding, test generation, debugging, build iteration, and documentation.

  • Established skill in specifying requirements, guiding architecture, and managing delivery across various engineering teams and organizations. Strong written and verbal communication skills, enabling clear explanation of complex hardware/software tradeoffs to engineering leaders, customers, partners, and executives.
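
The state-machine and upgrade/rollback requirements above can be made concrete with a small Go sketch. This is a hypothetical lifecycle machine with invented states (`Idle`, `Staging`, and so on), shown only to illustrate the explicit-transition style; it is not a real NVIDIA workflow.

```go
package main

import "fmt"

// State models a node's firmware-update lifecycle.
type State string

const (
	Idle       State = "idle"
	Staging    State = "staging"
	Applying   State = "applying"
	Verifying  State = "verifying"
	RolledBack State = "rolled-back"
)

// transitions encodes which moves are legal; anything else is rejected.
// Rollback is only reachable from states where an update is in flight.
var transitions = map[State][]State{
	Idle:      {Staging},
	Staging:   {Applying, Idle},
	Applying:  {Verifying, RolledBack},
	Verifying: {Idle, RolledBack},
}

// Step advances the machine, returning an error on an illegal transition
// so callers surface the fault instead of corrupting lifecycle state.
func Step(from, to State) (State, error) {
	for _, next := range transitions[from] {
		if next == to {
			return to, nil
		}
	}
	return from, fmt.Errorf("illegal transition %s -> %s", from, to)
}

func main() {
	s := Idle
	for _, to := range []State{Staging, Applying, RolledBack} {
		var err error
		if s, err = Step(s, to); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("now:", s)
	}
}
```

Making the transition table explicit is what enables safe upgrade and rollback workflows: every reachable path, including failure paths, is enumerable and reviewable.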
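
The out-of-band management requirement references Redfish, the DMTF's HTTP/JSON standard for BMC manageability. The sketch below decodes a trimmed ComputerSystem payload of the kind a BMC returns from GET /redfish/v1/Systems/<id>; the `ParseSystem` helper is invented for the example, and the real schema carries far more fields than the two shown.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ComputerSystem holds the two Redfish ComputerSystem fields this
// sketch cares about; the full DMTF schema is much larger.
type ComputerSystem struct {
	PowerState string `json:"PowerState"`
	Status     struct {
		Health string `json:"Health"`
	} `json:"Status"`
}

// ParseSystem decodes a Redfish ComputerSystem JSON payload such as a
// BMC returns for a single system resource.
func ParseSystem(body []byte) (ComputerSystem, error) {
	var sys ComputerSystem
	err := json.Unmarshal(body, &sys)
	return sys, err
}

func main() {
	// A trimmed example payload; a live BMC response has many more fields.
	payload := []byte(`{"PowerState": "On", "Status": {"Health": "OK"}}`)
	sys, err := ParseSystem(payload)
	if err != nil {
		panic(err)
	}
	fmt.Printf("power=%s health=%s\n", sys.PowerState, sys.Status.Health)
}
```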

Ways To Stand Out From the Crowd:

  • Built software supporting multiple adoption models — internal services, CSP-integrated offerings, reusable libraries, and customer-extensible APIs. Strong Rust skills in systems, infrastructure, or hardware-adjacent software.

  • Multiplied team impact through reference implementations, design reviews, shared libraries, architecture docs, dev workflows, and AI-assisted engineering. Hands-on with fleet-scale provisioning, updates, rollback, observability, health, and remediation.

  • Led across the full data center product lifecycle: inception, pre- and post-silicon, manufacturing, deployment, and operations. Familiar with open source ecosystems, contribution models, and balancing community collaboration with product needs.

  • Deep experience with rack- or cluster-scale systems spanning compute, networking, storage, accelerators, firmware, and infra management as one operational domain. Skilled at finding simple, durable abstractions in complex systems to align teams, customers, and long-term direction.

NVIDIA is widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-thinking and hardworking people in the world working for us. If you're creative and autonomous, we want to hear from you!

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 272,000 USD - 431,250 USD.

You will also be eligible for equity and benefits.

Applications for this job will be accepted at least until May 19, 2026.

This posting is for an existing vacancy. 

NVIDIA uses AI tools in its recruiting processes.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.



The Company
HQ: Santa Clara, CA
21,960 Employees
Year Founded: 1993

What We Do

NVIDIA’s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing — with the GPU acting as the brain of computers, robots, and self-driving cars that can perceive and understand the world. Today, NVIDIA is increasingly known as “the AI computing company.”
