Machine Learning Applications and Compiler Engineer, LPX - New College Grad 2026

Posted 2 Days Ago
Santa Clara, CA, USA
In-Office
124K-242K Annually
Entry level
Artificial Intelligence • Computer Vision • Hardware • Robotics • Metaverse
The Role
Develop algorithms and optimizations for NVIDIA's LPX inference and compiler stack focusing on performance, efficiency, and integration with hardware systems.

Our work at NVIDIA is dedicated to a computing model built around visual and AI computing. For two decades, NVIDIA has pioneered visual computing, the art and science of computer graphics, with our invention of the GPU. The GPU has also proven spectacularly effective at solving some of the most complex problems in computer science. Today, NVIDIA’s GPU simulates human intelligence, running deep learning algorithms and acting as the brain of computers, robots, and self-driving cars that can perceive and understand the world. We are looking to grow our company and teams with the smartest people in the world, and there has never been a more exciting time to join our team!

 

NVIDIA is seeking engineers to develop algorithms and optimizations for our LPX inference and compiler stack. You will work at the intersection of large-scale systems, compilers, and deep learning, shaping how neural network workloads map onto future NVIDIA platforms. This is your chance to be part of something genuinely innovative!

 

What you’ll be doing:

  • Design, develop, and maintain high-performance runtime and compiler components, focusing on end-to-end inference optimization.

  • Define and implement mappings of large-scale inference workloads onto NVIDIA’s systems.

  • Extend and integrate with NVIDIA’s SW ecosystem, contributing to libraries, tooling, and interfaces that enable seamless deployment of models across platforms.

  • Benchmark, profile, and monitor key performance and efficiency metrics to ensure the compiler generates efficient mappings of neural network graphs to our inference hardware.

  • Collaborate closely with hardware architects and design teams to feed software observations back into the design process, influence future architectures, and co-design features that unlock new performance and efficiency points.

  • Prototype and evaluate new compilation and runtime techniques, including graph transformations, scheduling strategies, and memory/layout optimizations tailored to spatial processors.

  • Publish and present technical work on novel compilation approaches for inference and related spatial accelerators at top-tier ML, compiler, and computer architecture venues.

 

What we need to see:

  • Pursuing or recently completed an MS or PhD in Computer Science, Electrical/Computer Engineering, or a related field, or equivalent experience.

  • A software engineering background with familiarity in systems-level programming (e.g., C/C++ and/or Rust) and solid CS fundamentals in data structures, algorithms, and concurrency.

  • Hands-on experience with compiler or runtime development, including IR design, optimization passes, or code generation.

  • Experience with LLVM and/or MLIR, including building custom passes, dialects, or integrations.

  • Familiarity with deep learning frameworks such as TensorFlow and PyTorch, and experience working with portable graph formats such as ONNX.

  • Understanding of parallel and heterogeneous compute architectures, such as GPUs, spatial accelerators, or other domain-specific processors.

  • Strong analytical and debugging skills, with experience using profiling, tracing, and benchmarking tools to drive performance improvements.

  • Excellent communication and collaboration skills, with the ability to work across hardware, systems, and software teams.

  • Ideal candidates will have direct experience with MLIR-based compilers or other multi-level IR stacks, especially in the context of graph-based deep learning workloads.
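The "optimization passes" called for above can be sketched in miniature. The following is a hypothetical dead-code-elimination pass over a toy linear, SSA-style instruction list; the instruction format `(dest, op, operands)` is invented for illustration.

```python
# Toy dead-code elimination over a linear SSA-style instruction list.
# Instruction format (dest, op, operands) is hypothetical, for illustration only.
def eliminate_dead_code(instructions, live_outputs):
    """Keep only instructions whose results are (transitively) used."""
    live = set(live_outputs)
    kept = []
    # Walk backwards: an instruction is live iff its dest is demanded,
    # and a live instruction makes its operands live in turn.
    for dest, op, operands in reversed(instructions):
        if dest in live:
            kept.append((dest, op, operands))
            live.update(operands)
    return list(reversed(kept))

prog = [
    ("t0", "load", ["x"]),
    ("t1", "mul",  ["t0", "t0"]),
    ("t2", "add",  ["t0", "t0"]),   # dead: t2 is never used
    ("t3", "relu", ["t1"]),
]
optimized = eliminate_dead_code(prog, live_outputs=["t3"])
assert [d for d, _, _ in optimized] == ["t0", "t1", "t3"]
```

Real compiler stacks generalize this backward-liveness walk to control flow and side effects, but the single-block version captures the core idea.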

 

Ways to stand out from the crowd:

  • Prior work on spatial or dataflow architectures, including static scheduling, pipeline parallelism, or tensor parallelism at scale.

  • Contributions to open-source ML frameworks, compilers, or runtime systems, particularly in areas related to performance or scalability.

  • Demonstrated research impact, such as publications or presentations at conferences like PLDI, CGO, ASPLOS, ISCA, MICRO, MLSys, NeurIPS, or similar.

  • Experience with large-scale AI distributed inference or training systems, including performance modeling and capacity planning for multi-rack deployments.
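To make the pipeline-parallelism and performance-modeling items above concrete, here is a back-of-the-envelope model of a synchronous pipeline's makespan. The formula `(S + B - 1) * t_slowest` for `S` stages and `B` microbatches ignores communication and bubble-filling schedules, and the stage latencies are made-up numbers for illustration.

```python
# Back-of-the-envelope model of pipeline parallelism: with S stages and B
# microbatches, a synchronous pipeline clocked by its slowest stage finishes
# in (S + B - 1) * t_slowest. Communication overhead is ignored.
def pipeline_makespan(stage_latencies, num_microbatches):
    """Makespan of a synchronous pipeline, clocked by its slowest stage."""
    slowest = max(stage_latencies)
    num_stages = len(stage_latencies)
    return (num_stages + num_microbatches - 1) * slowest

# 4 stages, 8 microbatches, slowest stage 2.0 ms -> (4 + 8 - 1) * 2.0 = 22.0 ms
assert pipeline_makespan([1.5, 2.0, 1.0, 1.8], num_microbatches=8) == 22.0
```

Such simple analytical models are often the starting point for the capacity planning mentioned above, before moving to trace-driven simulation.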

 

NVIDIA is widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-thinking and hardworking people in the world working for us. If you're creative and autonomous, we want to hear from you! 

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 124,000 USD - 195,500 USD for Level 2, and 152,000 USD - 241,500 USD for Level 3.

You will also be eligible for equity and benefits.

Applications for this job will be accepted at least until May 9, 2026.

This posting is for an existing vacancy. 

NVIDIA uses AI tools in its recruiting processes.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.

The Company
HQ: Santa Clara, CA
21,960 Employees
Year Founded: 1993

What We Do

NVIDIA’s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing — with the GPU acting as the brain of computers, robots, and self-driving cars that can perceive and understand the world. Today, NVIDIA is increasingly known as “the AI computing company.”
