System Software Engineer, LLM Inference and Performance Optimization

Santa Clara, CA
7+ Years Experience
The Role

As a System Software Engineer (LLM Inference & Performance Optimization), you will be at the heart of our AI advancements. Our team is dedicated to pushing the boundaries of machine learning and optimizing large language models (LLMs) for flawless, real-time performance across diverse hardware platforms. This is your chance to contribute to world-class solutions that shape the future of technology.

What you'll be doing:

  • Design, implement, and optimize inference logic for fine-tuned LLMs, working closely with Machine Learning Engineers.

  • Develop efficient, low-latency glue logic and inference pipelines that scale across various hardware platforms, ensuring outstanding performance and minimal resource usage.

  • Leverage hardware accelerators such as GPUs and other specialized hardware to improve inference speed and enable deployment in real-world applications.

  • Collaborate with cross-functional teams to integrate models seamlessly into diverse environments, meeting strict functional and performance requirements.

  • Conduct detailed performance analysis and optimization for specific hardware platforms, focusing on efficiency, latency, and power consumption.

What we need to see:

  • 8+ years of expert-level C++ experience, with a deep understanding of memory management, concurrency, and low-level optimization.

  • M.S. or higher degree (or equivalent experience) in Computer Science, Computer Engineering, or a related field.

  • Strong experience in system-level software engineering, including multi-threading, data parallelism, and performance tuning.

  • Proven expertise in LLM inference, with experience in model-serving frameworks such as ONNX Runtime and TensorRT.

  • Familiarity with real-time systems and performance-tuning techniques, especially for machine learning inference pipelines.

  • Ability to work collaboratively with Machine Learning Engineers and cross-functional teams to align system-level optimizations with model goals.

  • Deep understanding of hardware architectures and the ability to leverage specialized hardware for optimized ML model inference.

Ways to stand out from the crowd:

  • Experience with deep learning hardware accelerators, such as NVIDIA GPUs.

  • Familiarity with ONNX, TensorRT, or cuDNN for LLM inference on GPUs.

  • Experience with low-latency optimizations and real-time system constraints for ML inference.

The base salary range is 180,000 USD - 339,250 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.

You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.

The Company
HQ: Santa Clara, CA
21,960 Employees
On-site Workplace
Year Founded: 1993

What We Do

NVIDIA’s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing — with the GPU acting as the brain of computers, robots, and self-driving cars that can perceive and understand the world. Today, NVIDIA is increasingly known as “the AI computing company.”

