We are now looking for a Senior System Software Engineer to work on Dynamo. NVIDIA is hiring software engineers for its GPU-accelerated deep learning software team. Academic and commercial groups around the world are using GPUs to power a revolution in AI, enabling breakthroughs in problems from image classification to speech recognition to natural language processing. We are a fast-paced team building a generative AI inference platform that makes designing and deploying new AI models easier and more accessible to all users.
What you'll be doing:
In this role, you will develop open-source software for serving inference of trained AI models on GPUs. You will:
Contribute to the development of disaggregated serving for Dynamo-supported inference engines (vLLM, SGLang, TensorRT-LLM), and extend it to support multi-modal models with embedding disaggregation.
Innovate in the management and transfer of large KV caches across heterogeneous memory and storage hierarchies, using the NVIDIA Inference Xfer Library (NIXL) for low-latency, cost-effective data movement.
Add new features to the Dynamo Rust Runtime Core Library, and design, implement, and optimize distributed inference components in Rust and Python.
Balance a variety of objectives: build robust, scalable, high performance software components to support our distributed inference workloads; work with team leads to prioritize features and capabilities; load-balance asynchronous requests across available resources; optimize prediction throughput under latency constraints; and integrate the latest open source technology.
What we need to see:
Master's or PhD in Computer Science, Computer Engineering, or a related field (or equivalent experience)
3+ years of relevant software engineering experience
Ability to work in a fast-paced, agile team environment
Excellent Rust/Python/C++ programming and software design skills, including debugging, performance analysis, and test design.
Experience with large-scale distributed systems and ML systems
Ways to stand out from the crowd:
Prior contributions to open-source AI inference frameworks (e.g., vLLM, TensorRT-LLM, SGLang).
Experience with GPU memory management, cache management, or high-performance networking.
Understanding of LLM-specific inference challenges, such as context window scaling and multi-model agentic and reasoning workflows.
Prior experience with disaggregated serving and multi-modal models (Vision-Language Models, Audio-Language Models, Video-Language Models).
NVIDIA is widely considered to be one of the technology world's most desirable employers. We have some of the most expert and passionate people in the world working for us. Are you creative and autonomous? Do you love a challenge? If so, we want to hear from you. Come help us build the real-time, efficient computing platform driving our success in the multifaceted and quickly growing field of Deep Learning and Artificial Intelligence.
Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 152,000 USD - 241,500 USD for Level 3, and 184,000 USD - 287,500 USD for Level 4. You will also be eligible for equity and benefits.
This posting is for an existing vacancy.
NVIDIA uses AI tools in its recruiting processes.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status, or any other characteristic protected by law.
What We Do
NVIDIA’s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing — with the GPU acting as the brain of computers, robots, and self-driving cars that can perceive and understand the world. Today, NVIDIA is increasingly known as “the AI computing company.”






