We are looking for a software engineer for our Sparse Linear Algebra team, which develops key technologies and libraries such as cuDSS, cuSPARSE, and cuSPARSELt for scientific computing and deep learning software stacks targeting a range of processors from edge devices to supercomputers. These high-performance libraries provide accelerated linear algebra functions, such as matrix products and direct and iterative solvers, and are used by industrial and research organizations in applications ranging from gaming and machine learning to autonomous driving and chip modeling. Does the idea of being at the heart of these projects and applying your knowledge to develop and optimize algorithms that make an impact around the world excite you? If yes, then come and join our team!
What you will be doing:
- Developing and optimizing scalable, high-performance numerical sparse linear algebra software such as direct and iterative sparse solvers.
- Providing technical leadership and guidance to library engineers working with you on projects.
- Working closely with product management and other internal and external partners to understand feature and performance requirements and contribute to the technical roadmaps of the libraries.
- Finding opportunities to improve library performance and reduce code maintenance overhead through re-architecting.
These issues are by nature complex and will require you to find and explain solutions, exercise leadership, and coordinate with multiple teams to achieve your objectives.
What we need to see:
- PhD or MSc degree in Computational Science, Computer Science, Applied Math, or a related science or engineering field is preferred (or equivalent experience).
- 5+ years of experience developing, debugging, and optimizing high-performance parallel numerical sparse linear algebra applications on modern computing platforms, preferably with GPU acceleration using CUDA.
- Excellent C++ programming and software design skills, including the design of functional and performance tests.
- Deep understanding of numerical methods, especially sparse linear algebra algorithms (e.g., multi-frontal factorization, algebraic multigrid (AMG)).
- Proven experience in leading and completing software development projects.
- Excellent collaboration, communication, and documentation habits.
Ways to stand out from the crowd:
- Experience developing libraries consumed by many users.
- Experience developing distributed-memory parallel computing software with MPI or a PGAS library (e.g., NVSHMEM).
- Good knowledge of compute and network hardware architecture (e.g., InfiniBand).
- Experience working in an agile software development environment.
- Experience with a scripting language, preferably Python.
NVIDIA’s invention of the GPU in 1999 fueled the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing for science and engineering. More recently, GPU deep learning ignited modern AI — the next era of computing — with the GPU acting as the brain of computers, robots, and self-driving cars that can perceive and understand the world. Today, we are increasingly known as “the AI computing company.” We're looking to grow our company and build our teams with the smartest people in the world.
#LI-Hybrid
The base salary range is 180,000 USD - 339,250 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.
You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.