DL Communications Collectives SW Engineer

Posted 5 Days Ago
32 Locations
Remote
Hybrid
3-5 Years Experience
Software
The Role
Seeking a Software Engineer to optimize communication collectives libraries for distributed systems in the AI ecosystem. Responsibilities include designing and implementing communication components, profiling and tuning communications in AI applications, and collaborating with hardware and software teams. Requires a strong understanding of GPU architecture, experience with parallel and distributed algorithms, and proficiency with communication collectives libraries such as UCC and NCCL.

We are working on software to improve the Deep Learning ecosystem and help hardware engineers build great Deep Learning parallel systems.

We are looking for a strong candidate with a background in writing systems software for networking devices (and optionally the Linux kernel networking stack or network drivers), ideally someone who has implemented network protocols or worked on Open MPI. This role involves designing and implementing highly optimized communication collectives libraries similar to UCC (Unified Collective Communication) and NCCL (NVIDIA Collective Communications Library). The ideal candidate will work closely with hardware and software teams to ensure efficient data communication and synchronization across multiple AI accelerators in a distributed system, enabling scalable deep learning and high-performance computing applications.

You will be learning technical and organizational skills from industry veterans: how to write performant and readable code; how to structure and communicate projects, ideas, and progress; how to work effectively with the Open Source community.

We are big proponents of Open Source and Free software and contribute back our improvements to all the great projects we use.


We prefer candidates who work out of one of our offices, but will consider remote candidates as well.

Responsibilities

  • Build up the communication components of an AI software stack
  • Port AI software to run on a new hardware platform
  • Profile and tune communications within AI applications
  • Design, develop, and optimize communication collectives (e.g., AllReduce, AllGather, Broadcast, ReduceScatter) for large-scale distributed computing and machine learning frameworks.
  • Implement and optimize communication algorithms (ring, tree, butterfly, etc.) tailored for our architectures and multi-node clusters.
  • Ensure low-latency, high-bandwidth communication across multi-GPU setups, supporting interconnects such as PCIe and InfiniBand.
  • Collaborate with hardware engineers and other software teams to optimize performance.
  • Implement fault tolerance and scalability mechanisms in distributed systems to handle large-scale workloads.
  • Write unit tests and benchmark tools to validate the performance and correctness of collective operations.
  • Stay current with advancements in hardware and networking technologies to continuously improve the library's performance.
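
To make the collective operations and algorithms named above concrete, here is a minimal single-process sketch of ring AllReduce (a reduce-scatter phase followed by an all-gather phase) in plain Python; the function name and structure are illustrative assumptions, not code from UCC or NCCL, and "send/receive" is simulated by indexing a shared list of per-rank buffers:

```python
def ring_allreduce(buffers):
    """In-place ring AllReduce over a list of equal-length lists.

    buffers[r] is rank r's local vector; on return, every buffers[r]
    holds the elementwise sum across all ranks. Single-process sketch:
    each "step" models all ranks exchanging one chunk with a neighbor.
    """
    n = len(buffers)                       # number of ranks on the ring
    chunk = len(buffers[0]) // n           # assumes length divisible by n

    def span(c):
        return range(c * chunk, (c + 1) * chunk)

    # Phase 1: reduce-scatter. At step s, rank r accumulates chunk
    # (r - s - 1) mod n from its left neighbor; after n-1 steps rank r
    # holds the fully reduced chunk (r + 1) mod n.
    for step in range(n - 1):
        for r in range(n):
            src = (r - 1) % n              # left neighbor on the ring
            c = (r - step - 1) % n         # chunk arriving at rank r
            for i in span(c):
                buffers[r][i] += buffers[src][i]

    # Phase 2: all-gather. Each step forwards the newest complete chunk
    # one hop around the ring; after n-1 steps every rank has all chunks.
    for step in range(n - 1):
        for r in range(n):
            src = (r - 1) % n
            c = (r - step) % n             # chunk arriving at rank r
            for i in span(c):
                buffers[r][i] = buffers[src][i]
```

Each rank sends and receives only 2 * (n-1) / n of the data volume, which is why the ring variant is bandwidth-optimal for large messages; tree and butterfly variants trade bandwidth for lower latency at small message sizes.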

Requirements

  • Strong understanding of GPU architectures (CUDA, AMD ROCm) and experience in GPU programming (CUDA, HIP, or similar).
  • Proficiency in designing and implementing parallel and distributed algorithms, particularly communication collectives.
  • Experience with network interconnects (NVLink, PCIe, InfiniBand, RDMA) and understanding of their performance implications.
  • Hands-on experience with communication collectives libraries like UCC, NCCL, or MPI.
  • Strong knowledge of concurrency, synchronization, and memory consistency models in multi-threaded and distributed environments.
  • Experience with profiling and optimizing low-level performance (memory bandwidth, latency, throughput) on GPU architectures.
  • Familiarity with deep learning frameworks (TensorFlow, PyTorch, etc.) and their use of communication collectives.
  • Strong problem-solving skills and excellent written and verbal communication.
  • Strong organizational skills; highly self-motivated.
  • Ability to work well in a team and remain productive in a fast-paced, collaborative environment with aggressive schedules.
  • Network driver experience is a plus.

Optional Requirements

  • Experience with NumPy, PyTorch, TensorFlow or JAX
  • Experience with Rust
  • Experience with CUDA, OpenCL, OpenGL, or SYCL
  • Coursework or experience with Machine Learning algorithms

Education and Experience

  • Bachelor’s, Master’s, or PhD in Computer Engineering, Software Engineering, or Computer Science

Top Skills

AMD ROCm
CUDA
HIP
The Company
HQ: Mountain View, CA
287 Employees
On-site Workplace
Year Founded: 2021

What We Do

Rivos is a high-performance RISC-V systems startup targeting integrated system solutions for the enterprise.
