Pod Networking Software Engineer

Reposted 4 Days Ago
San Jose, CA
In-Office
150K-275K Annually
Mid level
Artificial Intelligence • Hardware • Software
The Role
The Pod Software Engineer develops and optimizes networking solutions for large-scale inference workloads, focusing on RDMA-based communication and performance telemetry within multi-rack clusters.

About Etched

Etched is building the world’s first AI inference system purpose-built for transformers, delivering over 10x higher performance and dramatically lower cost and latency than a B200. With Etched ASICs, you can build products that would be impossible with GPUs, like real-time video generation models and extremely deep and parallel chain-of-thought reasoning agents. Backed by hundreds of millions of dollars from top-tier investors and staffed by leading engineers, Etched is redefining the infrastructure layer for the fastest-growing industry in history.

Job Summary

We are seeking highly motivated and skilled Pod Networking Software Engineers to join our System Software team. This team plays a critical role in developing, qualifying, and optimizing high-performance networking solutions for large-scale inference workloads. As a Pod Software Engineer, you will focus on developing and qualifying software that drives communication amongst Sohu inference nodes in multi-rack inference clusters. You will collaborate closely with kernel, platform, and telemetry teams to push the boundaries of peer-to-peer RDMA efficiency.

Key Responsibilities

  • High-Performance Peer-to-Peer Networking: Design, develop, and implement RDMA-based network peering, supporting high-bandwidth, low-latency communication across PCIe nodes within and across racks. This includes work across the operating system, kernel drivers, embedded software, and system software.

  • Test Development: Develop tests that qualify host processors (x86), NICs, ToR switches, and device network interfaces for high performance.

  • Burn-in Integration: Furnish burn-in teams with tests that represent real-world device-to-device networking use cases and workloads, as well as extreme-load stress testing.

  • Performance/Health Telemetry Design: Define the key metrics that system software must collect to maintain high availability and performance under extreme communications workloads.
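To make the telemetry responsibility concrete, here is a minimal, illustrative sketch of the kind of per-link metric aggregation such a system might collect. All names (`LinkTelemetry`, the counter fields) are hypothetical, not Etched's actual stack; the point is that tail latency percentiles and congestion/retransmit counters, not averages, are the metrics that matter for availability under extreme communication load.

```python
from dataclasses import dataclass, field

@dataclass
class LinkTelemetry:
    """Illustrative per-link health/performance counters (hypothetical names)."""
    samples_us: list = field(default_factory=list)  # RDMA completion latencies (microseconds)
    retransmits: int = 0                            # transport-level retries observed
    cnp_count: int = 0                              # RoCE congestion notification packets seen

    def record(self, latency_us: float) -> None:
        self.samples_us.append(latency_us)

    def percentile(self, p: float) -> float:
        # p in [0, 100]; tail latency (p99/p999) matters more than the mean
        # for inference workloads with tight latency SLOs
        data = sorted(self.samples_us)
        idx = min(len(data) - 1, int(round(p / 100 * (len(data) - 1))))
        return data[idx]

t = LinkTelemetry()
for v in [100, 110, 105, 500, 102]:
    t.record(v)
print(t.percentile(50), t.percentile(99))  # → 105 500
```

A single 500 µs outlier leaves the median untouched but dominates the p99, which is why tail percentiles are the usual trigger for route-around and resiliency logic.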

Representative Projects

  • Analyze performance deviations, optimize network stack configurations, and propose kernel tuning parameters for low-latency, high-bandwidth inference workloads.

  • Design and execute automated qualification tests for RDMA NICs and interconnects across various server configurations.

  • Identify and root-cause firmware, driver, and hardware issues that impact RDMA performance and reliability.

  • Collaborate with ODMs and silicon vendors to validate new RDMA features and enhancements.

  • Implement and validate peer RDMA support for GPU-to-GPU and accelerator-to-accelerator communication.

  • Modify kernel drivers and user-space libraries to optimize direct memory access between inference pods.

  • Profile and benchmark inter-node RDMA latency and bandwidth to improve inference job scaling.

  • Optimize NIC and switch configurations to balance throughput, congestion control, and reliability.
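The profiling and benchmarking projects above ultimately reduce to arithmetic like the following. This is a sketch of the calculation only, not Etched's tooling, and the 400G link rate is an assumption chosen for illustration.

```python
def effective_bandwidth_gbps(bytes_moved: int, elapsed_s: float) -> float:
    """Payload bandwidth in Gbit/s, ignoring protocol overhead."""
    return bytes_moved * 8 / elapsed_s / 1e9

def wire_efficiency(payload_gbps: float, link_gbps: float = 400.0) -> float:
    """Fraction of the nominal link rate achieved (400G link assumed)."""
    return payload_gbps / link_gbps

# e.g. 1 GiB moved in 25 ms over an assumed 400G link
bw = effective_bandwidth_gbps(1 << 30, 0.025)
print(round(bw, 1), round(wire_efficiency(bw), 2))  # → 343.6 0.86
```

Benchmarking work of this kind then asks where the remaining ~14% went: RoCE/IP header overhead, congestion control backoff, or suboptimal NIC and switch configuration.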

Must-Have Skills and Experience

  • Proficiency in C/C++

  • Proficiency in at least one scripting language (e.g., Python, Bash, Go).

  • Strong experience with device-to-device networking technologies (RDMA, GPUDirect, etc.), including RoCE.

  • Experience with zero-copy networking, RDMA verbs and memory registration.

  • Familiarity with queue pairs, completion queues, and transport types.

  • Strong understanding of operating systems (Linux preferred) and server hardware architectures.

  • Ability to analyze complex technical problems and provide effective solutions.

  • Excellent communication and collaboration skills.   

  • Ability to work independently and as part of a team.

  • Experience with version control systems (e.g., Git).   

  • Experience with reading and interpreting hardware logs.

Nice-to-Have Skills and Experience

  • Experience with networking technologies such as NVLink, InfiniBand, and ML pod interconnects.

  • Experience with widely deployed top-of-rack switches (Cisco, Juniper, Arista, etc.).

  • Knowledge of server virtualization.

  • Experience with tracing tools like perf, eBPF, ftrace, etc.

  • Experience with performance testing and benchmarking tools (gprof, VTune, Wireshark, etc.).

  • Familiarity with hardware diagnostic tools and techniques.

  • Experience with containerization technologies (e.g., Docker, Kubernetes).

  • Experience with CI/CD pipelines.

  • Experience with Rust.

Ideal Background

  • Candidates who have worked on GPU or TPU pods, specifically in the networking domain.

  • Candidates who understand the uptime challenges of very large ML deployments.

  • Candidates who have actively debugged complex network topologies, specifically dealing with cases of node dropouts/failures, route-arounds, and pod resiliency at large.

  • Candidates who understand the performance implications of pod networking software.

Benefits

  • Full medical, dental, and vision packages, with generous premium coverage

  • Housing subsidy of $2,000/month for those living within walking distance of the office

  • Daily lunch and dinner in our office

  • Relocation support for those moving to West San Jose

Compensation Range

  • $150,000 - $275,000 

How we’re different

Etched believes in the Bitter Lesson. We think most of the progress in the AI field has come from using more FLOPs to train and run models, and the best way to get more FLOPs is to build model-specific hardware. Larger and larger training runs encourage companies to consolidate around fewer model architectures, which creates a market for single-model ASICs.

We are a fully in-person team in West San Jose, and greatly value engineering skills. We do not have boundaries between engineering and research, and we expect all of our technical staff to contribute to both as needed.

Top Skills

Bash
C/C++
Docker
Git
Go
GPUDirect
Kubernetes
Linux
Python
RDMA
RoCE
Rust

The Company
HQ: Cupertino, CA
53 Employees
Year Founded: 2022

What We Do

By burning the transformer architecture into our chips, we’re creating the world’s most powerful servers for transformer inference.
