Inference Software Engineer - Collectives

San Jose, CA
In-Office
$175K-$275K Annually
Senior level
Artificial Intelligence • Hardware • Software
The Role
As an Inference Software Engineer, you'll optimize collectives, enhance Sohu's runtime, and collaborate across systems to achieve optimal inference performance.

About Etched

Etched is building the world’s first AI inference system purpose-built for transformers, delivering over 10x higher performance and dramatically lower cost and latency than a B200. With Etched ASICs, you can build products that would be impossible with GPUs, like real-time video generation models and extremely deep and parallel chain-of-thought reasoning agents. Backed by hundreds of millions from top-tier investors and staffed by leading engineers, Etched is redefining the infrastructure layer for the fastest-growing industry in history.

Job Summary

Etched’s Inference SW team enables optimal mapping of models to Sohu’s dataflow architecture and serving of requests across multiple chips, hosts, and racks. We are seeking a highly skilled and motivated engineer to formalize and optimize our collectives (e.g. Send/Receive, AllReduce, Broadcast, etc.). You’ll build software enabling frontier inference performance to satisfy exponentially growing serving demand.

In this role, your core focus will be working across systems and research to realize Mixture-of-Experts (MoE) architectures on Sohu’s system. You will play a key role in scaling out Sohu’s nascent runtime, with a focus on collectives.
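To make the collectives concrete, here is a minimal single-process sketch of the classic ring AllReduce (reduce-scatter followed by all-gather), the kind of primitive this role formalizes and optimizes. This is an illustrative simulation only, not Sohu’s implementation: ranks are modeled as Python lists in one process, and the even-chunking assumption is ours.

```python
def ring_allreduce(buffers):
    """Elementwise-sum AllReduce over equal-length per-rank buffers,
    simulated in one process with the two-phase ring algorithm."""
    world = len(buffers)
    n = len(buffers[0])
    assert n % world == 0  # sketch: assume buffers chunk evenly across ranks
    chunk = n // world

    def sl(c):  # index range of chunk c (wrapping around the ring)
        c %= world
        return range(c * chunk, (c + 1) * chunk)

    # Phase 1: reduce-scatter. At step s, rank r accumulates chunk (r-1-s)
    # from its left neighbour, so partial sums travel once around the ring;
    # afterwards rank r owns the fully reduced chunk (r+1) mod world.
    for s in range(world - 1):
        for r in range(world):
            src = (r - 1) % world
            for i in sl(r - 1 - s):
                buffers[r][i] += buffers[src][i]

    # Phase 2: all-gather. The completed chunks circulate once more,
    # overwriting, until every rank holds every reduced chunk.
    for s in range(world - 1):
        for r in range(world):
            src = (r - 1) % world
            for i in sl(r - s):
                buffers[r][i] = buffers[src][i]
    return buffers
```

Each rank sends and receives only 2·(world−1) chunks, which is why ring AllReduce is bandwidth-optimal and a standard baseline for the NCCL/MPI-style collectives named above.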

Key responsibilities

  • Formalize and optimize our collectives (e.g. Send/Receive, AllReduce, Broadcast, etc.)

  • Collaborate across systems and research teams to bring MoE architectures to Sohu’s runtime

  • Optimize expert routing and communication layers using Sohu’s collectives

  • Contribute to scaling and enhancing Sohu’s runtime, including multi-node inference, intra-node execution, state management, and robust error handling

  • Develop tools for performance profiling and debugging, identifying bottlenecks and correctness issues
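The expert-routing responsibility above can be sketched in a few lines. The following is an illustrative top-k MoE gate, not Sohu’s router; the function name and shapes are our own. In a real serving stack, the chosen expert ids would drive an AllToAll-style collective that ships each token’s activations to the devices holding those experts.

```python
import math

def topk_route(logits, k=2):
    """Illustrative top-k MoE gating: pick the k highest-scoring experts
    for a token and renormalize their softmax weights to sum to 1."""
    # softmax over the router logits (numerically stabilized)
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # keep only the top-k experts and renormalize their weights
    top = sorted(range(len(logits)), key=lambda i: probs[i], reverse=True)[:k]
    denom = sum(probs[i] for i in top)
    return [(i, probs[i] / denom) for i in top]
```

Because each token touches only k of the experts, routing turns a dense matmul into a sparse, communication-bound dispatch problem, which is exactly where the collectives work in this role matters.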

You may be a good fit if you have

  • Strong proficiency in Rust and/or C++; familiarity with PyTorch and/or JAX

  • Experience designing and optimizing collectives (e.g. NCCL, MPI collectives, XLA collectives)

  • Strong systems knowledge, including Linux internals, accelerator architectures (e.g. GPUs, TPUs), high-speed interconnects (e.g. NVLink, InfiniBand), and RDMA

  • Solid understanding of distributed systems concepts, algorithms, and challenges, including consensus protocols, consistency models, and communication patterns

  • Experience analyzing performance traces and logs from distributed systems and ML workloads

  • A knack for designing user-facing interfaces and libraries, and an eye for the elusive optimum between performance and usability

Strong candidates may also have experience with

  • Large language model architectures, particularly Mixture-of-Experts (MoE)

  • Network simulation techniques

  • Building low-latency, high-performance applications using both kernel-level and user-space networking stacks

  • Porting applications to non-standard or accelerator hardware platforms

  • Contributing to runtime systems with complex, well-documented interfaces, such as distributed storage systems or machine learning runtimes

  • Building applications with extensive SIMD (Single Instruction, Multiple Data) optimizations for performance-critical paths

Benefits

  • Full medical, dental, and vision packages, with generous premium coverage

  • Housing subsidy of $2,000/month for those living within walking distance of the office

  • Daily lunch and dinner in our office

  • Relocation support for those moving to West San Jose

Compensation Range

  • $175,000 - $275,000

How we’re different

Etched believes in the Bitter Lesson. We think most of the progress in the AI field has come from using more FLOPs to train and run models, and the best way to get more FLOPs is to build model-specific hardware. Larger and larger training runs encourage companies to consolidate around fewer model architectures, which creates a market for single-model ASICs.

We are a fully in-person team in West San Jose, and greatly value engineering skills. We do not have boundaries between engineering and research, and we expect all of our technical staff to contribute to both as needed.

Top Skills

C++
JAX
MPI
NCCL
PyTorch
Rust
XLA
The Company
HQ: Cupertino, CA
53 Employees
Year Founded: 2022

What We Do

By burning the transformer architecture into our chips, we’re creating the world’s most powerful servers for transformer inference.
