Inference Software Engineer

Reposted 3 Days Ago
San Jose, CA
In-Office
$175K-$275K Annually
Mid level
Artificial Intelligence • Hardware • Software
The Role
You will port models to new architectures, optimize runtime and communication layers, and develop performance profiling tools for inference operations.
Summary Generated by Built In

About Etched

Etched is building the world’s first AI inference system purpose-built for transformers, delivering over 10x higher performance and dramatically lower cost and latency than a B200. With Etched ASICs, you can build products that would be impossible with GPUs, such as real-time video generation models and extremely deep, parallel chain-of-thought reasoning agents. Backed by hundreds of millions of dollars from top-tier investors and staffed by leading engineers, Etched is redefining the infrastructure layer for the fastest-growing industry in history.

Job Summary

Etched’s Inference Software team enables optimal mapping of models to Sohu’s dataflow architecture and serves requests across multiple chips, hosts, and racks. We are seeking a highly skilled and motivated engineer to join our team as we work toward enabling Mixture-of-Experts (MoE) architectures on Sohu systems. You’ll build software that delivers frontier inference performance to satisfy exponentially growing serving demand.
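For context on the kind of problem this team works on: MoE inference routes each token to a small subset of expert networks via a learned gating function. The sketch below shows the standard top-k routing pattern in plain Python; it is purely illustrative background, and none of it reflects Sohu's actual (proprietary) runtime or APIs.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of gate logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def top_k_route(gate_logits, k=2):
    """Pick the k highest-scoring experts for one token and
    renormalize their gate weights so they sum to 1."""
    probs = softmax(gate_logits)
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    chosen = ranked[:k]
    total = sum(probs[i] for i in chosen)
    return [(i, probs[i] / total) for i in chosen]

# A token whose gate strongly prefers experts 1 and 3 (of 4):
routing = top_k_route([0.1, 2.0, -1.0, 1.5], k=2)
```

Serving this pattern efficiently is what makes MoE inference a systems problem: each token's experts may live on different chips, so routing decisions directly drive inter-chip communication.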

This is a generalist role: you will be expected to contribute to all parts of our stack. We also have more specialized openings for this team posted on the site.

Key responsibilities

  • Support porting state-of-the-art models to our architecture. Help build programming abstractions and testing capabilities to rapidly iterate on model porting

  • Scale and enhance Sohu’s runtime, including multi-node inference, intra-node execution, state management, and robust error handling

  • Optimize routing and communication layers using Sohu’s collectives

  • Develop tools for performance profiling and debugging, identifying bottlenecks and correctness issues
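To illustrate the communication-layer work mentioned above: Sohu's collectives are proprietary, but the classic pattern this kind of work builds on is the ring all-reduce. A minimal simulation in plain Python, shown only as background (the chunk/step indexing follows the textbook algorithm, not any Etched implementation):

```python
def ring_all_reduce(chunks_per_rank):
    """Simulate a ring all-reduce (sum) over n ranks, each holding n chunks.
    Runs in 2*(n-1) steps: n-1 reduce-scatter steps, then n-1 all-gather steps,
    with each rank sending one chunk to its ring neighbor per step."""
    n = len(chunks_per_rank)
    data = [list(c) for c in chunks_per_rank]  # data[rank][chunk]

    # Reduce-scatter: at step s, rank r sends chunk (r - s) % n to rank (r+1) % n,
    # which adds it in. Afterwards rank r holds the full sum of chunk (r+1) % n.
    for step in range(n - 1):
        sends = [(r, (r - step) % n, data[r][(r - step) % n]) for r in range(n)]
        for src, chunk, value in sends:
            data[(src + 1) % n][chunk] += value

    # All-gather: circulate each completed chunk around the ring so that
    # every rank ends up with every fully reduced chunk.
    for step in range(n - 1):
        sends = [(r, (r + 1 - step) % n, data[r][(r + 1 - step) % n]) for r in range(n)]
        for src, chunk, value in sends:
            data[(src + 1) % n][chunk] = value

    return data
```

The appeal of the ring topology is that each rank's send volume per step is constant, so total bandwidth use is near-optimal regardless of cluster size; production collectives layer pipelining and topology awareness on top of this idea.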

You may be a good fit if you have

  • Proficiency in Rust and/or C++

  • Good familiarity with PyTorch and/or JAX

  • Good familiarity with transformer architectures

  • Experience porting applications to non-standard or accelerator hardware platforms

  • Solid systems knowledge, including Linux internals, accelerator architectures (e.g., GPUs, TPUs), and high-speed interconnects (e.g., NVLink, InfiniBand)

Strong candidates may also have

  • Experience developing low-latency, high-performance applications using both kernel-level and user-space networking stacks

  • A deep understanding of distributed systems concepts, algorithms, and challenges, including consensus protocols, consistency models, and communication patterns

  • A solid grasp of large language model architectures, particularly Mixture-of-Experts (MoE)

  • Experience analyzing performance traces and logs from distributed systems and ML workloads

  • Experience building applications with extensive SIMD (Single Instruction, Multiple Data) optimizations for performance-critical paths

  • Familiarity with cluster orchestration tools (e.g., Kubernetes, Slurm) and ML platforms (e.g., Ray, Kubeflow)

  • Experience designing and implementing CI/CD pipelines for MLOps workflows

Benefits

  • Full medical, dental, and vision packages, with generous premium coverage

  • Housing subsidy of $2,000/month for those living within walking distance of the office

  • Daily lunch and dinner in our office

  • Relocation support for those moving to West San Jose

Compensation Range

  • $175,000 - $275,000

How we’re different

Etched believes in the Bitter Lesson. We think most of the progress in the AI field has come from using more FLOPs to train and run models, and the best way to get more FLOPs is to build model-specific hardware. Larger and larger training runs encourage companies to consolidate around fewer model architectures, which creates a market for single-model ASICs.

We are a fully in-person team in West San Jose, and greatly value engineering skills. We do not have boundaries between engineering and research, and we expect all of our technical staff to contribute to both as needed.

Top Skills

C++
InfiniBand
JAX
Kubeflow
Kubernetes
Linux
NVLink
PyTorch
Ray
Rust
Slurm

The Company
HQ: Cupertino, CA
53 Employees
Year Founded: 2022

What We Do

By burning the transformer architecture into our chips, we’re creating the world’s most powerful servers for transformer inference.
