Machine Learning Co-Design Researcher

Reposted 17 Days Ago
Be an Early Applicant
San Jose, CA
In-Office
150K-275K
Senior level
Artificial Intelligence • Hardware • Software
The Role
This role involves co-designing hardware instructions and optimizing transformer model operations for AI chip performance, in collaboration with software and hardware teams.
Summary Generated by Built In

About Etched

Etched is building AI chips that are hard-coded for individual model architectures. Our first product (Sohu) only supports transformers, but has an order of magnitude more throughput and lower latency than a B200. With Etched ASICs, you can build products that would be impossible with GPUs, like real-time video generation models and extremely deep & parallel chain-of-thought reasoning agents.

Key responsibilities

  • Translate core mathematical operations from transformer models into optimized operation sequences for Sohu

  • Develop and leverage a deep understanding of Sohu to co-design both HW instructions and model architecture operations to maximize model performance

  • Implement high-performance software components for the Model Toolkit

  • Collaborate with hardware engineers to maximize chip utilization and minimize latency

  • Implement efficient batching strategies and execution plans for inference workloads

  • Design and implement cutting-edge inference-time compute scaling methods

  • Alter and fine-tune model architectures or inference-time compute algorithms

  • Contribute to the evolution of our system architecture and programming model
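
As a toy illustration of what "translating core mathematical operations into optimized operation sequences" can look like, here is a hedged sketch. The op names and the flat IR below are invented for illustration; they are not Sohu's actual instruction set. The sketch lowers scaled dot-product attention into a linear sequence of primitive ops and interprets that sequence:

```python
# Hypothetical sketch: lowering softmax(Q @ K^T / sqrt(d)) @ V into a flat
# op sequence. The "ISA" (MATMUL, SCALE, SOFTMAX) is invented for this
# example and does not reflect Sohu's real instructions.
import math

def lower_attention(d_head):
    """Return an op sequence computing scaled dot-product attention."""
    return [
        ("MATMUL", "S", "Q", "K_T"),                   # S = Q @ K^T
        ("SCALE", "S", "S", 1.0 / math.sqrt(d_head)),  # S = S / sqrt(d)
        ("SOFTMAX", "P", "S", None),                   # P = row-wise softmax(S)
        ("MATMUL", "O", "P", "V"),                     # O = P @ V
    ]

def run(ops, env):
    """Tiny interpreter over pure-Python matrices (lists of lists)."""
    for op, dst, a, b in ops:
        if op == "MATMUL":
            A, B = env[a], env[b]
            env[dst] = [[sum(x * y for x, y in zip(row, col))
                         for col in zip(*B)] for row in A]
        elif op == "SCALE":
            env[dst] = [[x * b for x in row] for row in env[a]]
        elif op == "SOFTMAX":
            out = []
            for row in env[a]:
                m = max(row)                     # subtract max for stability
                e = [math.exp(x - m) for x in row]
                s = sum(e)
                out.append([x / s for x in e])
            env[dst] = out
    return env
```

A real toolchain would then fuse, reorder, and schedule these ops against the chip's actual resources; the point of the sketch is only the shape of the lowering step.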

Representative projects

  • Optimize operation sequences to maximize Sohu's computational resources for specific transformer architectures such as Llama 4.

  • Research and implement efficient memory management for KV cache sharing and prefix optimization

  • Develop algorithms for continuous batching and batch interleaving to improve throughput and/or latency

  • Research and implement model-specific inference-time acceleration algorithms such as speculative decoding, tree search, KV cache sharing, and priority scheduling, by interacting with the rest of the inference serving stack

  • Research and implement structured decoding and novel sampling algorithms for reasoning models
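
To make the KV-cache prefix-sharing project above concrete, here is a hedged toy sketch (the class and method names are invented for this example, not part of Etched's stack): a trie keyed on token IDs, so requests whose prompts share a prefix reuse the same cache slots instead of each allocating their own.

```python
# Toy illustration of KV-cache prefix sharing: requests that share a prompt
# prefix reuse cache slots via a trie over token IDs. Names are hypothetical.
class PrefixKVCache:
    def __init__(self):
        self.root = {}   # trie node: token_id -> child node
        self.slots = 0   # distinct KV slots allocated across all requests

    def insert(self, tokens):
        """Walk/extend the trie; return slots newly allocated for this request."""
        node, new = self.root, 0
        for t in tokens:
            if t not in node:
                node[t] = {}     # first request to reach this position pays
                self.slots += 1
                new += 1
            node = node[t]       # later requests reuse the existing slot
        return new
```

For example, two requests sharing a three-token system prompt:

```python
cache = PrefixKVCache()
cache.insert([1, 2, 3, 7])   # allocates 4 slots
cache.insert([1, 2, 3, 9])   # allocates only 1 new slot
# cache.slots == 5, versus 8 with no sharing
```

Production systems (e.g. vLLM's prefix caching or SGLang's RadixAttention) add block granularity, eviction, and copy-on-write on top of this basic idea.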

You may be a good fit if you have

  • Co-design expertise across both SW and HW domains

  • Strong software engineering skills with systems programming experience

  • Deep knowledge of transformer model architectures and/or inference serving stacks (vLLM, SGLang, etc.)

  • Strong mathematical skills, esp. in linear algebra

  • Ability to reason about performance bottlenecks and optimization opportunities

  • Experience working cross-functionally in diverse software and hardware organizations

Strong candidates may also have

  • Experience with hardware accelerators, ASICs, or FPGAs

  • Experience with the Rust programming language

  • Deep expertise in ML systems engineering and hardware/software co-design with demonstrated impact (contributions to open-source projects or published papers)

  • Track record of optimizing large co-designed SW / HW systems

Benefits

  • Full medical, dental, and vision packages, with generous premium coverage

  • Housing subsidy of $2,000/month for those living within walking distance of the office

  • Daily lunch and dinner in our office

  • Relocation support for those moving to West San Jose

Compensation Range

  • $150,000 - $275,000 

How we’re different

Etched believes in the Bitter Lesson. We think most of the progress in the AI field has come from using more FLOPs to train and run models, and the best way to get more FLOPs is to build model-specific hardware. Larger and larger training runs encourage companies to consolidate around fewer model architectures, which creates a market for single-model ASICs.

We are a fully in-person team in West San Jose, and greatly value engineering skills. We do not have boundaries between engineering and research, and we expect all of our technical staff to contribute to both as needed.

Top Skills

AI
ASIC
FPGA
Hardware Accelerators
Linear Algebra
Rust
Software Engineering
Transformer Architectures

The Company
HQ: Cupertino, CA
53 Employees
Year Founded: 2022

What We Do

By burning the transformer architecture into our chips, we’re creating the world’s most powerful servers for transformer inference.
