Machine Learning Co-Design Researcher

San Jose, CA
In-Office
$150K-$275K Annually
Senior level
Artificial Intelligence • Hardware • Software
The Role
The role involves co-designing hardware instructions and optimizing transformer model operations for AI chip performance, collaborating across software and hardware teams.

About Etched

Etched is building the world’s first AI inference system purpose-built for transformers, delivering over 10x higher performance and dramatically lower cost and latency than a B200. With Etched ASICs, you can build products that would be impossible with GPUs, like real-time video generation models and extremely deep, parallel chain-of-thought reasoning agents. Backed by hundreds of millions of dollars from top-tier investors and staffed by leading engineers, Etched is redefining the infrastructure layer for the fastest-growing industry in history.

Key responsibilities

  • Translate core mathematical operations from transformer models into optimized operation sequences for Sohu

  • Develop and leverage a deep understanding of Sohu to co-design both HW instructions and model architecture operations to maximize model performance

  • Implement high-performance software components for the Model Toolkit

  • Collaborate with hardware engineers to maximize chip utilization and minimize latency

  • Implement efficient batching strategies and execution plans for inference workloads

  • Design and implement cutting-edge inference-time compute scaling methods

  • Alter and fine-tune model architectures or inference-time compute algorithms

  • Contribute to the evolution of our system architecture and programming model

Representative projects

  • Optimize operation sequences to maximize Sohu's computational resources for specific transformer architectures such as Llama 4.

  • Research and implement efficient memory management for KV cache sharing and prefix optimization

  • Develop algorithms for continuous batching and batch interleaving to improve throughput and/or latency

  • Research and implement model-specific inference-time acceleration algorithms, such as speculative decoding, tree search, KV cache sharing, and priority scheduling, by interacting with the rest of the inference serving stack

  • Research and implement structured decoding and novel sampling algorithms for reasoning models
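To make one of the techniques above concrete, here is a minimal greedy speculative-decoding sketch. It is a hypothetical, framework-free toy: `draft_model` and `target_model` below are stand-in functions invented for illustration, not anything from Etched's stack. A cheap draft model proposes K tokens, the target model verifies them, and the longest agreeing prefix is kept plus one corrected token from the target:

```python
def draft_model(ctx):
    # Toy stand-in for a small, fast draft model: next token is (last + 1) % 5.
    return (ctx[-1] + 1) % 5

def target_model(ctx):
    # Toy stand-in for the full model: agrees with the draft except when the
    # last token is a multiple of 3, where it diverges.
    nxt = (ctx[-1] + 1) % 5
    return nxt if ctx[-1] % 3 else (nxt + 1) % 5

def speculative_decode(prompt, steps, k=4):
    seq = list(prompt)
    while len(seq) < len(prompt) + steps:
        # Draft phase: propose k tokens autoregressively with the cheap model.
        proposal, ctx = [], list(seq)
        for _ in range(k):
            t = draft_model(ctx)
            proposal.append(t)
            ctx.append(t)
        # Verify phase: accept the longest prefix where the target model's
        # greedy choice matches the proposal (on real hardware the k
        # verification steps run in one parallel pass).
        accepted, ctx = [], list(seq)
        for t in proposal:
            if target_model(ctx) == t:
                accepted.append(t)
                ctx.append(t)
            else:
                break
        # Always emit one token from the target: the correction at the first
        # mismatch, or a bonus token if every proposal was accepted.
        accepted.append(target_model(seq + accepted))
        seq.extend(accepted)
    return seq[:len(prompt) + steps]

def greedy_decode(prompt, steps):
    # Plain greedy decoding with the target model alone, for comparison.
    seq = list(prompt)
    for _ in range(steps):
        seq.append(target_model(seq))
    return seq
```

With deterministic greedy models like these, speculative decoding produces exactly the same sequence as plain greedy decoding from the target model; the win is that fewer (batched) target-model invocations are needed.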

You may be a good fit if you have

  • Co-design expertise across both SW and HW domains

  • Strong software engineering skills with systems programming experience

  • Deep knowledge of transformer model architectures and/or inference serving stacks (vLLM, SGLang, etc.)

  • Strong mathematical skills, especially in linear algebra

  • Ability to reason about performance bottlenecks and optimization opportunities

  • Experience working cross-functionally in diverse software and hardware organizations

Strong candidates may also have

  • Experience with hardware accelerators, ASICs, or FPGAs

  • Experience with the Rust programming language

  • Deep expertise in ML systems engineering and hardware/software co-design with demonstrated impact (contributions to open-source projects or published papers)

  • Track record of optimizing large co-designed SW / HW systems

Benefits

  • Full medical, dental, and vision packages, with generous premium coverage

  • Housing subsidy of $2,000/month for those living within walking distance of the office

  • Daily lunch and dinner in our office

  • Relocation support for those moving to West San Jose

Compensation Range

  • $150,000 - $275,000 

How we’re different

Etched believes in the Bitter Lesson. We think most of the progress in the AI field has come from using more FLOPs to train and run models, and the best way to get more FLOPs is to build model-specific hardware. Larger and larger training runs encourage companies to consolidate around fewer model architectures, which creates a market for single-model ASICs.

We are a fully in-person team in West San Jose, and greatly value engineering skills. We do not have boundaries between engineering and research, and we expect all of our technical staff to contribute to both as needed.

Top Skills

AI
ASIC
FPGA
Hardware Accelerators
Linear Algebra
Rust
Software Engineering
Transformer Architectures

The Company
HQ: Cupertino, CA
53 Employees
Year Founded: 2022

What We Do

By burning the transformer architecture into our chips, we’re creating the world’s most powerful servers for transformer inference.
