AI Researcher — Inference Optimization

Reposted 23 Days Ago
Be an Early Applicant
Hiring Remotely in World Golf Village, FL, USA
In-Office or Remote
Senior level
Artificial Intelligence • Information Technology • Software
The Role
As an AI Researcher, you will optimize inference performance for large-scale machine learning models, improving latency, throughput, and cost efficiency through model- and systems-level optimizations.
Summary Generated by Built In
Role Overview

We are seeking an AI Researcher with deep experience in inference optimization to design, evaluate, and deploy high-performance inference systems for large-scale machine learning models. You will work at the intersection of model architecture, systems engineering, and hardware-aware optimization, improving latency, throughput, and cost efficiency across real-world production environments.

Key Responsibilities
  • Research and develop techniques to optimize inference performance for large neural networks.

  • Improve latency, throughput, memory efficiency, and cost per inference.

  • Design and evaluate model-level optimizations (quantization, pruning, KV-cache optimization, architecture-aware simplifications).

  • Implement systems-level optimizations (dynamic batching, kernel fusion, multi-GPU inference, prefill vs. decode optimization).

  • Benchmark inference workloads across hardware accelerators.

  • Collaborate with engineering teams to deploy optimized inference pipelines.

  • Translate research insights into production-ready improvements.

Required Qualifications
  • Strong background in machine learning, deep learning, or AI systems.

  • Hands-on experience optimizing inference for large-scale models.

  • Proficiency in Python and modern ML frameworks (e.g., PyTorch).

  • Experience with inference tooling (e.g., Triton, TensorRT, vLLM, ONNX Runtime).

  • Ability to design experiments and communicate results clearly.

Preferred / Nice-to-Have Qualifications
  • Experience deploying production inference systems at scale.

  • Familiarity with distributed and multi-GPU inference.

  • Experience contributing to open-source ML or inference frameworks.

  • Authorship or co-authorship of peer-reviewed research papers in machine learning, systems, or related fields.

  • Experience working close to hardware (CUDA, ROCm, profiling tools).

What Success Looks Like
  • Measurable gains in latency, throughput, and cost efficiency.

  • Optimized inference systems running reliably in production.

  • Research ideas successfully translated into deployable systems.

  • Clear benchmarks and documentation that inform product decisions.

Relevant Research Areas (Bonus)
  • Long-context inference optimization

  • Speculative decoding

  • KV-cache compression and paging

  • Efficient decoding strategies

  • Hardware-aware inference design

The Company
HQ: San Francisco, California
20 Employees
Year Founded: 2023

What We Do

We enable serverless inference through our GPU orchestration and model load-balancing system. We unlock fine-tuning by letting organizations size their server fleet to throughput needs rather than the number of models in their catalogue. See it in action on our public cloud, which offers inference for 10,000+ open-weight models.


