We are seeking an AI Researcher with deep experience in inference optimization to design, evaluate, and deploy high-performance inference systems for large-scale machine learning models. You will work at the intersection of model architecture, systems engineering, and hardware-aware optimization, improving latency, throughput, and cost efficiency across real-world production environments.
Key Responsibilities
Research and develop techniques to optimize inference performance for large neural networks.
Improve latency, throughput, memory efficiency, and cost per inference.
Design and evaluate model-level optimizations such as quantization, pruning, KV-cache optimization, and architecture-aware simplifications (see the sketch after this list).
Implement systems-level optimizations (dynamic batching, kernel fusion, multi-GPU inference, prefill vs decode optimization).
Benchmark inference workloads across hardware accelerators.
Collaborate with engineering teams to deploy optimized inference pipelines.
Translate research insights into production-ready improvements.
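To give a purely illustrative flavor of the quantization and benchmarking work referenced above, here is a minimal CPU micro-benchmark comparing an fp32 stack of linear layers against PyTorch's dynamic int8 quantization. The layer sizes, batch size, and iteration count are arbitrary placeholders, not a prescription of how we benchmark.

```python
import time
import torch
import torch.nn as nn

# Toy stand-in for a model whose inference cost we want to reduce.
model_fp32 = nn.Sequential(*[nn.Linear(2048, 2048) for _ in range(8)]).eval()

# Dynamic int8 quantization of the Linear layers (CPU inference path).
model_int8 = torch.ao.quantization.quantize_dynamic(
    model_fp32, {nn.Linear}, dtype=torch.qint8
)

def bench(model, batch=8, iters=50):
    """Average per-iteration latency in seconds after a warm-up pass."""
    x = torch.randn(batch, 2048)
    with torch.no_grad():
        model(x)                      # warm-up
        start = time.perf_counter()
        for _ in range(iters):
            model(x)
        return (time.perf_counter() - start) / iters

print(f"fp32: {bench(model_fp32) * 1e3:.2f} ms/iter")
print(f"int8: {bench(model_int8) * 1e3:.2f} ms/iter")
```

A real harness would track throughput, memory, and accuracy deltas alongside latency, and would run on the actual serving accelerators rather than CPU.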
Qualifications
Strong background in machine learning, deep learning, or AI systems.
Hands-on experience optimizing inference for large-scale models.
Proficiency in Python and modern ML frameworks (e.g., PyTorch).
Experience with inference tooling (e.g., Triton, TensorRT, vLLM, ONNX Runtime).
Ability to design experiments and communicate results clearly.
Nice to Have
Experience deploying production inference systems at scale.
Familiarity with distributed and multi-GPU inference.
Experience contributing to open-source ML or inference frameworks.
Authorship or co-authorship of peer-reviewed research papers in machine learning, systems, or related fields.
Experience working close to hardware (CUDA, ROCm, profiling tools).
What Success Looks Like
Measurable gains in latency, throughput, and cost efficiency.
Optimized inference systems running reliably in production.
Research ideas successfully translated into deployable systems.
Clear benchmarks and documentation that inform product decisions.
Top Skills
Long-context inference optimization
Speculative decoding (see the sketch below)
KV-cache compression and paging
Efficient decoding strategies
Hardware-aware inference design
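As one example of these topics, here is a minimal sketch of the greedy acceptance variant of speculative decoding. `draft_model` and `target_model` are hypothetical callables mapping token ids of shape [batch, seq] to logits of shape [batch, seq, vocab], not a specific framework API, and the full method additionally uses a rejection-sampling acceptance rule that this sketch omits.

```python
import torch

@torch.no_grad()
def speculative_step(target_model, draft_model, prefix, k=4):
    """One greedy speculative-decoding step for batch size 1.

    The cheap draft model proposes k tokens autoregressively; the expensive
    target model then scores the whole proposed span in a single forward
    pass, and we keep the longest prefix of draft tokens it agrees with.
    """
    draft_tokens, ctx = [], prefix
    for _ in range(k):                               # drafting loop (cheap)
        logits = draft_model(ctx)[:, -1, :]
        nxt = logits.argmax(dim=-1, keepdim=True)    # shape [1, 1]
        draft_tokens.append(nxt)
        ctx = torch.cat([ctx, nxt], dim=-1)

    full = torch.cat([prefix] + draft_tokens, dim=-1)
    preds = target_model(full).argmax(dim=-1)        # [1, len(prefix) + k]

    accepted = []
    for i, tok in enumerate(draft_tokens):           # verification (one pass)
        # The target's prediction at position p is its choice for position p + 1.
        if preds[0, prefix.size(-1) + i - 1] == tok[0, 0]:
            accepted.append(tok)
        else:
            break
    # Always emit one token chosen by the target itself (free correction).
    bonus_pos = prefix.size(-1) + len(accepted) - 1
    accepted.append(preds[:, bonus_pos].unsqueeze(-1))
    return torch.cat([prefix] + accepted, dim=-1)
```

For a quick smoke test, a prefix like `torch.randint(0, 100, (1, 8))` and two toy callables such as `lambda ids: torch.randn(ids.size(0), ids.size(1), 100)` exercise the control flow; real speedups, of course, require a trained draft/target pair, and the acceptance rate determines how many target-model forward passes are saved.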
What We Do
We enable serverless inference via our GPU orchestration and model load-balancing system. We unlock fine-tuning by letting organizations size their server fleet to throughput needs rather than to the number of models in the catalogue.
See it in action on our public cloud, which offers inference for 10k+ open-weight models.