We’re looking for an AI Researcher focused on training optimization to help us push the efficiency, stability, and scalability of large-scale model training. You’ll work at the intersection of research and systems, developing novel techniques to reduce training cost, accelerate convergence, and improve model quality, and validating those ideas through rigorous experiments and publications.
This role is ideal for someone who enjoys turning research insights into practical training wins, and who has a track record (or strong ambition) of publishing applied ML research.
What You’ll Work On
Design and evaluate training optimization techniques for large models (e.g. optimization algorithms, schedulers, normalization, curriculum strategies)
Improve training efficiency and stability across long runs and large datasets
Research and implement methods such as:
Optimizer and scheduler innovations
Mixed-precision, low-precision, and memory-efficient training (see the sketch after this list)
Gradient noise reduction, scaling laws, and convergence analysis
Training-time regularization and robustness techniques
Run large-scale experiments, analyze results, and translate findings into actionable improvements
Author or co-author research papers, technical reports, or blog posts
Collaborate closely with infrastructure and inference teams to ensure training decisions translate to real-world performance
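To make the scope concrete, here is a minimal sketch of one such technique, mixed-precision training with PyTorch’s torch.amp. It assumes a recent PyTorch and a CUDA device; the model, optimizer, and data are hypothetical placeholders, not our actual training stack:

```python
import torch

# Hypothetical placeholders: any model, optimizer, and batch source would do.
model = torch.nn.Linear(1024, 1024).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
scaler = torch.amp.GradScaler("cuda")  # rescales the loss to avoid fp16 gradient underflow

for step in range(100):
    x = torch.randn(32, 1024, device="cuda")
    target = torch.randn(32, 1024, device="cuda")
    optimizer.zero_grad(set_to_none=True)
    # Forward pass under autocast: matmuls run in fp16 while numerically
    # sensitive ops (reductions, normalization) stay in fp32.
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        loss = torch.nn.functional.mse_loss(model(x), target)
    scaler.scale(loss).backward()  # backprop through the scaled loss
    scaler.step(optimizer)         # unscales grads and skips the step on inf/nan
    scaler.update()                # adapts the loss scale for the next step
```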
What We’re Looking For
Strong background in machine learning research, with an emphasis on training dynamics and optimization
Experience training large neural networks (LLMs, multimodal models, or large sequence models)
Publication experience in ML venues (e.g. NeurIPS, ICML, ICLR, ACL, EMNLP, COLM, arXiv) or equivalent high-quality open research
Solid understanding of:
Optimization theory and practice (see the scheduler sketch below)
Backpropagation, gradient flow, and training stability
Distributed and large-batch training
Proficiency in Python and modern ML frameworks (PyTorch preferred)
Ability to independently design experiments and reason from data
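As a flavor of the optimization-and-scheduling work above, here is a minimal sketch of a linear-warmup-plus-cosine-decay learning-rate schedule wired into PyTorch’s LambdaLR. The step counts and the placeholder model are illustrative assumptions, not prescribed values:

```python
import math
import torch

def warmup_cosine(step, warmup_steps=2_000, total_steps=100_000, min_ratio=0.1):
    # Linear warmup, then cosine decay to min_ratio of the peak LR:
    # a common baseline schedule for long large-model runs.
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    progress = min(1.0, (step - warmup_steps) / max(1, total_steps - warmup_steps))
    return min_ratio + (1.0 - min_ratio) * 0.5 * (1.0 + math.cos(math.pi * progress))

model = torch.nn.Linear(16, 16)  # placeholder model
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=warmup_cosine)
# The training loop would call scheduler.step() once per optimizer step.
```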
Nice to Have
Experience with non-standard architectures (e.g. RNN variants, long-context models, hybrid systems)
Experience optimizing training on GPUs at scale (FSDP, ZeRO, custom kernels; see the sketch after this list)
Contributions to open-source ML or research codebases
Comfort operating in fast-moving, ambiguous startup environments
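For the GPU-scaling point above, here is a minimal sketch of sharded data-parallel training with PyTorch FSDP, which shards state across ranks in the spirit of ZeRO stage 3. It assumes a torchrun launch and a placeholder model; a real run would add an auto-wrap policy and mixed-precision settings:

```python
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

# Assumes a `torchrun --nproc_per_node=<gpus> train.py` launch, which sets the
# rank and world-size environment variables that init_process_group reads.
dist.init_process_group(backend="nccl")
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

model = torch.nn.Transformer(d_model=512, nhead=8).cuda()  # placeholder model
# FSDP shards parameters, gradients, and optimizer state across ranks.
model = FSDP(model)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
```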
What We Offer
Real influence over core model training decisions
Freedom to pursue and publish novel research
Direct access to large-scale experiments and real production constraints
A small, senior team that values thinking deeply and shipping thoughtfully
What We Do
We enable serverless inference through our GPU orchestration and model load-balancing system, and we unlock fine-tuning by letting organizations size their server fleet to their throughput needs rather than the number of models in their catalogue.
See it in action on our public cloud, which offers inference for 10k+ open-weight models.