What you’ll do
- Design and implement highly efficient distributed training systems for large-scale deep learning models.
- Optimize parallelism strategies to improve performance and scalability across hundreds or thousands of GPUs.
- Develop low-level systems components and algorithms to maximize throughput and minimize memory and compute bottlenecks.
- Productionize training systems on the CentML Platform.
- Collaborate with researchers and engineers to bring cutting-edge model architectures and training techniques into production.
- Contribute to the design of APIs, abstractions, and UX that make it easier to scale models while maintaining usability and flexibility.
- Profile, debug, and tune performance at the system, model, and hardware levels.
- Participate in design discussions, code reviews, and technical planning to ensure the product aligns with business goals.
- Stay up to date with the latest advancements in large-scale model training and help translate research into practical, robust systems.
What you’ll need to be successful
- Bachelor’s, Master’s, or PhD in Computer Science, Computer Engineering, Software Engineering, or a related field, or equivalent work experience.
- 3+ years of experience in software development, preferably with Python and C++.
- Deep understanding of machine learning pipelines and workflows, distributed systems, parallel computing, and high-performance computing principles.
- Hands-on experience with large-scale training of deep learning models using frameworks such as PyTorch, Megatron Core, or DeepSpeed.
- Experience optimizing compute, memory, and communication performance in large model training workflows.
- Familiarity with GPU programming, CUDA, NCCL, and performance profiling tools.
- Solid grasp of deep learning fundamentals, especially as they relate to transformer-based architectures and training dynamics.
- Experience working with cloud platforms (AWS, GCP, or Azure) and containerization tools (Docker, Kubernetes).
- Ability to work closely with both research and engineering teams, translating evolving needs into robust infrastructure.
- Excellent problem-solving skills, with the ability to debug complex systems.
- A passion for building high-impact tools that push the boundaries of what’s possible with large-scale AI.
Bonus points if you have
- Experience building tools or platforms for ML model training or fine-tuning.
- Experience building backends (e.g., with FastAPI) and frontends (e.g., with React).
- Experience building and optimizing LLM inference engines (e.g., vLLM, SGLang).
- Exposure to DevOps practices, CI/CD pipelines, and infrastructure as code.
- Familiarity with MLOps concepts, including model versioning and serving.
What We Do
We pioneer novel technology to enhance computing efficiency, making AI accessible for innovation and to benefit the global community.
We believe honesty builds integrity, honing craftsmanship delivers excellence, and collaboration fosters community.
Why Work With Us
Our journey began in the esteemed Efficient Computing Systems lab at the University of Toronto, under the leadership of our CEO, Gennady Pekhimenko. Today, the EcoSystems lab stands proudly as one of the world’s foremost authorities in Machine Learning Systems.
Our founding team is made up of experts in AI, ML compilers, and ML hardware and has led