Responsibilities
- Design and lead development of the deployment infrastructure for the CentML platform, which manages the hardware resources needed to deploy ML training and inference applications.
- Implement GPU cluster scheduling solutions for large-scale ML training and inference workloads to utilize the cluster's hardware resources efficiently.
- Communicate with our product teams to define new features and goals for improving the CentML platform.
Qualifications
- 4+ years of experience working with containerized deployment systems and tooling (e.g., Kubernetes, OpenShift, Terraform).
- Contributions to Kubernetes and expertise in container runtime technologies such as Docker Engine, containerd, or CRI-O are a big plus.
- Experience deploying and managing cloud infrastructure on AWS, GCP, or Azure.
- Past experience building GPU clusters for large-scale ML training and inference is desirable.
- Knowledge of GPU architecture and NVIDIA GPU virtualization technologies is highly desirable.
- Strong coding skills in languages such as Python, Java, Go, and/or C/C++.
What We Do
We pioneer novel technology to enhance computing efficiency, making AI accessible for innovation and beneficial to the global community.
We believe honesty builds integrity, honing craftsmanship delivers excellence, and collaboration fosters community.
Why Work With Us
Our journey began in the esteemed Efficient Computing Systems lab at the University of Toronto, under the leadership of our CEO, Gennady Pekhimenko. Today, the EcoSystems lab stands proudly as one of the world’s foremost authorities in Machine Learning Systems.
Our founding team is made up of experts in AI, ML compilers and ML hardware and has led