Why Kumo.ai?
- Work alongside world-class engineers & scientists (ex-Airbnb, Pinterest, LinkedIn, Stanford).
- Be a foundational voice in designing a platform powering enterprise-scale AI.
- Competitive Series B compensation package (salary + meaningful equity).
The Opportunity
- The Cloud Infrastructure team builds and operates our Kubernetes-based, multi-cloud AI platform across AWS, Azure, and GCP.
- As a Cloud Infrastructure Engineer, you’ll work on scaling, securing, and optimizing the platform that powers massive multi-tenant clusters running Big Data and AI/ML workloads.
- You’ll collaborate closely with senior engineers, ML scientists, and product teams to deliver automation, improve reliability, and expand our multi-cloud capabilities.
- This role offers the chance to deepen your Kubernetes and cloud expertise while taking ownership of impactful projects.
What You’ll Do
- Deploy, operate, and maintain infrastructure across AWS, Azure, and GCP.
- Build and manage Kubernetes clusters (EKS, AKS, GKE) with a focus on performance, availability, and cost efficiency.
- Develop and maintain automation using Infrastructure-as-Code tools (Terraform, Pulumi, Crossplane).
- Implement and enhance GitOps workflows using Argo CD or Flux.
- Set up and maintain observability systems (Prometheus, Grafana, OpenTelemetry) to monitor workloads and clusters (see the brief sketch after this list).
- Collaborate with the team to design, test, and roll out improvements to scaling and reliability.
- Troubleshoot incidents and participate in on-call rotations to ensure platform uptime.
- Contribute to security best practices, including RBAC, tenant isolation, and cloud identity management.
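To give a concrete flavor of the observability work above, here is a minimal Python sketch (using the prometheus_client library) of how a workload might expose custom metrics for Prometheus to scrape. The metric names, labels, and port are illustrative assumptions, not part of Kumo's actual stack.

```python
# Minimal instrumentation sketch: expose custom metrics for Prometheus to scrape.
# Metric names, labels, and the port below are illustrative assumptions only.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Hypothetical metrics for a batch inference workload.
JOBS_TOTAL = Counter("inference_jobs_total", "Inference jobs processed", ["status"])
JOB_LATENCY = Histogram("inference_job_duration_seconds", "Time spent per inference job")

def handle_job() -> None:
    """Simulate one unit of work and record its outcome and latency."""
    with JOB_LATENCY.time():
        time.sleep(random.uniform(0.05, 0.2))
    status = "ok" if random.random() > 0.1 else "error"
    JOBS_TOTAL.labels(status=status).inc()

if __name__ == "__main__":
    start_http_server(8000)  # metrics become available at :8000/metrics
    while True:
        handle_job()
```

A Prometheus server (or an OpenTelemetry Collector with a Prometheus receiver) would then scrape the :8000/metrics endpoint and feed dashboards and alerts in Grafana.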
What You Bring
- 3–5 years of experience building or operating cloud-native infrastructure in production.
- Hands-on experience with at least one major cloud provider (AWS, Azure, or GCP), ideally with exposure to multi-cloud environments.
- Solid understanding of Kubernetes concepts and operational experience with production clusters.
- Proficiency with Infrastructure-as-Code tools (Terraform, Pulumi, or similar).
- Experience with GitOps workflows and tools like Argo CD, Flux, or Argo Workflows.
- Familiarity with monitoring, logging, and tracing for distributed systems.
- Scripting or programming skills in Python, Go, or Bash.
- Strong problem-solving skills and a collaborative approach.
Nice to Have
- Experience with multi-tenant Kubernetes clusters for AI/ML or big data workloads.
- Knowledge of compliance and security standards (SOC2, GDPR, ISO27001).
- Contributions to open-source cloud-native projects.
- Familiarity with Kubernetes operators, controllers, or custom resources (a toy example follows below).
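On that last point, a toy Kubernetes controller written with the Python kopf framework might look like the sketch below. The custom resource group, kind, and fields (a hypothetical TenantWorkspace) are invented purely for illustration and are not Kumo APIs.

```python
# Toy operator sketch using the kopf framework (pip install kopf).
# The TenantWorkspace group/version/plural and its fields are hypothetical.
import kopf

@kopf.on.create("kumo.example.com", "v1", "tenantworkspaces")
def on_create(spec, name, namespace, logger, **kwargs):
    """React when a hypothetical TenantWorkspace custom resource is created."""
    quota = spec.get("gpuQuota", 0)
    logger.info(f"Provisioning workspace {name} in {namespace} with GPU quota {quota}")
    # A real controller would create namespaces, quotas, and RBAC bindings here.
    return {"phase": "Provisioned"}

@kopf.on.delete("kumo.example.com", "v1", "tenantworkspaces")
def on_delete(name, namespace, logger, **kwargs):
    """Tear down per-tenant resources when the custom resource is deleted."""
    logger.info(f"Tearing down workspace {name} in {namespace}")
```

Run locally with `kopf run controller.py` against a cluster where the matching CustomResourceDefinition is installed.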
What We Do
Democratizing AI on the Modern Data Stack!
The team behind PyG (PyG.org) is working on a turnkey solution for AI over large-scale data warehouses. We believe the future of ML is a seamless integration between modern cloud data warehouses and AI algorithms. Our ML infrastructure massively simplifies the training and deployment of ML models on complex data.
With over 40,000 monthly downloads and nearly 13,000 GitHub stars, PyG is the ultimate platform for training and developing Graph Neural Network (GNN) architectures. GNNs, one of the hottest areas of machine learning today, are a class of deep learning models that generalize Transformer and CNN architectures and let us apply the power of deep learning to complex data. GNNs are unique in the sense that they can be applied to data of different shapes and modalities.