In this role you will build scalable systems to power the Kumo platform, making it THE platform for running Big Data and AI workloads. As an early member of the infrastructure team, you will build the platform that scales Kumo to huge cloud datasets while maximizing the productivity of the dozens of engineers and future customers who will use it for years to come. You will work with a diverse group of ML scientists, infrastructure engineers, product engineers, and leaders to influence how we productionize and scale new ML technologies, build tools that increase our velocity, and shape full-stack user experiences. At Kumo, engineers wear many hats: you will design and build multiple core systems from scratch, making key design decisions that greatly influence the product direction. You should also be passionate about foundational infrastructure work, such as managing model lifecycles and MLOps, CI/CD, and advanced packaging, versioning, and deployment strategies.
The Value You'll Add:
- Build and extend components of the core Kumo infrastructure
- Participate as a key responder in our incident management process, and develop tools that improve root-cause analysis and reduce MTTR (mean time to respond)
- Build and automate CI/CD pipelines and release tooling to support continuous delivery and true zero-downtime deployments across different cloud providers using the latest cloud-native technologies
- Build the Kumo MLOps platform, which will detect data drift, track model versions, report on production model performance, alert the team to anomalous model behavior, and run programmatic A/B tests on production models
- Work on advanced tools for the world’s leading cloud-native machine learning engine, built on graph deep learning technology
Your Foundation:
- BS in Computer Science (MS or PhD preferred)
- Must have: B2B SaaS experience and experience architecting large-scale distributed systems
- 3+ years of experience writing production code in Java, JavaScript, C++, or Python (no new grads)
- Experience productionizing cloud applications, including Docker and Kubernetes; CI/CD and advanced packaging, versioning, and deployment strategies; containers and serverless architectures; online/offline feature stores; and model performance monitoring
- Familiarity with popular MLOps tooling from cloud vendors such as GCP (Vertex AI), AWS (SageMaker), or Azure Machine Learning, as well as MLflow, Kubeflow, etc.
- Proficiency with general full-stack application development, such as defining data models, building abstractions for business logic, and developing customer-facing web front ends or public APIs/SDKs for the application
- Experience with infrastructure-as-code development (e.g., Terraform, CloudFormation, Ansible, Chef, Bash scripting)
- Core understanding of data modeling and the fundamentals of data engineering (e.g., integrations/connectors, pipelines, ETL/ELT processes)
Your Extra Special Sauce:
- AWS certifications, such as Advanced Networking, Security, DevOps Engineer, or Solutions Architect Professional
- Working knowledge of OAuth, OIDC, SAML, JWT, and identity and access management
- Proficiency with asynchronous Python frameworks such as FastAPI
We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
What We Do
Democratizing AI on the Modern Data Stack!
The team behind PyG (PyG.org) is working on a turn-key solution for AI over large-scale data warehouses. We believe the future of ML is a seamless integration between modern cloud data warehouses and AI algorithms. Our ML infrastructure massively simplifies the training and deployment of ML models on complex data.
With over 40,000 monthly downloads and nearly 13,000 GitHub stars, PyG is the ultimate platform for training and development of Graph Neural Network (GNN) architectures. GNNs, one of the hottest areas of machine learning today, are a class of deep learning models that generalize Transformer and CNN architectures, enabling us to apply the power of deep learning to complex data. GNNs are unique in the sense that they can be applied to data of different shapes and modalities.