Key Responsibilities
- Architect and optimize ML platforms to support cutting-edge machine learning and deep learning models.
- Collaborate closely with cross-functional teams to translate business objectives into scalable engineering solutions.
- Lead the end-to-end development and operation of high-performance, cost-effective inference systems for a diverse range of models, including state-of-the-art large language models (LLMs).
- Provide technical leadership and mentorship to cultivate a high-performing engineering team.
- Develop CI/CD workflows for ML models and data pipelines using tools like Cloud Build, GitHub Actions, or Jenkins.
- Automate model training, validation, and deployment across development, staging, and production environments.
- Monitor and maintain ML models in production using Vertex AI Model Monitoring, logging (Cloud Logging), and performance metrics.
- Ensure reproducibility and traceability of experiments using ML metadata tracking tools like Vertex AI Experiments or MLflow.
- Manage model versioning and rollbacks using Vertex AI Model Registry or custom model management solutions.
- Collaborate with data scientists and software engineers to translate model requirements into robust and scalable ML systems.
- Optimize model inference infrastructure for latency, throughput, and cost efficiency using GCP services such as Cloud Run and Google Kubernetes Engine (GKE), or custom serving frameworks.
- Implement data and model governance policies, including auditability, security, and access control using IAM and Cloud DLP.
- Stay current with evolving GCP MLOps practices, tools, and frameworks to continuously improve system reliability and automation.
Qualifications
- Bachelor's degree in Computer Science with a minimum of 6+ years of relevant industry experience, or a Master's degree in Computer Science with at least 4+ years of relevant industry experience.
- Proven experience implementing MLOps solutions on Google Cloud Platform (GCP) using services such as Vertex AI, Cloud Storage, BigQuery, Cloud Functions, and Dataflow.
- Proven experience in building and scaling agentic AI systems in production environments.
- Hands-on experience with leading deep learning frameworks and libraries such as TensorFlow, PyTorch, Hugging Face, and LangChain.
- Solid foundation in machine learning algorithms, natural language processing, and statistical modeling.
- Strong grasp of fundamental computer science concepts including algorithms, distributed systems, data structures, and database management.
- Ability to tackle complex challenges and devise effective solutions, applying critical thinking to approach problems from multiple angles and propose innovative approaches.
- Experience working effectively in a remote setting, with strong written and verbal communication skills and the ability to collaborate with team members and stakeholders to ensure a clear understanding of technical requirements and project goals.
Travel
- Travel as per business requirements
Sponsorship
- Candidate must be legally able to work for any employer in the US
- This role is not sponsorship eligible
What We Do
At Rackspace Technology, we accelerate the value of the cloud during every phase of digital transformation. By managing apps, data, security and multiple clouds, we are the best choice to help customers get to the cloud, innovate with new technologies and maximize their IT investments. As a recognized Gartner Magic Quadrant leader, we are uniquely positioned to close the gap between the complex reality of today and the promise of tomorrow. Passionate about customer success, we provide unbiased expertise, based on proven results, across all the leading technologies. And across every interaction worldwide, we deliver Fanatical Experience™, the best customer service experience in the industry. Rackspace has been honored by Fortune, Forbes, Glassdoor and others as one of the best places to work.
