LLMOps Architect

Bengaluru, Karnataka
In-Office
Senior level
Natural Language Processing • Analytics • Design
The Role
The LLMOps Architect will design ML platforms, manage LLMs, implement CI/CD pipelines, optimize cloud usage, and mentor teams, with a focus on cost-effectiveness and resilience.
About Enterpret

Enterpret is at the forefront of AI-native applications, unlocking the power of customer feedback for businesses. We centralize feedback from every channel and transform it into actionable insights that drive customer-centric decisions for teams at the world's leading companies like Perplexity, Notion, Canva, and Figma. Backed by investors such as Kleiner Perkins and Peak XV, we're redefining how businesses understand and act on the voice of their customers.

About the Role

As the LLMOps Architect at Enterpret, you'll be responsible for how LLMs are fine-tuned, how prompts are managed, how we run evals, and how we optimize for cost and speed—both at the experimentation stage and in production. This is a foundational, high-ownership role where you'll work directly with the OpenAI, Anthropic, and AWS teams (all of whom we partner with closely) to build world-class ML infrastructure.

You'll work closely with ML researchers, backend engineers, and product teams to ensure our AI systems are resilient, secure, and cost-efficient as we grow 10x. Key success metrics include the speed of experimentation, time to productionization, and the quality of our models. You'll report directly to the CTO.

What You'll Do
  • Design and evolve Enterpret's ML platform for training, serving, and retraining our encoder models and LLMs, built on AWS, Terraform, OpenAI, and Anthropic.

  • Build CI/CD pipelines tailored for ML—including model versioning, testing, canary releases, rollbacks, and gated production deploys.

  • Deploy and manage model serving systems for both real-time inference (e.g., tagging support tickets on the fly) and batch pipelines (e.g., analyzing historical product feedback).

  • Set up observability for model performance and data drift—using Braintrust and custom alerts to catch issues before they affect customers.

  • Lead incident response, root-cause analysis, and postmortems for ML systems—ensuring uptime for the insights that product teams rely on.

  • Track and optimize cloud usage for ML workflows, making model delivery cost-aware and aligned with product usage.

  • Implement governance and security across the stack—owning IAM, data access, auditability, and model explainability where needed.

  • Partner with ML and product teams to productionize the GenAI and ML models powering our Knowledge Graph and Adaptive Taxonomy engine, tackling problems in retrieval, encoder/LLM fine-tuning, and reinforcement learning.

  • Evaluate tools for model registry, feature stores, and orchestration—and build where needed to keep the feedback loop fast.

  • Champion best practices in MLOps across the org—mentoring engineers and setting scalable foundations for the future.

  • Coach our team of researchers as they transition into engineering, helping them build the capability to self-serve these tools rather than doing the work for them.

What It Takes
  • A minimum of 6 years' experience in MLOps and ML infrastructure, ideally with exposure to designing, deploying, and scaling machine learning systems in fast-paced, product-driven environments such as startups or high-growth companies.

  • Deep expertise with AWS (SageMaker, EC2, EKS, S3, IAM), infrastructure-as-code (Terraform), and container orchestration (Docker, Kubernetes).

  • Strong Python skills, with bonus points for Go, Bash, or Rust scripting where appropriate.

  • Hands-on experience with CI/CD systems like GitHub Actions, ArgoCD, or Jenkins—especially for ML model delivery.

  • Proven ability to monitor and maintain production ML systems, including model drift, latency, uptime, and alerting.

  • Comfort with cloud cost optimization, resource provisioning, and auto-scaling for ML-heavy environments.

  • Familiarity with model serving stacks and experimentation tools (MLflow, LangSmith, etc.).

  • Bonus: exposure to GenAI workflows (LangChain, vector DBs, RAG), encoder/LLM model tuning, reinforcement learning, or responsible AI practices.

  • Track record of mentoring, collaborating across functions, and taking full ownership of systems in production.

  • You hate repeated manual work and have a strong drive to automate everything.

  • Proficiency with AI coding agents like Claude and Cursor, using them to multiply your effectiveness.

Why Enterpret
  • ML at the Core: You won't be supporting ML—you'll be enabling the core product.

  • High Impact, Early Ownership: Define our MLOps foundation, influence every model's path to production, and shape how product teams experience insights.

  • Work With Sharp People: Collaborate with researchers, engineers, and product builders solving complex problems every week.

  • Focused, Fast Environment: No heavy process—just smart, principled builders shipping high-quality infrastructure.

  • Comp + Culture: Competitive salary, meaningful equity, full-stack healthcare, generous leave, and a team-first culture built on trust and ownership.

What We Value

At Enterpret, we operate with a deep sense of ownership — we play for the team and do what it takes to win together. We care personally for our teammates while pushing each other with honest, actionable feedback. Above all, we approach everything with humility and a drive to keep learning and getting better.


Equal Opportunities

We are an equal opportunity employer. We ensure that none of our employees or prospective employees receives less favourable treatment as a result of age, sex, disability, marital status, colour, race, religion or ethnic origin. Equally we aim to ensure that no such employee is disadvantaged by terms and conditions of employment which cannot be justified.


Top Skills

AWS, SageMaker, EC2, EKS, S3, IAM, Terraform, Docker, Kubernetes, Python, Go, Bash, Rust, GitHub Actions, ArgoCD, Jenkins, MLflow, LangSmith

The Company
San Francisco, CA
31 Employees
Year Founded: 2020

What We Do

Enterpret enables companies to analyse their customer feedback at scale. We are on a mission to help companies build better products by uncovering insights from their customer feedback.

We are solving complex problems in API design, analytics UI/UX, and natural language processing, pushing the envelope of what's possible by applying first-principles thinking.

Reach out to us at [email protected]
