Applied ML Engineer, Data

Posted Yesterday
28 Locations
In-Office or Remote
200K-260K Annually
Mid level
Artificial Intelligence • Software
The Role
As an Applied ML Engineer, you will build and scale data pipelines for video generation, ensuring data quality and optimizing preprocessing workflows.
Summary Generated by Built In

About Cantina:

Cantina Labs is a social AI company developing a suite of advanced real-time models that push the boundaries of expression, personality, and realism. We bring characters to life, transforming how people tell stories, connect, and create. We build and power ecosystems. Cantina, our flagship social AI platform, is just the beginning.

If you're excited about the potential AI has to shape human creativity and social interactions, join us in building the future!

About the Role:

We are looking for an Applied ML Engineer to build and scale the data pipelines behind our large video generation models. This role is focused on collecting large amounts of relevant video data, preparing high-quality training samples, and developing robust preprocessing, filtering, and parsing workflows. You'll orchestrate annotation pipelines across platforms such as MTurk and own the full lifecycle of training data, from raw ingestion to clean, model-ready samples that directly drive quality improvements. This role sits at the intersection of data engineering and ML research, making it central to how we turn messy real-world data into the fuel that moves our models forward.

What You’ll Do:

  • Build and maintain data pipelines for large video generation models, including data ingestion, parsing, filtering, preprocessing, and dataset curation at scale, using tools such as AWS S3 and DynamoDB.

  • Design and run annotation workflows across crowd platforms such as MTurk (Amazon Mechanical Turk) and Prolific, including task design, quality control, and label validation.

  • Train, evaluate, and improve smaller supporting models used for data filtering, quality assessment, preprocessing, or other parts of the ML pipeline.

  • Partner closely with research and engineering teams to turn experimental workflows into scalable, repeatable systems that support model training and evaluation.

  • Own data quality across the pipeline by identifying bottlenecks, failure modes, and low-quality sources, and continuously improving tooling and processes.

  • Build internal tools and automation that make it easier to prepare datasets, launch annotation jobs, monitor outputs, and support model development end to end.

  • Drive larger pipeline projects from start to finish, such as new dataset creation efforts or upgrades to labeling and preprocessing infrastructure.

  • Work within a Kubernetes-based training infrastructure, ensuring datasets are properly prepared, formatted, and delivered to training clusters.

  • Profile and optimize research model inference scripts used in preprocessing steps, ensuring that model-driven filtering and transformation stages run within practical time and cost constraints when applied to large-scale raw data.

What You’ll Bring:

  • 3+ years of experience in machine learning, applied ML, data pipelines, or related engineering roles, ideally working on large-scale multimodal, video, or vision-based systems.

  • Strong programming skills in Python and solid experience building reliable data processing and preprocessing pipelines for ML workflows.

  • Hands-on experience preparing training data for ML models, including parsing, filtering, dataset curation, quality control, and large-scale data handling using tools such as AWS S3 and DynamoDB.

  • Familiarity with annotation and labeling workflows, including task design, orchestration of vendors or crowd platforms such as MTurk or Prolific, and methods for ensuring label quality.

  • Experience working with Kubernetes for orchestrating distributed workloads, including data preprocessing, pipeline execution, and dataset delivery to training clusters.

  • Comfort working across cloud and on-demand compute environments such as AWS and RunPod, with the ability to port and optimize pipelines across infrastructure.

  • Familiarity with distributed data processing frameworks and experience designing systems that operate reliably at scale across many nodes or workers.

  • Working knowledge of PyTorch and the broader deep learning stack, with the ability to read, debug, and optimize research model inference code for use in production preprocessing pipelines.

  • Ability to work cross-functionally with research and engineering teams and translate experimental ideas into robust, scalable systems.

  • Bachelor's, Master's, or PhD in Computer Science, Machine Learning, Engineering, Mathematics, or a related technical field; experience in generative video, computer vision, or multimodal ML is strongly preferred.

  • Bonus: Experience training, evaluating, or fine-tuning smaller ML models used for classification, filtering, ranking, quality assessment, or other supporting tasks in an ML pipeline.

Compensation:

The anticipated annual base salary range for this role is between $200,000 and $260,000 (€170,000-€225,000). When determining compensation, a number of factors will be considered, including skills, experience, job scope, location, and competitive compensation market data.

Benefits for U.S.-based roles:

  • Competitive salary and generous company equity

  • Medical, dental, and vision insurance – 99.99% of premiums covered by Cantina

  • 42 days of paid time off, including:

    • 15 PTO days

    • 10 sick days

    • 15 company holidays

    • 2 floating holidays

  • Generous parental leave & fertility support

  • 401(k) retirement savings plan

  • Lifestyle spending account – $500/month to use however you’d like

  • Complimentary lunch and snacks for in-office employees

  • One Medical membership, and more!

Top Skills

AWS S3
DynamoDB
Kubernetes
MTurk
Prolific
Python
PyTorch

The Company
HQ: San Francisco, California
364 Employees
Year Founded: 2023

What We Do

Cantina Labs, founded by Sean Parker, is a new social platform with the most advanced AI character creator. Build, share, and interact with AI bots and your friends directly in the Cantina or across the internet. Cantina bots are lifelike, social creatures, capable of interacting wherever humans go on the internet. Recreate yourself using powerful AI, imagine someone new, or choose from thousands of existing characters. Bots are a new media type that offer a way for creators to share infinitely scalable and personalized content experiences combined with seamless group chat across voice, video, and text.
