Member of Technical Staff - ML Research Engineer; Multi-Modal - Vision

2 Locations
In-Office
Mid level
Artificial Intelligence • Information Technology
The Role
Design and optimize AI models, collaborate on multimodal data systems, contribute to research, and improve model performance across platforms.
Work With Us

At Liquid, we’re not just building AI models; we’re redefining the architecture of intelligence itself. Spun out of MIT, we’re on a mission to build efficient AI systems at every scale. Our Liquid Foundation Models (LFMs) operate where others can’t: on-device, at the edge, under real-time constraints. We’re not iterating on old ideas; we’re architecting what comes next.

We believe great talent powers great technology. The Liquid team is a community of world-class engineers, researchers, and builders creating the next generation of AI. Whether you're helping shape model architectures, scaling our dev platforms, or enabling enterprise deployments—your work will directly shape the frontier of intelligent systems.

This Role Is For You If:
  • You have experience with machine learning at scale

  • You’re proficient in PyTorch and familiar with distributed training frameworks like DeepSpeed, FSDP, or Megatron-LM (a minimal FSDP sketch follows this list)

  • You’ve worked with multimodal data (e.g., image-text, video, visual documents, audio)

  • You’ve contributed to research papers, open-source projects, or production-grade multimodal model systems

  • You understand how data quality, augmentations, and preprocessing pipelines can significantly impact model performance—and you’ve built tooling to support that

  • You enjoy working in interdisciplinary teams across research, systems, and infrastructure, and can translate ideas into high-impact implementations
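
To make the distributed-training requirement above concrete, here is a minimal sketch of wrapping a toy PyTorch model in FSDP. The model, hyperparameters, and filename are placeholders for illustration, not Liquid’s actual training stack.

```python
# Minimal FSDP sketch (illustrative only). Launch with torchrun so that
# RANK / WORLD_SIZE / LOCAL_RANK are set, e.g.:
#   torchrun --nproc_per_node=8 fsdp_sketch.py   (filename is hypothetical)
import os

import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.nn import TransformerEncoder, TransformerEncoderLayer


def main() -> None:
    dist.init_process_group(backend="nccl")      # one process per GPU
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Toy transformer standing in for a multimodal backbone.
    model = TransformerEncoder(
        TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True),
        num_layers=6,
    ).cuda()

    # FSDP shards parameters, gradients, and optimizer state across ranks.
    model = FSDP(model)
    optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

    # One illustrative training step on random data with a dummy objective.
    x = torch.randn(8, 128, 512, device="cuda")
    loss = model(x).pow(2).mean()
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```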

Desired Experience:
  • You’ve designed and trained Vision Language Models

  • You care deeply about empirical performance and know how to design, run, and debug large-scale training experiments on distributed GPU clusters

  • You’ve developed vision encoders or integrated them into language pretraining pipelines with autoregressive or generative objectives (see the sketch after this list)

  • You have experience working with large-scale video or document datasets, understand the unique challenges they pose, and can manage massive datasets effectively

  • You’ve built tools for data deduplication, image-text alignment, or vision tokenizer development
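
As a rough illustration of the vision-encoder integration mentioned above, the sketch below projects pooled image features into a tiny autoregressive language model’s embedding space and trains with a next-token objective. Every module, dimension, and name here is an invented placeholder, not Liquid’s architecture.

```python
# Toy vision-language fusion sketch (illustrative only): image features are
# projected into the LM embedding space and prepended to the text tokens.
import torch
import torch.nn as nn


class ToyVLM(nn.Module):
    def __init__(self, vocab_size=32000, d_model=512, n_image_tokens=16):
        super().__init__()
        # Stand-in vision encoder: 16x16 patch embedding over a 224x224 image.
        self.vision_encoder = nn.Sequential(
            nn.Conv2d(3, d_model, kernel_size=16, stride=16),  # (B, d, 14, 14)
            nn.Flatten(2),                                      # (B, d, 196)
        )
        self.pool = nn.AdaptiveAvgPool1d(n_image_tokens)        # (B, d, 16)
        # Projector maps vision features into the LM embedding space.
        self.projector = nn.Linear(d_model, d_model)
        # Tiny "language model": embedding + transformer + LM head.
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.lm = nn.TransformerEncoder(layer, num_layers=2)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, pixels, input_ids):
        img = self.pool(self.vision_encoder(pixels)).transpose(1, 2)  # (B, 16, d)
        img = self.projector(img)
        txt = self.embed(input_ids)                                   # (B, T, d)
        seq = torch.cat([img, txt], dim=1)                            # image tokens first
        # Causal mask so each position attends only to earlier positions.
        causal = torch.triu(
            torch.full((seq.size(1), seq.size(1)), float("-inf")), diagonal=1
        )
        hidden = self.lm(seq, mask=causal)
        return self.lm_head(hidden[:, img.size(1):])  # logits for text positions only


# Next-token prediction on the text, conditioned on the image prefix.
model = ToyVLM()
pixels = torch.randn(2, 3, 224, 224)
input_ids = torch.randint(0, 32000, (2, 32))
logits = model(pixels, input_ids)
loss = nn.functional.cross_entropy(
    logits[:, :-1].reshape(-1, logits.size(-1)), input_ids[:, 1:].reshape(-1)
)
loss.backward()
```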

What You'll Actually Do:
  • Investigate and prototype new model architectures that optimize inference speed, including on edge devices

  • Lead or contribute to ablation studies and benchmark evaluations that inform architecture and data decisions

  • Build and maintain evaluation suites for multimodal performance across a range of public and internal tasks (a toy harness along these lines follows this list)

  • Collaborate with the data and infrastructure teams to build scalable pipelines for ingesting and preprocessing large vision-language datasets

  • Work with the infrastructure team to optimize model training across large-scale GPU clusters

  • Contribute to publications, internal research documents, and thought leadership within the team and the broader ML community

  • Collaborate with the applied research and business teams on client-specific use cases
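
As a sketch of what an evaluation suite along these lines might look like, the snippet below defines a tiny harness that runs a model callable over registered tasks and reports exact-match accuracy. The task names, examples, and metric are invented placeholders, not Liquid’s internal benchmarks.

```python
# Toy multimodal evaluation harness (illustrative only).
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Example:
    prompt: str       # e.g. a question about an image or document
    image_path: str   # path to the associated image (unused by the dummy model)
    answer: str


# Task registry: task name -> labelled examples (hypothetical samples).
TASKS: Dict[str, List[Example]] = {
    "doc_vqa_sample": [Example("What is the invoice total?", "inv_001.png", "$42.00")],
    "chart_qa_sample": [Example("Which year had peak revenue?", "chart_07.png", "2021")],
}


def evaluate(model: Callable[[str, str], str]) -> Dict[str, float]:
    """Exact-match accuracy per task; a real suite would use task-specific metrics."""
    results = {}
    for name, examples in TASKS.items():
        correct = sum(model(ex.prompt, ex.image_path).strip() == ex.answer for ex in examples)
        results[name] = correct / len(examples)
    return results


if __name__ == "__main__":
    # Dummy model that always answers "$42.00", just to exercise the harness.
    print(evaluate(lambda prompt, image_path: "$42.00"))
```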

What You'll Gain:
  • A front-row seat as we build some of the most capable Vision Language Models

  • Access to world-class infrastructure, a fast-moving research team, and deep collaboration across ML, systems, and product

  • The opportunity to shape multimodal foundation model research with both scientific rigor and real-world impact

About Liquid AI

Spun out of MIT CSAIL, we’re a foundation model company headquartered in Boston. Our mission is to build capable and efficient general-purpose AI systems at every scale—from phones and vehicles to enterprise servers and embedded chips. Our models are designed to run where others stall: on CPUs, with low latency, minimal memory, and maximum reliability. We’re already partnering with global enterprises across consumer electronics, automotive, life sciences, and financial services. And we’re just getting started.

Top Skills

DeepSpeed
FSDP
Machine Learning
Megatron-LM
PyTorch
Vision Language Models

The Company
Cambridge, Massachusetts
50 Employees
Year Founded: 2023

What We Do

Our mission is to build capable and efficient general-purpose AI systems at every scale
