Our mission is to build the next generation of AI: ubiquitous, interactive intelligence that runs wherever you are. Today, not even the best models can continuously process and reason over a year-long stream of audio, video and text—1B text tokens, 10B audio tokens and 1T video tokens—let alone do this on-device.
We're pioneering the model architectures that will make this possible. Our founding team met as PhDs at the Stanford AI Lab, where we invented State Space Models (SSMs), a new primitive for training efficient, large-scale foundation models. Our team pairs deep expertise in model innovation and systems engineering with a design-minded product engineering team to build and ship cutting-edge models and experiences.
We're funded by leading investors at Index Ventures and Lightspeed Venture Partners, along with Factory, Conviction, A Star, General Catalyst, SV Angel, Databricks and others. We're fortunate to have the support of many amazing advisors, and 90+ angels across many industries, including the world's foremost experts in AI.
About the Role
The next leap in model intelligence won't come from scale alone; it will come from better post-training and alignment. Cartesia's Post-Training team is developing the methods and systems that make multimodal models truly adaptive, aligned, and grounded in human intent.
As a Researcher on the Post-Training team, you’ll work at the intersection of machine learning research, alignment, and infrastructure, designing new techniques for preference optimization, model evaluation, and feedback-driven learning. You’ll explore how feedback signals can guide models to reason more effectively across modalities, and you’ll build the infrastructure to measure and improve these behaviors at scale.
Your work will directly shape how Cartesia’s foundation models learn, improve, and ultimately connect with people.
Your Impact
Own research initiatives to improve the alignment and capabilities of multimodal models
Develop new post-training methods and evaluation frameworks to measure model improvement
Partner closely with research, product, and platform teams to define best practices for creating specialized models
Implement, debug, and scale experimental systems to ensure reliability and reproducibility across training runs
Translate research findings into production-ready systems that enhance model reasoning, consistency, and human alignment
Qualifications
Deep knowledge of preference optimization and alignment methods, including RLHF and related approaches
Experience designing evaluations and metrics for generative or multimodal models
Strong engineering and debugging skills, with experience building or scaling complex ML systems
Ability to trace and diagnose complex behaviors in model performance across the training and evaluation pipeline
Experience with multimodal model training (e.g., text, audio, or vision-language models)
Contributions to alignment research or open-source projects related to model evaluation or fine-tuning
Background in designing or implementing human-in-the-loop evaluation systems
🏢 We’re an in-person team based out of San Francisco. We love being in the office, hanging out together, and learning from each other every day.
🚢 We ship fast. All of our work is novel and cutting-edge, and execution speed is paramount. We have a high bar, and we don't sacrifice quality or design along the way.
🤝 We support each other. We have an open & inclusive culture that’s focused on giving everyone the resources they need to succeed.
What We Do
Try Sonic at https://play.cartesia.ai and join our Discord at https://discord.com/invite/gAbbHgdyQM.