Billion-dollar funding rounds, dramatic personnel moves and larger-than-life promises are not all that uncommon among artificial intelligence startups, but few companies have attracted more intrigue and speculation than Thinking Machines Lab.
What Is Thinking Machines Lab?
Thinking Machines Lab is an AI startup co-founded by former OpenAI CTO Mira Murati and other OpenAI alumni. After raising $2 billion in its first five months, the company released its first product, Tinker, which helps developers fine-tune models without worrying about the cost or complexities of distributed computing.
Once shrouded in secrecy, the startup has started to define itself in recent months through its extensive research and the launch of its first product, Tinker, which helps developers fine-tune AI models for specific tasks or industries. While other labs race to build bigger models that demand more data and computing power, Thinking Machines has chosen a different tack: building smarter models using more efficient post-training techniques.
The Making of Thinking Machines Lab
Thinking Machines Lab is an AI research and product company led by Mira Murati, who previously served as the chief technology officer at OpenAI, the maker of ChatGPT. When OpenAI’s board of directors fired CEO Sam Altman in November 2023, Murati briefly served as interim CEO until Altman was reinstated days later. She and the company’s chief scientist at the time, Ilya Sutskever, had raised concerns about Altman to the board, but both later signed a petition calling for the board to resign and Altman to be reinstated. Murati left OpenAI about a year later, saying she wanted to explore other opportunities.
In February 2025, she co-founded Thinking Machines Lab with other researchers from OpenAI, including John Schulman, Barrett Zoph, Lilian Weng, Andrew Tulloch and Luke Metz. Altogether, Murati assembled a 30-person team, including top developers and researchers from Meta, Mistral and other AI labs.
Often described as mysterious and enigmatic, Thinking Machines Lab became the subject of intense speculation throughout 2025. With a growing roster of the industry’s top researchers, the company had no trouble raising funding. Within its first five months, it scored a $2 billion seed round at a $12 billion valuation — one of the largest seed rounds in Silicon Valley history. The company is a public benefit corporation, like OpenAI and competitor Anthropic, which created Claude. But unlike those other startups, Murati has voting powers that outweigh the rest of the board of directors, giving her an unusual amount of control over the direction of the company.
Thinking Machines also captured the attention of Meta CEO Mark Zuckerberg, who was playing catch-up after getting a late start in the AI arms race. When Murati declined Zuckerberg’s offer to acquire her company, he attempted to poach her employees. He was particularly keen on luring away Andrew Tulloch, who worked at Meta and OpenAI before co-founding Thinking Machines. After initially turning down a six-year contract worth up to $1.5 billion, Tulloch eventually agreed to join Meta in October 2025. The following month, Soumith Chintala, the co-creator of open-source AI framework PyTorch, left Meta to join Thinking Machines Lab.
Thinking Machines Lab’s Research
Thinking Machines was founded with a promise to build multimodal AI systems that work with people collaboratively, and that can be adapted and customized to every field of work.
“Knowledge of how [frontier AI] systems are trained is concentrated within the top research labs, limiting both the public discourse on AI and people’s abilities to use AI effectively,” the company's website states. “And, despite their potential, these systems remain difficult for people to customize to their specific needs and values.”
Thinking Machines has also pledged to share its research and collaborate with others in the AI community. The company has published research that explains how language model inference can produce more consistent outputs by redesigning GPU kernels for batch invariance, how modular manifolds can optimize neural network performance and the benefits of on-policy distillation as a post-training method.
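The batch-invariance finding comes down to a quirk of floating-point arithmetic: addition of floats is not associative, so when a GPU kernel reduces the same numbers in a different order (which can happen as batch sizes change), the result can differ in its last bits. The sketch below, which is a generic Python illustration and not Thinking Machines' kernel code, shows the underlying effect:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000).astype(np.float32)

# Same values, two reduction orders: a sequential sum versus a
# tiled sum (reduce 100-element chunks, then combine), mimicking
# how a kernel's tiling strategy can shift with batch size.
seq_sum = np.float32(0.0)
for v in x:
    seq_sum += v

chunked = x.reshape(100, 100).sum(axis=1).sum()

# The two results agree to several decimal places but typically
# differ in the low-order bits: same math, different rounding.
print(abs(float(seq_sum) - float(chunked)))
```

Batch-invariant kernels fix the reduction order so that an input produces bit-identical results regardless of what else is in the batch.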
The startup’s researchers have also extensively studied Low-Rank Adaptation (LoRA), an efficient approach to fine-tuning AI models introduced in 2021. Instead of retraining all of a model’s parameters, which is often time- and cost-prohibitive given the compute required, LoRA freezes the model’s original weights and trains a much smaller set of added parameters: pairs of low-rank matrices whose product adjusts the frozen weights. In its study, Thinking Machines found that LoRA, when configured correctly, can rival the performance of full fine-tuning when training on small- to medium-sized datasets. The company now relies on LoRA in its first product, Tinker, which helps researchers and developers fine-tune AI models.
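The core idea behind LoRA can be sketched in a few lines of numpy. This is an illustrative toy, not Tinker's API; the dimensions and the rank `r` are arbitrary choices for the example:

```python
import numpy as np

rng = np.random.default_rng(42)

d_in, d_out, r = 512, 512, 8             # r is the low-rank bottleneck
W = rng.standard_normal((d_out, d_in))   # frozen pretrained weight: never updated

# Trainable LoRA adapter: two small matrices whose product forms a
# rank-r update to W. B starts at zero, so training begins from the
# unmodified base model.
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))
alpha = 16.0                             # scaling hyperparameter

def forward(x):
    # Base output plus the low-rank correction (alpha / r) * B @ A @ x.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
y = forward(x)

# Why this is cheap: compare trainable parameter counts.
full_params = W.size                     # full fine-tuning updates all of W
lora_params = A.size + B.size            # LoRA updates only A and B
print(full_params, lora_params)          # 262144 vs. 8192, a 32x reduction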
What Is Tinker?
In October 2025, Thinking Machines released its first product, Tinker, which streamlines the customization of AI models for specific tasks or fields of study. Users can modify a range of open-source models, including Meta’s Llama, Alibaba’s Qwen, OpenAI’s gpt-oss models, DeepSeek V3.1 and Moonshot AI’s Kimi K2 Thinking.
Frontier models can assist users with all sorts of things, but they often lack the specialized knowledge required for specific domains of work. For example, a medical researcher would need a custom-trained model to analyze the patterns of rare diseases, or a legal team might benefit from a model trained on industry-specific regulations. Tinker allows researchers and developers to customize open source frontier models for their own purposes without having to deal with the cost and complexity of distributed training and infrastructure management.
While Tinker does make fine-tuning more accessible, it still requires a significant amount of machine learning expertise. After choosing a base model, users can tap Tinker’s flexible API to fine-tune it through supervised learning, which trains on labeled examples, or reinforcement learning, which teaches a model through trial and error using reward signals.
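The difference between the two training signals can be shown on a toy policy over three actions. This is a generic numpy illustration of the concepts, not Tinker's API: supervised learning needs a labeled answer, while reinforcement learning needs only a scalar reward:

```python
import numpy as np

rng = np.random.default_rng(0)

def probs(z):
    """Softmax: turn logits into action probabilities."""
    e = np.exp(z - z.max())
    return e / e.sum()

lr = 0.5

# --- Supervised learning: push probability toward a labeled target ---
label = 2                              # the labeled "correct" action
logits_sft = np.zeros(3)
for _ in range(100):
    p = probs(logits_sft)
    grad = p.copy()
    grad[label] -= 1.0                 # gradient of cross-entropy w.r.t. logits
    logits_sft -= lr * grad

sft_choice = int(np.argmax(probs(logits_sft)))

# --- Reinforcement learning: no labels, learn from trial and error ---
reward = np.array([0.0, 1.0, 0.0])     # the environment rewards action 1
logits_rl = np.zeros(3)
for _ in range(500):
    p = probs(logits_rl)
    a = rng.choice(3, p=p)             # sample an action (the "trial")
    grad = -p.copy()
    grad[a] += 1.0                     # gradient of log p[a] w.r.t. logits
    logits_rl += lr * reward[a] * grad # REINFORCE: scale the update by reward

rl_choice = int(np.argmax(probs(logits_rl)))

# The supervised policy learns the labeled action; the RL policy
# discovers the rewarded action without ever seeing a label.
print(sft_choice, rl_choice)
```

Fine-tuning a language model follows the same pattern at scale: supervised fine-tuning trains on prompt-response pairs, while reinforcement learning samples responses and reinforces the ones that score well.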
Tinker has already been used by researchers at Princeton University, Stanford University and the University of California, Berkeley.
What’s Next for Thinking Machines Lab?
With the release of its first product, Thinking Machines Lab is starting to carve out its niche within the highly competitive AI sector. And as it closes in on its one-year anniversary, the company is setting lofty goals for 2026. As of November 2025, Thinking Machines is reportedly seeking an additional $5 billion in funding at a valuation of $50 billion.
John Schulman, the company’s chief scientist, told Cursor’s podcast that Thinking Machines plans to release its own models in 2026. He also said the team will add multimodal capabilities to Tinker, and that it will eventually offer tools that let a wider, less technical audience fine-tune models for their own purposes. If that day comes, it could change how AI is used at work. It remains to be seen whether Thinking Machines’ bet on customization, accessibility and efficiency will pay off. But whichever direction the company takes, one thing’s for sure: the entire AI industry will be watching.
Frequently Asked Questions
Who founded Thinking Machines Lab?
Thinking Machines Lab was co-founded in February 2025 by Mira Murati, John Schulman, Barrett Zoph, Lilian Weng, Andrew Tulloch and Luke Metz — all of whom previously worked at OpenAI and other top AI labs.
What is Tinker and what does it do?
Tinker is Thinking Machines Lab’s first product, launched in October 2025. It lets developers fine-tune AI models for specific tasks or industries without the cost and complexity of traditional distributed training.
How much funding has Thinking Machines Lab raised so far?
Thinking Machines Lab raised $2 billion in its first five months, giving it a $12 billion valuation. The startup is reportedly seeking an additional $5 billion at a $50 billion valuation.
