‘AI Brains’ Are Coming for Blue Collar Work — Are We Ready?

Humanoid robots are about to level up with “AI brains” — AI systems powered by advanced models that can learn to adapt to new situations. They’re still a work in progress, but they could signal that the skilled trades aren’t so AI-proof after all.

Written by Matthew Urwin
Published on Feb. 23, 2026
Image: A human brain split in two, with one half organic and the other half mechanical parts. | Shutterstock
Reviewed by Ellen Glover | Feb 23, 2026
Summary: Humanoid robots are getting smarter with “AI brains,” or AI systems that use advanced AI models to learn from their experiences and adapt to new tasks. Several startups are pushing the boundaries of these brains, raising questions around just how long blue-collar work can remain AI-proof.

While artificial intelligence has begun to reshape the job market, its effects have been felt mainly by white-collar tech workers. In response, some aspiring professionals have turned to the skilled trades to pursue careers that are more resistant to AI-driven automation. Blue-collar roles may soon be under siege as well, though, with the rise of so-called “AI brains” — advanced AI models that can supplement their training data with information gathered through real-world interactions, allowing them to better adapt to new situations.

AI Brains, Explained

AI brains refer to the AI systems integrated into humanoid robots that control their hardware to coordinate movement. At the heart of these systems are foundation models, which are pre-trained on massive amounts of data and can be tailored to specific applications. These models can also gather real-time data through trials and simulations, generally applying what they learn to unfamiliar tasks and scenarios in the real world.

Not only are these models enhancing tools like AI agents — systems that can complete complex, multi-step tasks on their own — but they could also serve as the metaphorical and literal brains driving the actions of humanoid robots and other machines. As more robotics companies continue to explore the possibilities of these models, workers may need to start considering a future where not even manual labor is entirely safe from the impacts of AI.

More on AI Impacting Jobs: Is Your Job AI-Proof? What to Know About AI Taking Over Jobs.


What Exactly Is an “AI Brain”? 

In a robotics context, the concept of an AI brain seems simple enough. According to Boris Yangel, Head of AI at robotics startup Humanoid, the “body” of a robot refers to its hardware components, including its legs (or wheels), joints and actuators, while the “brain” of the robot refers to the AI system that is integrated into this hardware and controls these parts to orchestrate physical movements. 

“The AI brain is designed to replicate some of the core functions of a human brain,” Yangel told Built In. “It perceives the environment, interprets what is happening and translates that understanding into actions based on the task at hand. It then sends commands to the body, such as moving an arm, picking up an object, placing it somewhere, or stopping.”

However, this brain doesn’t exactly resemble a human brain with its networks of neurons sending electrical signals throughout the body. It’s actually a type of AI model known as a foundation model, which is pre-trained on massive volumes of data and can be fine-tuned for specific use cases. Among the types of foundation models available, a popular choice is vision-language-action (VLA) models, which can learn from visual data, comprehend human language and autonomously decide on a course of action.
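To make that idea concrete, here is a minimal, hypothetical sketch of the interface such a model exposes: an image and a natural language instruction go in, and a low-level action comes out. The `ToyVLAPolicy` class and its rule-based “inference” are illustrative stand-ins, not any company’s actual model.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Action:
    """A low-level command for the robot's actuators."""
    joint_targets: List[float]  # target joint angles, in radians
    gripper_closed: bool

class ToyVLAPolicy:
    """Hypothetical stand-in for a vision-language-action (VLA) model.

    A real VLA model is a large neural network; here the "inference"
    is a hard-coded rule so the control flow is easy to follow.
    """

    def predict(self, image, instruction: str) -> Action:
        # A real model would fuse visual features with a text embedding;
        # this toy version only inspects the instruction.
        if "pick" in instruction.lower():
            return Action(joint_targets=[0.4, -0.2, 0.9], gripper_closed=True)
        return Action(joint_targets=[0.0, 0.0, 0.0], gripper_closed=False)

policy = ToyVLAPolicy()
frame = [[0] * 64 for _ in range(64)]  # placeholder camera frame
print(policy.predict(frame, "Pick up the red block"))
```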

But robotics companies don’t rely on pre-training alone to fuel the performance of foundation models. They also employ a technique known as in-context learning, where a robot attempts a task and the outcome of that trial is folded into the natural language prompt guiding the next one. In addition to real-world data, companies may run simulations to generate synthetic data. Together, these methods enable robots to essentially learn from experience, remembering past mistakes and improving with each attempt.
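As a rough illustration of that prompt-based feedback loop, the sketch below appends each trial’s outcome to the prompt for the next attempt. The `run_trial` function is a made-up stand-in for a real robot rollout, hard-coded to succeed on the third try so the loop’s behavior is visible.

```python
from typing import Tuple

def run_trial(prompt: str, attempt: int) -> Tuple[bool, str]:
    """Made-up stand-in for a real robot rollout.

    Succeeds on the third attempt so the loop's behavior is visible.
    """
    success = attempt >= 3
    note = "object placed on shelf" if success else "gripper slipped"
    return success, note

# Each trial's outcome is appended to the natural language prompt,
# letting the model "remember" past mistakes without weight updates.
prompt = "Task: place the cup on the shelf."
for attempt in range(1, 6):
    success, note = run_trial(prompt, attempt)
    prompt += f"\nAttempt {attempt}: {note}."
    if success:
        break

print(prompt)
```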

Supplementing pre-training with real-world interactions shifts the focus from basic memorization to more generalized adaptation. Companies are then designing foundation models that are flexible enough to not only master tasks or scenarios they never saw during training, but also operate various types of physical forms, coordinate the actions of multiple robots simultaneously and potentially lay the groundwork for artificial general intelligence — AI that can think and learn like humans.  


Players to Watch in This Space 

Although AI brains may share some common characteristics, different companies have their own processes for developing and deploying them. Here’s what to know about several leaders in this niche and their approaches to building the models that will become the brains behind the next generation of robots. 

Skild AI

Based in Pittsburgh, Skild AI raked in another $1.4 billion in its latest funding round, pushing its valuation past $14 billion. Alongside its successful Series C funding, Skild announced the “industry’s first unified robotics foundation model,” called the Skild Brain. It was pre-trained on data from simulations and internet videos, and it can continue to collect data post-training while being controlled remotely or learning on its own in the real world. As a result, the Skild Brain can operate types of robots it never encountered during training.

This development is the culmination of Skild’s efforts to create what it calls the “omni-bodied brain,” which involved training its model on more than 100,000 robotic forms through simulations. The model can then adapt to a range of scenarios it never saw during training, such as operating a robot after its legs are broken or its wheels are jammed. 
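One way to picture an “omni-bodied” policy is a single function conditioned on a description of the robot’s body, so the same model can drive very different hardware. The sketch below is a toy illustration of that general idea, using assumed names like `Embodiment` and `omni_policy`; it is not Skild’s actual architecture.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Embodiment:
    """Describes a robot body so one policy can shape its outputs to it."""
    name: str
    num_joints: int
    has_wheels: bool

def omni_policy(observation: List[float], body: Embodiment) -> Dict[str, List[float]]:
    """Hypothetical omni-bodied policy: same function, any body.

    A real model would learn this mapping from many simulated forms;
    here we simply size the commands to the given embodiment.
    """
    commands = {"joints": [0.1] * body.num_joints}
    if body.has_wheels:
        commands["wheel_velocities"] = [0.5, 0.5]
    return commands

quadruped = Embodiment("quadruped", num_joints=12, has_wheels=False)
rover = Embodiment("rover", num_joints=4, has_wheels=True)
obs = [0.0] * 16  # placeholder sensor reading
print(omni_policy(obs, quadruped))
print(omni_policy(obs, rover))
```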

Physical Intelligence

Physical Intelligence is still in its infancy, but it’s picking up steam quickly. After raising $600 million near the end of 2025, the company has surpassed $1 billion in funding thanks to investors like OpenAI, Alphabet’s growth fund CapitalG and tech mogul Jeff Bezos. 

According to its website, Physical Intelligence is focused on “bringing general-purpose AI into the physical world.” To achieve this, the company continues to refine a VLA model. The process involves testing the model in diverse settings and compiling data from these demonstrations, as well as gathering data through other means like synthetic data.

By training its model on a variety of data sources, Physical Intelligence hopes to design a VLA model that can excel at operating different physical forms to complete various tasks, learning to adapt on the fly in unfamiliar situations. “Think of it like ChatGPT, but for robots,” Sergey Levine, a co-founder of Physical Intelligence, told TechCrunch.

FieldAI

Although FieldAI was founded in 2023, the team’s projects date back to 2016, when they built AI-powered robots to support NASA research. This experience has earned the confidence of investors like Bill Gates, Jeff Bezos and Nvidia, allowing the startup to raise enough funds to hit a $2 billion valuation back in August 2025. 

FieldAI is currently working on foundation models that rely less on large training data sets and more on learning from experience. They’re still trained on multimodal inputs like visual information, written text and LiDAR, and possess a general awareness of safety. However, they collect even more data from real-world deployments, making them dynamic enough to power multiple AI agents or robots that can navigate new situations in complex environments like construction sites and industrial settings.

Humanoid

Newcomer Humanoid has already turned heads with its Alpha humanoid robot, which it managed to build in five months and train to walk in only two days. The company ultimately aims to make humanoid robots “commercially scalable,” and it could receive an additional $200 million in funding to reach its goal. 

This speculation comes on the heels of Humanoid announcing KinetIQ, an AI system that can simultaneously operate an entire fleet of humanoid robots possessing different forms and capabilities — all with a single AI model. The system is divided into layers, including two that use agentic AI and an omnimodal language model to collectively manage the robots, process their surroundings and break down instructions into tasks. A VLA neural network then coordinates the movements of each robot, while a base layer supports body control. 

Under this setup, data gathered from one robot can be used to inform the actions of the other robots, helping the entire fleet improve over time. Robots can then learn from one another and work as a coherent team within grocery, retail, manufacturing and logistics environments.
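A loose sketch of this layered, fleet-wide setup might look like the following, where a top planning layer splits one instruction into per-robot tasks and a lower layer issues each robot’s command. The function names and the comma-splitting “planner” here are assumptions for illustration, not Humanoid’s actual KinetIQ implementation.

```python
from typing import Dict, List

def plan_tasks(instruction: str) -> List[str]:
    """Top layer: split a fleet-level instruction into per-robot tasks.

    Stands in for the agentic planning layers; a real system would use
    a language model here rather than splitting on commas.
    """
    return [step.strip() for step in instruction.split(",")]

def control_robot(robot_id: str, task: str) -> Dict[str, str]:
    """Middle layer: issue one robot's motion command, standing in for
    the VLA network that coordinates each robot's movements."""
    return {"robot": robot_id, "task": task, "command": f"execute:{task}"}

fleet = ["unit-01", "unit-02", "unit-03"]
instruction = "restock shelf 4, collect empty totes, sort returns bin"

# One model plans once; each robot receives its own slice of the plan,
# and experience from any robot could feed back into the shared model.
for robot, task in zip(fleet, plan_tasks(instruction)):
    print(control_robot(robot, task))
```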

Why It’s Still Too Early to Panic: The Job Market Might Be a Mess, But Don’t Blame AI Just Yet


Bringing AI Into the Real World Remains a Challenge     

Equipping robots with AI brains is just the latest effort to achieve physical AI — AI embedded into hardware, enabling machines to perceive and interact with their surroundings. Behind this push is the belief that computers and phones confine AI to on-screen interactions, limiting what models can do in the physical world. As a result, AI companies are exploring new hardware form factors to solve this problem.

Besides robots, Tesla, Waymo and Nvidia are investing in autonomous vehicles, and Apple and OpenAI are experimenting with unprecedented AI-first devices. These forms are designed to allow AI models to directly engage with their environments, compiling real-time data to learn from and adapt to new situations. Some startups are going a step further, developing world models that already possess an understanding of how the physical world works. 

However, all these approaches share the same shortcomings. For one, these models require extensive amounts of real-time data, which is not always readily available. Companies may compensate by introducing synthetic data, but models are still susceptible to data errors and hallucinations. On top of these issues, AI brains are being asked to do the seemingly impossible: mimicking the adaptability that comes naturally to the human brain.

“The challenge comes from the fact that biological brains are the result of millions of years of evolution,” Yangel said. “Recreating even a small part of this capability through engineering is extremely difficult, especially when the system must continuously adapt to a dynamic, unpredictable environment.”

More Ways AI Is Reshaping Society: 5 Ways AI Will Change Your Life in 2026


What Do AI Brains Mean for Blue-Collar Workers?

Robotics companies remain undeterred by the challenges surrounding AI brains, especially as their Chinese counterparts begin entering the field to strengthen their grip on the humanoid sector. As the AI race heats up, the question becomes what blue-collar work could look like if AI-powered robots start shouldering the heavy lifting in places like factories, construction sites and warehouses. 

Considering how AI functions in today’s workforce, it’s more likely that human labor will evolve rather than be eliminated altogether. For example, AI agents can complete basic tasks, but they struggle with more complex problems and often produce ineffective outputs without proper guidance. In response, white-collar tech workers could pivot to more strategic roles, overseeing teams of agents that perform the tasks rather than doing the work themselves.

Yangel suggests something similar might happen to blue-collar work, too, with humanoid robots taking over simple tasks like handling parts along an assembly line, packing items and sorting inventory. Robots directed by AI brains may then require tradespeople to manage them, and the shift could create new roles that mirror the rise of the machine learning engineers, AI engineers and AI trainers who help maintain today’s AI models. It’s an uncertain future, but it’s still a ways off from the jobless landscape some fear could be AI’s endgame.

“This shift will inevitably change the nature of work. Some roles may be reduced or transformed, and parts of the economy will adapt,” Yangel said. “The deployment of robotic systems at scale will create new types of work. Fleets of robots will need to be installed, supervised, maintained and continuously improved, leading to new technical and operational roles that do not exist today.”

Frequently Asked Questions

What is an “AI brain”?

An “AI brain” is a system that’s integrated into a humanoid robot and that coordinates the robot’s parts to execute physical movements. AI brains are powered by foundation models, which are pre-trained on immense amounts of data and can be fine-tuned for particular use cases. Robotics companies are also training these models on real-time data through trials and simulations, improving their ability to master tasks they were never trained to perform.

What is physical AI?

Physical AI refers to a category of artificial intelligence that involves embedding AI into physical hardware, instilling in machines the ability to perceive and engage with their environments. Approaches to achieve physical AI include autonomous vehicles, AI-first devices and now robots powered by AI brains.

Will AI brains replace blue-collar workers?

Rather than completely eliminate blue-collar jobs, AI brains and the robots they power could change the nature of these roles. As humanoid robots take on basic chores like sorting and lifting objects, roles may become more strategic, where workers make decisions and guide robots that perform tasks accordingly. AI-powered robots may even create brand-new blue-collar positions that involve building, maintaining and supervising these robots.
