Figure AI: What We Know About the Humanoid Robotics Company

The robotics company hopes to augment human productivity with its fleet of AI-powered humanoid robots.

Written by Jenny Lyons-Cunha
Published on Nov. 19, 2024
Figure AI’s logo on a screen.
Image: T. Schneider / Shutterstock

Figure AI is a robotics company that develops AI-powered humanoid robots. Its mission is to create the world’s first commercially viable autonomous bipedal robot.

What Is Figure AI? 

Figure AI is an autonomous robotics company that develops AI-powered, bipedal humanoid robots. Figure AI’s humanoids are designed to perform physical tasks in complex environments, like warehouses, factories and in-home settings. 

Figure AI’s vision reaches beyond simple automation; it strives to augment human productivity by creating robots capable of human-level reasoning. Backed by significant investment, Figure AI is working to build robots that operate alongside human teams in multivariate environments. At the heart of this work is Figure 01, the company’s flagship humanoid robot, and its successor, Figure 02, which is designed to transform industries by handling labor-intensive tasks. 

 


Founded in 2022 by Brett Adcock, Figure AI creates bipedal robots that can mimic human learning and movement to interact with real-world environments. Unlike traditional robots, which are built for repetitive tasks in controlled settings, Figure AI’s humanoids are designed for versatility. The company’s core philosophy centers on building machines that embody the adaptability of a human worker.

According to its website, Figure AI strives to remedy rising labor shortages in the United States by deploying a fleet of autonomous humanoids to shore up a shrinking workforce and fill “jobs that humans don’t want to perform.”

“The world was built for humans,” Adcock said in an interview with TIME Magazine. “If we can create a robot that interacts with it in the same way, we can automate a huge range of tasks.”

Beyond supporting industries like shipping, retail and manufacturing, Figure AI envisions integrating its humanoids into corporate labor, in-home assistance, elderly care roles and even space exploration.

 

What Is Figure AI Building?

Figure AI is developing embodied AI technology and multipurpose robots designed to reason like their human counterparts.


Figure 01

Figure 01 is Figure AI’s first humanoid robot, designed for environments where human labor is in short supply or tasks are physically demanding and dangerous. Standing at 5 feet 8 inches and weighing around 130 pounds, Figure 01 is battery-powered and built to handle tasks like walking, lifting and moving objects, and making coffee with a Keurig machine.

Figure 01’s technology includes cameras, LiDAR and tactile sensors. Its movements are driven by a complex system of joints and actuators that convert electrical energy into mechanical force to approximate human dexterity. These movement mechanisms allow it to exercise fine motor skills and perform repetitive actions without fatigue.

 


Figure 02 

Unveiled in August 2024, Figure 02 builds upon the capabilities and aesthetics of its predecessor. Weighing nearly 155 pounds, Figure 02 appears more polished, with previously visible wires and battery packs hidden behind metal panels.

Figure 02 features Figure AI’s fourth-generation hand design, six-camera computer vision system, improved computing power, speech-to-speech reasoning and a 2.25 kWh battery, which doubles the battery life of its forerunner. Figure 02 can carry up to 25 pounds in each hand and operate for up to 10 hours.

In partnership with BMW, Figure 02 was deployed for testing at one of the automotive company’s factories, where it was able to work nearly around the clock, seven days a week. During its beta run, Figure 02 successfully learned to complete various assembly tasks, like inserting sheet metal parts into pre-set fixtures.

Figure 02 is on the leading edge of integrating end-to-end learning models into humanoid robots, said Brendan Englot, director of the Stevens Institute for Artificial Intelligence.

“We haven’t seen anyone prompt a humanoid with language the way that Figure AI had Figure 02 perform tasks that incorporate language and vision and action,” Englot told Built In.

 


Embodied Artificial Intelligence 

To bring its fleet of humanoids to life, Figure AI is building a data engine that powers embodied artificial intelligence. Embodied AI is a system that interacts with and learns from its environments using sensors, natural language processing and machine learning.

Figure AI’s neural network model learns in cycles: Using training data, the model powers a fleet of robots, which provide terabytes of data from interactions with their environment. This wealth of data is then used to retrain the neural network, and the cycle repeats.
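In code, that flywheel might be sketched roughly as follows, where the model, the robot fleet and their methods (train_on, run_and_log) are hypothetical placeholders rather than Figure AI’s actual software:

```python
# Rough sketch of a robot-fleet "data engine": train a model, deploy it to the
# fleet, collect the interaction logs, and fold them into the next training run.
# All names here are hypothetical placeholders, not Figure AI's real software.

def run_data_engine(model, fleet, dataset, cycles):
    for _ in range(cycles):
        # 1. Train (or fine-tune) the neural network on everything gathered so far.
        model.train_on(dataset)

        # 2. Deploy the updated model to every robot and let it act in the world.
        new_logs = [robot.run_and_log(model) for robot in fleet]

        # 3. The robots' sensor and action logs become new training data.
        dataset.extend(new_logs)

    return model, dataset
```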

Technologies like IoT allow robots like Figure 01 and 02 to learn in real time, said Arunkumar Thirunagalingam, a senior manager of data and technical operations at McKesson Corporation: “Cloud computing and edge computing solutions allow robots to make real-time decisions and adapt on the go.”

Figure AI uses a suite of AI systems to animate its robots: 

Speech-to-Speech Reasoning

Powered by its partnership with OpenAI, Figure AI’s speech-to-speech reasoning technology allows its robots to converse with the humans around them. When a humanoid receives speech input, such as a greeting, command or question, the audio is processed as text by OpenAI’s model. The model generates a text response, which the robot converts into a spoken reply.
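In rough terms, each exchange is a three-stage pipeline: transcribe the incoming audio, generate a reply with a language model, then synthesize speech. The Python sketch below uses stubbed placeholder functions to show the flow; it is not Figure AI’s or OpenAI’s actual interface.

```python
# Toy speech-to-speech loop: audio in, text reasoning in the middle, audio out.
# Every function body here is a stub standing in for a real model.

def transcribe(audio: bytes) -> str:
    """Placeholder speech recognition: a real system runs an ASR model."""
    return "hand me the apple"

def generate_reply(prompt: str) -> str:
    """Placeholder reasoning step: a real system calls a large language model."""
    return f"Sure, I can {prompt.replace('me', 'you')}."

def synthesize(text: str) -> bytes:
    """Placeholder text-to-speech: a real system returns generated audio."""
    return text.encode("utf-8")

def speech_to_speech_turn(audio_in: bytes) -> bytes:
    user_text = transcribe(audio_in)        # speech -> text
    reply_text = generate_reply(user_text)  # text -> text
    return synthesize(reply_text)           # text -> speech

print(speech_to_speech_turn(b"<microphone audio>"))  # b'Sure, I can hand you the apple.'
```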

Visual-Language Model

Beyond conversation, spoken commands spark behavior selection, powered in part by a visual-language model. VLMs are multimodal AI systems that can simultaneously interpret language and visual input, like facial expressions, gestures and physical objects. This allows humanoids to perceive the world around them and act on spoken commands.
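A simplified way to picture behavior selection: the model scores each of the robot’s trained skills against the current camera frame and the spoken command, and the highest-scoring skill is executed. The snippet below is a toy illustration with a stand-in scoring function, not Figure AI’s actual system.

```python
# Toy behavior selection: score every known skill against the command and the
# camera frame, then pick the best match. score_with_vlm() is a stand-in for a
# real visual-language model.

SKILLS = ["pick_up_object", "hand_object_to_person", "place_in_bin", "wave"]

def score_with_vlm(image, command: str, skill: str) -> float:
    """Stand-in relevance score; a real VLM would reason over pixels and text."""
    return float(skill.replace("_", " ") in command.lower())

def select_behavior(image, command: str) -> str:
    scores = {skill: score_with_vlm(image, command, skill) for skill in SKILLS}
    return max(scores, key=scores.get)  # the highest-scoring skill wins

print(select_behavior(image=None, command="Please hand object to person"))
# -> hand_object_to_person
```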

End-to-End Action Learning

Figure AI uses an end-to-end action model to convert speech and visual input into action. These complex learning models transform multiple modes of data into a learned behavior, which triggers motors that set the humanoid into action. In short, end-to-end action models empower humanoids to hear voice commands and perform tasks they’ve been trained to execute.
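Conceptually, an end-to-end policy is a single network that fuses vision, language and body-state features and outputs motor targets. The PyTorch sketch below is an illustrative toy with made-up feature sizes and joint counts, not Figure AI’s architecture:

```python
# Toy end-to-end policy: fused vision + language + proprioception features go
# in, one target position per joint comes out. All dimensions are illustrative.

import torch
import torch.nn as nn

class EndToEndPolicy(nn.Module):
    def __init__(self, vision_dim=512, text_dim=512, proprio_dim=32, n_joints=24):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(vision_dim + text_dim + proprio_dim, 256),
            nn.ReLU(),
            nn.Linear(256, n_joints),  # one command per actuated joint
        )

    def forward(self, vision_feat, text_feat, proprio):
        fused = torch.cat([vision_feat, text_feat, proprio], dim=-1)
        return self.net(fused)

policy = EndToEndPolicy()
joint_targets = policy(
    torch.randn(1, 512),  # features from the camera encoder
    torch.randn(1, 512),  # features from the language command encoder
    torch.randn(1, 32),   # current joint angles and velocities
)
print(joint_targets.shape)  # torch.Size([1, 24])
```

In a running robot, a loop like this emits fresh motor targets many times per second.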

 

Figure AI’s Competitors 

Multiple companies are competing with Figure AI in the development of humanoid robots, including Tesla, Boston Dynamics and Agility Robotics.

Tesla

The Tesla Bot, also known as Optimus, is a humanoid robot that Tesla announced in 2021. It is designed to assist with repetitive, unsafe or otherwise mundane tasks in both industrial and household settings. Tesla envisions it as a general-purpose robot that helps with chores, factory work and delicate tasks that require a human touch. Optimus was included in Tesla’s 2024 “We, Robot” demonstration, though it was later revealed to have been remotely controlled by a human operator.  

Boston Dynamics 

Boston Dynamics is a robotics company known for its research and development of humanoid agility and balance, especially in rough or unpredictable terrain. Unveiled in 2013, its bipedal robot Atlas is one of the most capable humanoid robots in the industry. In a 2024 demonstration, Atlas was shown performing tasks in a simulated factory setting without human intervention.

Agility Robotics

Agility Robotics develops bipedal robots capable of operating in human-centric spaces. Founded in 2015 as a spinoff of Oregon State University, Agility Robotics creates multipurpose robots that support warehouse logistics, delivery and other tasks where traditional wheeled robots may struggle. Digit, Agility Robotics’ most advanced bipedal robot, is designed for logistics work and is being tested and used by companies like Amazon and GXO Logistics.

 

Challenges of Developing General-Purpose Humanoids  

Creating a Robot Foundation Model

Figure AI is striving to create a general-purpose humanoid driven by a single AI model that can be applied to multiple use cases. Developing such a robot foundation model is a shared goal across the humanoid industry, as it would increase the adaptability of humanoid robots. In practice, though, building one poses significant challenges that grow with the complexity of humanoid applications. As such, most robotics companies focus on a single niche, such as manufacturing, healthcare or retail.

Gathering Embodied Data

Unlike models like ChatGPT and Gemini, which are trained on vast swaths of the internet, humanoid robots require specialized training data for their complex physical capabilities, and much of that data hasn’t yet been created.

“There’s a scarcity of data that trains models to do niche tasks,” said Englot, who provided the example of kitchen tasks, which can be trained via a wealth of YouTube videos, versus something like disaster search-and-rescue, of which there is practically no content. 

To train humanoids in controlled environments, developers use a technique called Sim2Real, in which robots first learn in simulated digital environments before transferring those skills to physical hardware. It is an effective way to overcome the data scarcity Englot described.
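A common ingredient in Sim2Real pipelines is domain randomization: the simulator’s physics parameters are perturbed every episode so the learned policy doesn’t overfit to one idealized world and transfers more robustly to real hardware. The sketch below uses hypothetical policy and simulator objects, not Figure AI’s training stack.

```python
# Sketch of Sim2Real training with domain randomization. The policy and
# simulator objects are hypothetical; only the overall loop is illustrated.

import random

def randomized_sim_params():
    """Draw a new 'world' each episode so the policy can't overfit to one."""
    return {
        "floor_friction": random.uniform(0.4, 1.2),
        "payload_mass_kg": random.uniform(0.0, 10.0),
        "motor_latency_ms": random.uniform(5.0, 40.0),
    }

def train_in_simulation(policy, simulator, episodes):
    for _ in range(episodes):
        params = randomized_sim_params()
        rollout = simulator.run(policy, params)  # simulated experience
        policy.update(rollout)                   # learn from it
    return policy  # later evaluated and fine-tuned on real hardware
```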

Andrea Thomaz, CEO and founder of Diligent Robotics, anticipates that robotics companies will continue to forge collaborations like the one between Figure AI and OpenAI to amplify the data generated by Sim2Real training and real-world interactions, accelerating the progress of embodied AI. Figure AI is also supported by strategic collaborations with Nvidia and Microsoft.

“Very few large databases of embodied data exist,” Thomaz said. “Anyone who wants to build a robot foundation model is going to collect as much data as possible — robotics companies are starting to collaborate to create the biggest and best embodied datasets.” 

Navigating Complex Settings

Building humanoids that can perform well in unstructured settings — like a home, as opposed to a factory assembly line — requires the combined efforts of artificial intelligence, machine learning, sensor technology and mechanical design.

“Current development of humanoids is ongoing in structured environments like warehouses,” Englot said. “Once you advance into disaster robotics or into people’s homes, there will be cluttered, changing environments to navigate and people to interact with.” 

Real-time environmental sensing is critical to a humanoid’s ability to navigate its environment: “Using sensors like [light detection], cameras and ultrasonic devices, robots can create a 3D map of their surroundings, which is crucial for object recognition and collision avoidance,” Thirunagalingam said.
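One simple way to turn raw range readings into such a map is an occupancy grid, which divides space into cells and marks each as free or occupied. The toy 2D version below illustrates the idea; production humanoids build far richer 3D maps.

```python
# Toy 2D occupancy grid: obstacle points from range sensors mark cells as
# occupied, and path planning avoids those cells.

def build_occupancy_grid(points, cell_size=0.1, width=50, depth=50):
    grid = [[0] * width for _ in range(depth)]  # 0 = free, 1 = occupied
    for x, y in points:                         # obstacle points in meters
        col, row = int(x / cell_size), int(y / cell_size)
        if 0 <= row < depth and 0 <= col < width:
            grid[row][col] = 1
    return grid

def is_path_clear(grid, cells):
    """A planned path is clear if none of the cells it crosses are occupied."""
    return all(grid[row][col] == 0 for row, col in cells)

grid = build_occupancy_grid([(0.25, 0.30), (1.40, 2.00)])
print(is_path_clear(grid, [(0, 0), (10, 10)]))  # True: those cells are free
```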

Mobility and Balance

Another challenge Figure AI faces is maintaining the robot’s stability and balance, especially when moving on uneven surfaces or carrying heavy objects. Creating a walking bipedal humanoid versus a rolling model introduces a significant layer of complexity. 
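A toy way to see why balance is a control problem as much as a hardware problem: even a simplified “inverted pendulum” stays upright only if a feedback controller keeps measuring its lean and commanding a corrective torque. The generic proportional-derivative step below (not Figure AI’s controller) shows a single control tick; a real biped runs loops like this across many joints, hundreds of times per second, while walking and carrying loads.

```python
# Toy proportional-derivative (PD) balance step for a simplified inverted
# pendulum: measure the lean, command a torque that pushes back toward upright.

def pd_balance_step(lean_angle, lean_rate, kp=80.0, kd=12.0):
    """Corrective torque proportional to how far and how fast the body is leaning."""
    return -(kp * lean_angle + kd * lean_rate)

# One control tick for a body leaning 0.1 rad and tipping at 0.2 rad/s:
torque = pd_balance_step(lean_angle=0.1, lean_rate=0.2)
print(f"corrective torque: {torque:.1f} N*m")  # -10.4 N*m, pushing back upright
```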

Companies that have focused on physical capabilities, like Boston Dynamics, have created extremely mobile humanoids. But applying AI technology to such a wide range of physical possibilities comes with challenges, Englot said. 

“It’s challenging to develop a robot that can both act very reliably in the physical world and move autonomously,” he added. 

Safety Concerns

To avoid the dangerous implications of glitches like AI hallucinations in a physical setting, Figure AI will likely need to develop controls for each use case.

“The physical nature of humanoids means there’s an inherent need to think even more about safety,” Thomaz said. “Companies like Figure AI will need to put the guard rails around the robot being created and make sure that it's going to adhere to safety protocols and policies.”

Ethical Considerations

With its vision of a humanoid-augmented workforce, Figure AI must consider the implications of designing robots to replace human workers in certain settings. 

“Companies like Figure AI have a responsibility to think about the appropriate applications for its technology,” Thomaz said. “There is a pretty clear distinction between the tasks in the workplace that can be handled by humanoids and those that allow people to add value.” 

Like Thomaz, Figure AI anticipates the rise of human-robot teams in the workplace rather than an erasure of human jobs.

“These robots can eliminate the need for unsafe and undesirable jobs — ultimately allowing us to live happier, more purposeful lives,” wrote Adcock on Figure AI’s website.

Frequently Asked Questions

What is Figure AI?

Figure AI is a robotics company developing general-purpose, bipedal humanoid robots to automate physical tasks in industries like manufacturing, warehousing and retail.

Who founded Figure AI?

Figure AI was founded by Brett Adcock, an entrepreneur known for co-founding Archer Aviation, an eVTOL company.

How much is Figure AI worth?

Figure AI, which has partnered with OpenAI, Microsoft and Nvidia, is valued at $2.6 billion.

How much do Figure AI’s robots cost?

Figure 01 and Figure 02 are estimated to cost upwards of $150,000 per unit.

Figure AI did not respond to requests for comment.
