OpenAI’s $6.5 billion purchase of Jony Ive’s io earlier this year was largely framed by industry observers as a design coup. In reality, it was something more telling: an admission that the real choke point for AI isn’t algorithms or computing power, but the devices the technology still has to live in.
Modern computing still centers on laptops and smartphones, devices that are fast becoming relics of a point-and-click era. They were built for apps, menus and manual input, not for autonomous systems that can see and act on the world in real time.
Yet the industry continues to focus on infrastructure breakthroughs, frontier research and ever-larger LLMs, as if scaling model size alone will somehow produce the leap forward everyone is waiting for. The more elusive, and arguably more important, question is what form factor will enable AI to fully integrate into people’s daily lives.
This is the hidden bottleneck holding back AI’s progress. If it’s not fixed, the next generation of intelligence will remain trapped in the wrong body.
This puzzle feels quite similar to the years before the iPhone. Few people could predict, even shortly after its launch, that this glass-and-metal slab would redefine what a phone was and how billions of people interacted with the internet. This is why it’s especially exciting that Jony Ive, the very designer who helped usher in that previous transformation, is now working with Sam Altman to solve AI’s embodiment problem.
What Is the AI ‘Body’ Problem?
AI’s potential is hindered by outdated devices like smartphones and laptops, which are ill-suited for real-time interaction. To fully integrate AI into daily life, new hardware designed for embodied intelligence is essential, enabling seamless, context-aware interactions and edge computing.
Running AI on Old Devices Is a Structural Problem
Consider AI models like GPT-5 and Gemini, which are already multimodal: they can see and hear, converse fluently and reason in more than one way. Yet making them accessible only through a screen reduces their functionality to that of another chatbot window or productivity plug-in.
OpenAI’s io acquisition, especially when viewed alongside Sam Altman’s earlier investment in Humane’s AI Pin, is probably a sign that these companies know they can’t scale AI adoption without new physical vessels.
Although it lacks a screen, the Humane product still depends on voice commands and gestures to work, and it still struggles with hallucination and contextual awareness. Changing how input is delivered to AI doesn’t matter much if developers remain bent on forcing probabilistic intelligence through deterministic input channels.
This is because AI thinks in probabilities. It guesses and adapts based on context, not fixed answers. So, if developers keep forcing AI to work only through those rigid channels, it can’t show its full intelligence, no matter how advanced the models are. These interfaces require AI to conform to human speed, effectively undermining its ability to anticipate, act proactively or smoothly incorporate input from the environment.
To have true fluidity, hardware must be built from the ground up not just to run commands on demand but to maintain a constant, low-friction presence. In some cases, this means integrating decentralized AI (DeAI) frameworks directly into the hardware, allowing intelligence to operate and evolve at the edge without relying solely on centralized servers.
The market needs interfaces that let AI see, understand and assist people contextually, without requiring user initiation. This is how the technology successfully moves beyond the request-response tyranny dominating the space today.
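To make that contrast concrete, below is a minimal, purely illustrative sketch in Python of the inversion being argued for: rather than a chatbot blocking on user prompts, an ambient loop continuously ingests sensor events, screens them with a cheap always-on model at the edge and speaks up only when context warrants it. Every name here (SensorEvent, poll_sensors, local_inference) is a hypothetical stand-in, not any vendor’s API.

```python
import random
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class SensorEvent:
    """A unit of ambient context: a camera frame, audio snippet, motion reading, etc."""
    modality: str
    payload: str
    salience: float  # rough estimate of how likely this event matters to the user

def poll_sensors() -> SensorEvent:
    """Stand-in for real sensor fusion; here we fabricate random events."""
    modality = random.choice(["vision", "audio", "motion"])
    return SensorEvent(modality, f"{modality}-frame", random.random())

def local_inference(event: SensorEvent) -> Optional[str]:
    """Stand-in for a small always-on edge model: cheap enough to run constantly,
    and whose main job is screening out the noise."""
    if event.salience < 0.8:
        return None  # most ambient context warrants no response at all
    return f"noticed {event.payload}, preparing a suggestion"

def ambient_loop(cycles: int = 20) -> None:
    """Event-driven loop: the device watches continuously and volunteers help
    only when context warrants it. No prompt, no chat window."""
    for _ in range(cycles):
        event = poll_sensors()
        action = local_inference(event)
        if action:  # escalate only the rare, high-salience moments
            print(f"[proactive] {action}")
        time.sleep(0.05)  # constant low-friction presence, not a blocking request

if __name__ == "__main__":
    ambient_loop()
```

The design point is the filter: an embodied device spends most of its cycles deciding not to interrupt, which is precisely the behavior a request-response interface cannot express.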
Embodied Intelligence Is the Way Forward
Some experts believe the bottleneck is still in the models and that a better GPT or a fully realized AGI will make form factor irrelevant. That view overlooks the physical layer: the hardware’s sensing capabilities, its speed and its ability to mesh with human interaction. Even the smartest AI is useless if the device it lives in is too slow or clunky to interact naturally with people.
Legacy gadgets can’t deliver truly ambient AI, one that sees, hears and replies without a person hunched over a screen. The gap feels increasingly jarring given how interactions with LLMs are already edging into conversational flows. Still, the screen and keyboard remain, like ancient intermediaries, constantly disrupting the rhythm of those interactions. If the most advanced model in the world can’t sense its surroundings or act on them smoothly, its intelligence is effectively wasted.
So, instead of retrofitting consumer gadgets, the answer to the present quandary is embracing embodied intelligence. Figure AI’s humanoids, which have been shown performing logistics work without interruption, are a good example of this shift: advanced sensor fusion, spatial understanding and autonomous movement combining in machines that interact fluidly with the real world.
We need hardware that matches AI’s strength, and this so-called “physical AI” must combine action and awareness in real time, freeing intelligence from being locked behind screens. In a decentralized AI context, these embodied systems could even learn and share knowledge peer-to-peer, bypassing the bottlenecks that come from routing all intelligence through a handful of cloud operators.
Intelligence flourishes when embedded within environments that it can sense and manipulate directly, untethered from passive interfaces.
The AI Hardware Race Has Economic Stakes
This shift is also crucial for AI companies’ bottom lines. The biggest firms in the AI hardware industry won’t just sell the devices; they’ll also run the operating systems, APIs and app platforms built on top of them.
That’s why the io agreement matters for more than industrial design. The move could keep AI from being arbitrarily mediated by iOS and Android, two competing ecosystems that have every reason to keep control.
The deal also signals a reallocation in capital markets. Investors will pay more attention to AI-native device businesses, sensor makers and the decentralized physical infrastructure networks (DePIN) that will enable them. Nvidia’s GPU dominance is just the first act. The next one is about who builds the “nervous systems” for AI bodies: edge computing, secure local storage and real-time connectivity.
DeAI could play a key role here by distributing compute and decision-making across networks of devices, making these systems more resilient and less dependent on a single point of failure.
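To illustrate what removing the single point of failure could look like, here is a toy gossip protocol, a hedged sketch rather than any production DeAI framework: each device learns from its own local data, then averages its parameters with a random peer, so knowledge spreads across the fleet with no coordinating server. The EdgeDevice class and its one-weight “model” are deliberate simplifications introduced for this example.

```python
import random

class EdgeDevice:
    """Toy device holding a tiny 'model' (a single weight) that it trains locally."""
    def __init__(self, name: str):
        self.name = name
        self.weight = random.uniform(0.0, 1.0)  # stand-in for real model parameters

    def local_update(self) -> None:
        """Stand-in for on-device learning from local sensor data."""
        self.weight += random.uniform(-0.05, 0.05)

    def gossip(self, peer: "EdgeDevice") -> None:
        """Peer-to-peer averaging: both devices move toward consensus
        without ever contacting a central server."""
        merged = (self.weight + peer.weight) / 2
        self.weight = peer.weight = merged

def gossip_round(devices: list[EdgeDevice]) -> None:
    """One round: every device learns locally, then averages with a random peer.
    No device is special, so losing any one of them breaks nothing."""
    for device in devices:
        device.local_update()
        peer = random.choice([d for d in devices if d is not device])
        device.gossip(peer)

if __name__ == "__main__":
    fleet = [EdgeDevice(f"device-{i}") for i in range(5)]
    for _ in range(10):
        gossip_round(fleet)
    for d in fleet:
        # after a few rounds, weights cluster closely despite independent local updates
        print(f"{d.name}: weight={d.weight:.3f}")
```

Drop any device from the fleet and the remaining peers keep converging, which is the resilience property described above: decision-making is a property of the network, not of one server.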
Legacy OEMs face a tougher reality. If they keep making incremental improvements to smartphones, they risk losing their place as the primary gateway to computing. AI-native interfaces could displace mobile incumbents the same way mobile eroded the ground under the PC titans.
We Must Decide What AI’s Physical Future Will Be
The situation described above is similar to the evolution of blockchain. In the early days of crypto, concerns over scaling focused on how much data a protocol could handle. But true usefulness depended on wallets, payment cards and easy-to-use front ends, which comprise the “hardware” of the user experience.
The AI sector today is in danger of making the same mistake: assuming that improving models alone will lead to widespread use. Until there are purpose-built devices, AI will remain a back-end utility rather than an integrated part of daily life.
Markets should be ready for change, too. As AI-native technology spreads among consumers, value will shift dramatically. Semiconductor vendors tied to mobile could lose ground, while makers of specialized components such as LiDAR sensors, haptics and low-power inference chips could see their valuations rise.
The IPO pipeline for AI device businesses will also likely open up, though not every entrant will make it. The failure rate will be significant, but the prize for the winners will be enormous.
AI’s brain has already evolved; now the ecosystem needs a body that can keep up with it. If the industry keeps trying to put tomorrow’s intelligence into yesterday’s machines, progress will remain slow.