AI Is Getting Physical — and the Law Can’t Keep Up

From interactive toys to smart glasses, a new wave of AI-native devices is blurring the line between hardware and software — and slipping past outdated safety and privacy laws in the process.

Written by Brooke Becher
Published on Nov. 05, 2025
Reviewed by Ellen Glover | Oct. 30, 2025
Summary: Artificial intelligence is moving into physical devices that perceive, learn and adapt in real time, but U.S. regulations weren’t designed to handle such hybrid systems. With no single agency to govern them, these AI-enabled products linger in legal limbo and put users at risk.

Artificial intelligence is moving out of screens and into the physical world. It’s taking shape as health-tracking wearables, augmented reality-enabled glasses, voice-powered rings and pendants, talking plush toys and housekeeping bots that roll from room to room. These aren’t just gadgets with smart features; they’re AI-native devices, built to perceive their surroundings, learn from experience and react in real time. Over time, these intelligent machines begin to recognize patterns in how people move, speak and behave, fine-tuning their responses to feel more natural. That learning makes them seem almost alive — able to anticipate users’ needs or mirror their emotions in ways software alone never could.

Their constant awareness, however, is raising new concerns around privacy, safety and the subtle ways these devices have been shown to influence human behavior. Existing regulations in the United States were never designed for always-on machines that constantly watch, listen and react. Now, we’re left with a patchwork of largely outdated laws, with no unified framework for how AI-first devices should operate or be held accountable.

How Are AI-first Devices Being Regulated?

There are few laws written specifically for AI-first devices. In the United States, oversight is typically split between agencies like the FTC, which polices digital practices, and the CPSC, which ensures product safety. But because AI-first devices blur the line between software and hardware, no single agency has clear authority. The result is a patchwork of outdated rules that overlook how these systems collect data, shape behavior and evolve. By contrast, the European Union is moving faster with its risk-based AI Act, which imposes stricter rules on products that could cause harm, such as educational or children’s devices. The U.S. has no equivalent, leaving most AI-enabled gadgets largely unregulated.

We’ve seen this before with the Internet of Things. When smart home products and connected toys began flooding the market a decade ago, privacy and security regulations failed to keep up. The Federal Trade Commission has fined dozens of companies for leaving their devices vulnerable to hacks — such as VTech’s 2018 settlement over its data-leaking smart toys — but regulators can do very little about the broader risks of constant data collection inside homes. Even now, Amazon’s Ring and Alexa devices have faced repeated privacy complaints and lawsuits for recording users (and sometimes their neighbors) without consent. 

The same fragmented oversight that failed to rein in IoT devices is now repeating with artificial intelligence, only this time the technology can remember the most intimate details of your life and make decisions on its own as it grows smarter by the minute. 

Now, as the race to create the “next iPhone” of AI-first devices continues to intensify, innovation is moving faster than the laws meant to contain it. Companies are quietly setting the rules for how people interact with AI — long before regulators can weigh in — leaving users, and anyone nearby, to face the risks on their own.

 

The Rise of AI-Native Devices

The migration of AI into physical products isn’t just a design trend — it’s a response to a growing mismatch between what artificial intelligence can do and the hardware it’s trapped inside. Laptops and smartphones were built for the touch-and-type era, but models like OpenAI’s GPT-5 and Google’s Gemini process information in entirely different ways. They operate multimodally, probabilistically and in real time, reasoning across text, sound, visuals and movement. Yet they’re confined to screens and menu-based interfaces that throttle their potential. Designers and engineers are calling this the AI body problem, where the technology’s “mind” has outgrown its “body.”

That gap is driving a multi-billion-dollar wave of investment in hardware built specifically for embodied intelligence. And for the first time, the technology has reached an inflection point that makes it possible to build AI-first devices that can perceive, act and adapt without requiring users to constantly tap, swipe or prompt them into action. Edge computing can now keep data local, reducing lag and enabling instant responses without draining batteries or relying on the cloud. Specialized AI chips can handle complex neural network calculations efficiently enough to fit inside consumer devices at accessible prices. Meanwhile, sensor fusion — a computational technique that combines data from cameras, microphones, IMUs and touch sensors — gives these machines a continuous sense of the world around them. Together, these advances are turning AI from a cloud-based service into an adaptive, physical presence that can listen, remember and respond naturally.
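
To make the sensor fusion idea concrete, here is a minimal Python sketch of a complementary filter, one common fusion technique, which blends a gyroscope’s drifting rotation rate with an accelerometer’s noisy tilt reading into a single steadier estimate. The function name, constants and sample values are illustrative assumptions, not drawn from any shipping device.

    # Minimal complementary-filter sketch: one common form of sensor fusion.
    # All names and numbers below are illustrative, not from a real product.

    def fuse_tilt(gyro_rate_dps: float, accel_tilt_deg: float,
                  prev_tilt_deg: float, dt: float, alpha: float = 0.98) -> float:
        """Blend a gyroscope rate (smooth but drifts) with an accelerometer
        tilt reading (noisy but drift-free) into one steadier estimate."""
        gyro_estimate = prev_tilt_deg + gyro_rate_dps * dt  # integrate rotation rate
        return alpha * gyro_estimate + (1 - alpha) * accel_tilt_deg

    tilt = 0.0
    for gyro_rate, accel_tilt in [(1.2, 0.4), (0.9, 0.7), (0.5, 1.1)]:  # fake samples
        tilt = fuse_tilt(gyro_rate, accel_tilt, tilt, dt=0.01)
        print(f"fused tilt estimate: {tilt:.3f} degrees")

Real devices fuse many more streams — vision, audio, motion, touch — with far more sophisticated filters, but the principle is the same: combine imperfect signals so the whole is more reliable than any single sensor.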

Users are also growing fed up with what current devices can offer. AI has evolved beyond the point where it can simply be folded in through software updates. The average smartphone is quite powerful, but it’s fundamentally reactive.

“People crave personalized, always-on companions.”

“People crave personalized, always-on companions,” Neil Sahota, an AI advisor to the United Nations and the CEO of the think tank ACSILabs, told Built In. When people interact with AI, they want the interface to feel natural — to anticipate intent, assist ambiently and fade into the background when it’s not needed. That vision spans everything from toys that adapt to a child’s mood to smart glasses that translate foreign languages on the fly. “People trust things they can touch,” Sahota added.

That shift in expectation is what’s pushing companies to rethink the entire user interface, moving away from typical tap-and-type screens toward more contextual, sensory, voice-based interactions that feel less like commanding a machine and more like working alongside one.

We’re already seeing that future take shape. OpenAI’s $6.5 billion acquisition of former Apple designer Jony Ive’s io studio reportedly aims to build a screenless, pocket-sized device that gathers information from its surroundings with built-in cameras and microphones. Meta’s Orion AR glasses pair with an electromyography wristband that reads subtle nerve signals, providing haptic feedback as well as eye tracking and micro-LED projection for a hands-free experience. Amazon’s Bee bracelet records conversations, summarizes key points and syncs relevant details to a user’s calendar. Meanwhile, Google’s Gemini language models are being woven into everything from Android XR glasses to Nest speakers. Even kids’ toys are getting the AI treatment: Mattel is incorporating it into its Barbie and Hot Wheels series so they can talk, adapt and learn through play.

Each of these prototypes is an attempt to give AI a body that’s better suited for both the technology’s capabilities and its users’ expectations. Collectively, they offer an early glimpse into how people may one day live and work alongside AI-native machines.

Related Reading: Solving the AI ‘Body’ Problem Is Crucial to Unleashing Its Power

 

The Merging of Hardware and Intelligence

AI is changing how we think about machines. For decades, it was added as an afterthought — slipped in through software updates or cloud connections, rather than being built into the product itself. Today, that dynamic has flipped. AI now shapes how devices are designed from the ground up, requiring sensors, cameras, microphones and processors that allow them to continuously perceive their surroundings and adapt. 

And as AI becomes more and more woven into the hardware itself, the distinction between a device’s physical components and its digital intelligence is beginning to blur. The result is a new class of machines that “think” through their circuitry, where their hardware and software are no longer separate layers but all part of the same system. 

“The hardware landscape has fundamentally shifted,” Ravitez Dondeti, engineering manager at Crestron Electronics and independent AI developer, told Built In.

When Dondeti began building his first context-aware apps in the mid-2010s, on-device processing was out of the question due to severe CPU and RAM constraints, forcing developers to offload heavy computation to cloud servers. Modern chips have changed that. As semiconductors have shrunk to their smallest scales yet, pocket-sized devices can now run AI models locally, delivering faster responses while protecting user data and reducing dependence on cloud infrastructure. The evolution from desktops to laptops to smartphones and wearables once meant offloading more work to the cloud, but today, AI can live inside the hardware itself.

“We’re moving toward a future where AI handles most processing locally, and only encrypted, anonymized data moves to the cloud when absolutely necessary,” Dondeti said. “The challenge is that the smaller the device, the more tempting it becomes to offload to the cloud.”
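
As a rough illustration of that local-first pattern, the sketch below keeps inference on the device and sends the cloud only a pseudonymized, encrypted summary when one is actually needed. It assumes the third-party Python cryptography package; the model stub, payload fields and sample values are hypothetical.

    # Sketch of a local-first pipeline: raw sensor data stays on the device,
    # and only a minimal, encrypted summary ever leaves it.
    # Assumes the third-party "cryptography" package; all names are hypothetical.
    import hashlib
    import json
    from cryptography.fernet import Fernet

    KEY = Fernet.generate_key()   # in practice, provisioned securely per device
    cipher = Fernet(KEY)

    def classify_locally(audio_features: list[float]) -> str:
        """Stand-in for an on-device model; raw audio never leaves this function."""
        return "wake_word" if sum(audio_features) > 1.0 else "background"

    def build_cloud_payload(user_id: str, label: str) -> bytes:
        """Ship only a pseudonymous ID and a coarse event label, encrypted."""
        pseudonym = hashlib.sha256(user_id.encode()).hexdigest()[:16]
        summary = json.dumps({"user": pseudonym, "event": label})
        return cipher.encrypt(summary.encode())

    label = classify_locally([0.4, 0.9])      # the heavy lifting happens on-device
    if label == "wake_word":                  # upload only when strictly necessary
        print(build_cloud_payload("alice@example.com", label))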

Related Reading: OpenAI Appears to Be Entering Its Robotics Era. Here’s What We Know So Far.

 

The Regulatory Gap Around AI-First Devices

Current regulations in the United States were not designed to handle technologies that are both physical and intelligent. In fact, there are very few AI-specific laws at all, so most existing guidelines are adaptations of older laws meant for traditional hardware or software — not systems that merge the two.

As public policy scholar Matt Steinberg points out in a Tech Policy Press op-ed, most AI-embedded products don’t fit neatly into existing regulatory categories. These “emotionally responsive, behavior-shaping, memory-retaining hybrids” are “part software, part hardware and part something entirely new.” Yet today’s rules still treat products, services and software as separate categories. The Consumer Product Safety Commission (CPSC) oversees physical hazards like sharp edges or choking risks; the Federal Trade Commission (FTC) regulates deceptive or unfair digital practices; and platform rules govern apps distributed through stores. 

But AI-native devices — gadgets that watch what you do, listen to what you say non-stop, remember your preferences and respond in ways that feel almost human — straddle all three categories. No single agency has the authority or framework to fully police how these systems behave, collect data and influence people, leaving AI-enabled hardware in a regulatory Wild West. “The more physical AI becomes,” Steinberg wrote, “the more it slips through the frameworks we’ve tried to build.”

“The more physical AI becomes, the more it slips through the frameworks we’ve tried to build.”

The gaps are probably most visible when you look at AI-powered toys. The Children’s Online Privacy Protection Act (COPPA), written in 1998, was intended for websites that collect data from kids under the age of 13, not for toys that can hold conversations or form emotional bonds. When a smart doll like Mattel’s AI Barbie learns a child’s behaviors, remembers their preferences and adapts over time, COPPA doesn’t say anything about what that doll can say back — or how it might shape a child’s feelings. Congress has proposed a COPPA 2.0 to raise the protection age to 16 and require data deletion options, but even that wouldn’t fully address the psychological risks some AI companions pose.

Privacy protections for adults aren’t much stronger. If your smart glasses record a random person standing nearby, the legal outcome depends entirely on what state you’re in: About a dozen require two-party consent for recordings, while others don’t. That means a device could be perfectly legal in Texas but break the law in California. State laws like Illinois’ Biometric Information Privacy Act (BIPA) provide some safeguards for facial and voice data, but they don’t cover the more subtle “memory” and inference data AI systems now collect — things like emotional tone, habits or social patterns that models infer without explicit consent.

Even when regulators do step in, their reach is limited. The FTC can fine companies for misleading consumers about the data their devices collect, as it did with Chinese toymaker Apitor, but it can’t regulate how those toys actually talk to children. At this stage, the CPSC can recall a toy for overheating, but not for encouraging unhealthy attachment or social dependence. Traditional product testing checks for toxic materials or electrical faults — not whether a device is capable of emotionally manipulating users with biased or otherwise harmful responses.

Meanwhile, the underlying technology keeps racing ahead. Edge computing and new protocols like Anthropic’s Model Context Protocol allow AI systems to retain memory, connect to external tools and act across devices. That means your smartwatch could eventually function like a full-fledged AI agent that remembers past interactions and acts autonomously on your behalf. These advances certainly make AI-first devices more capable, but they also push them even further outside the reach of any single regulatory framework.
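
The memory-plus-tools pattern behind that kind of agent can be sketched in a few lines of Python. To be clear, this is a generic illustration of the idea, not the actual Model Context Protocol SDK, and every name in it is hypothetical.

    # Generic sketch of an agent that keeps memory across interactions and calls
    # registered tools. Not the Model Context Protocol SDK; names are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class AssistantMemory:
        facts: list[str] = field(default_factory=list)   # persists across sessions

        def remember(self, fact: str) -> None:
            self.facts.append(fact)

    TOOLS = {
        # A hypothetical external tool the agent is allowed to invoke.
        "add_calendar_event": lambda title: f"added '{title}' to calendar",
    }

    memory = AssistantMemory()
    memory.remember("prefers morning meetings")              # retained for next time
    print(TOOLS["add_calendar_event"]("Dentist, 9 a.m."))    # autonomous action
    print(memory.facts)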

Related Reading: How to Navigate AI Regulations to Balance Innovation and Compliance

 

What Would Responsible Regulation of AI-First Devices Look Like?

There’s no clear label that fully captures what an AI-first device is — it’s a product, a service and a constantly learning behavioral agent all at once. What exists now is a patchwork of rules that punish data misuse but leave deeper psychological and social harms unaddressed. “We need a framework that creates lifecycle accountability, from idea to retirement,” Sahota suggested.

Under such a framework, post-sale updates would be treated as new product releases rather than invisible patches, and each device would receive a dynamic risk rating, allowing continuous re-certification as its underlying intelligence evolves. Oversight could be managed by a dedicated board — or by joint teams drawn from agencies such as the FTC and CPSC — qualified to examine both physical product safety and AI behavior at the same time, while keeping privacy protections consistent nationwide.

Some regions are moving quickly in this arena. The European Union’s AI Act takes a risk-based approach, classifying AI systems by the potential harm they could cause and requiring extra oversight for high-risk applications, such as educational or children’s products. The law puts the onus on providers to deliver a safe service, sharing a small portion of that burden with deployers and other commercial entities in the supply chain. For example, the disgraced doll My Friend Cayla was banned in Germany after regulators deemed it an “illegal espionage apparatus” for its voice-recording and data-transmitting capabilities. The United States, unsurprisingly, has no equivalent. State laws like California’s CPRA and federal rules such as COPPA focus on data collection and privacy, but they don’t address the behavioral or emotional effects of adaptive, memory-enabled AI.

In terms of built-in safeguards, regulators could require clearer disclosure labels that spell out what data is being collected and how it will be used, memory-reset options and algorithmic transparency for devices that interact conversationally, Dondeti suggested. AI-embedded products should also prioritize processing sensitive data locally whenever possible, secure it with end-to-end encryption both in storage and in transit, and default to opt-in consent, so users have to explicitly agree before any data collection begins (currently it’s the reverse).
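
What opt-in by default could look like in practice can be sketched as a simple settings schema: every data-sharing flag starts off, and nothing changes until the user acts. The Python below is illustrative only; its field names are assumptions, not taken from any real product.

    # Hedged sketch of opt-in-by-default privacy settings; field names are
    # hypothetical and not drawn from any actual device.
    from dataclasses import dataclass

    @dataclass
    class PrivacySettings:
        cloud_uploads: bool = False            # off until the user explicitly agrees
        voice_history_retained: bool = False
        share_with_third_parties: bool = False

        def reset_memory(self) -> None:
            """User-facing control that wipes locally stored interaction history."""
            print("local memory cleared")

    settings = PrivacySettings()
    assert not settings.cloud_uploads          # nothing leaves the device by default
    settings.cloud_uploads = True              # flips only after an explicit opt-in
    settings.reset_memory()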

Other possible guardrails, Sahota added, might include modes that can verify a user’s age automatically, along with an adaptive risk kill-switch that immediately shuts down or resets the device if it starts behaving unsafely. Devices could also include an explainability layer, allowing users to simply ask, “Why did you say or do that?”

“Truthfully, each AI device needs a black box recorder, a secure log that tracks what it sensed, what decisions it made and how it changed, so anyone can review it at any time,” he said. “It’s the only way to audit.”
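
One plausible shape for such a black box is an append-only log in which each entry is chained to the hash of the one before it, so any tampering becomes detectable on review. The sketch below is an illustration under those assumptions, not a standard or an existing product feature.

    # Minimal sketch of a tamper-evident "black box" audit log: each entry records
    # what the device sensed and decided, chained to the previous entry's hash.
    # The fields and hashing scheme here are illustrative assumptions.
    import hashlib
    import json
    import time

    class AuditLog:
        def __init__(self) -> None:
            self.entries: list[dict] = []
            self._prev_hash = "0" * 64

        def record(self, sensed: str, decision: str) -> None:
            entry = {"ts": time.time(), "sensed": sensed,
                     "decision": decision, "prev": self._prev_hash}
            self._prev_hash = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()).hexdigest()
            entry["hash"] = self._prev_hash
            self.entries.append(entry)

    log = AuditLog()
    log.record(sensed="voice command: 'play music'", decision="started playback")
    log.record(sensed="unrecognized face at the door", decision="did not unlock")
    print(json.dumps(log.entries, indent=2))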

Until such measures exist, AI-enabled products will continue to evolve faster than the rules meant to govern them, leaving users — especially children and bystanders — exposed to risks that current laws do not consider. After all, these devices aren’t just collecting our data; they’re influencing our behavior in ways that can shape our emotions, decisions and even social interactions. Without a proper regulatory framework in place, all of the responsibility falls on users, who may not even realize just how powerful these machines already are — or how quickly they’re advancing.

Frequently Asked Questions

What are AI-first devices?

AI-first (or AI-native) devices are gadgets that integrate artificial intelligence from the ground up, using AI to process data and enable more intuitive, hands-free human-computer interaction without relying on traditional input methods like touchscreens. Instead, they use sensors, cameras and microphones to observe the environment around them, anticipate users’ needs and respond in real time. Examples include AI-powered wearables, smart glasses, interactive toys and autonomous home robots.

What was the first physical AI device?

The first physical device to implement AI was Shakey, a robot developed at SRI International between 1966 and 1972. Shakey could reason about its actions, navigate its environment and complete tasks without step-by-step instructions.

Why do AI-first devices raise privacy and safety concerns?

Because they’re always on — listening, watching and learning — AI-first devices can gather intimate personal data, from voice and facial expressions to emotions and habits. That creates risks of surveillance, manipulation or data misuse. Even bystanders may be recorded or analyzed without consent. Some experts also warn of emotional dependency on these devices, particularly those that offer a sense of companionship and are marketed to children.
