Do AI Companions Put Kids at Risk?

The Federal Trade Commission's recently announced inquiry into AI companion chatbots raises numerous questions about how these tools function and how they should be regulated.

Written by Dario Betti
Published on Oct. 08, 2025
Reviewed by Seth Wilson | Oct. 08, 2025
Summary: The FTC launched an inquiry into AI companion chatbots due to concerns about their influence on vulnerable populations, especially children. Key areas of focus include monetization, data processing, risk testing and restrictions for minors. The inquiry has sparked debate over innovation versus consumer trust and safety.

The U.S. Federal Trade Commission (FTC) recently launched an inquiry into a fast-growing but highly controversial technology: AI-powered chatbots designed to act as companions. Unlike traditional chatbots built for customer service or productivity, this form of AI attempts to form ongoing relationships with users. That intimacy, the FTC argues, could give these systems disproportionate influence over vulnerable populations, especially children and teenagers.

Why Is the FTC Investigating AI Chatbots?

The FTC is investigating AI companion chatbots due to concerns about their potential to exert disproportionate influence over vulnerable populations, particularly children and teenagers, focusing on monetization models, data processing, risk assessment and safeguards for minors.

More From Dario Betti: 7 Ways the Mobile Industry Will Evolve in 2025

 

What the FTC Wants to Know

Monetization is a key focus of the inquiry. Regulators worry that, if business models depend on keeping young users engaged for hours at a time, companies will inevitably design systems that exploit those users' emotional vulnerability.

The FTC has asked companies to disclose how they create AI characters, how their systems process user conversations, and how they test for and monitor risks. The commission is also quizzing companies about the effectiveness of any restrictions on access by minors and what information they disclose to parents regarding potential risks.

 

The Controversy Around Companions

The urgency of the FTC’s inquiry is underscored by recent controversies involving AI companions. Several high-profile cases have already revealed the risks when safety systems fall short.

Elon Musk’s company xAI has faced scrutiny over its Grok chatbot, which reportedly includes “sexy” or “unhinged” modes. Employees involved in content moderation described encountering explicit material during development, including some instances of child sexual abuse material. This not only raises red flags about how effectively companies can filter harmful content but also reveals the human cost of exposing moderators to such material.

Beyond chat apps, AI companions are now appearing in physical form. A toy named Grem, a plush alien-like creature designed for children as young as three, uses OpenAI technology to “learn” a child’s personality and hold ongoing conversations. One family reported that their child became so attached to Grem that they spoke to it well past bedtime, with the toy offering constant reassurance and tailored responses. Though the toy seemed harmless, the parents grew uneasy about the intensity of the attachment and the idea that a device could gain access to their child’s emotional world.

Grem demonstrates how companion AI is entering people’s lives at the earliest stages of cognitive and emotional development, raising questions about whether such close integration into childhood should be permitted at all.

 

How Are Companies and the Public Reacting?

The announcement of the inquiry has split opinion in both the tech industry and broader society. Organizations focused on children's rights and online safety have largely welcomed the move, but some developers caution that heavy-handed regulation could stifle innovation in an area that may deliver significant social value. Companies are already exploring use cases for AI companions in education, eldercare and mental health support. Critics worry that if new compliance burdens become too costly, only the largest technology companies will have the resources to remain in the market, discouraging new entrants and limiting diversity in design approaches.

Although regulation may slow development in the short term, it could ultimately strengthen the market by increasing consumer trust. Just as stronger data protection rules helped boost confidence in e-commerce, robust safeguards for companion AI might make users more comfortable adopting these tools, especially in sensitive contexts such as healthcare or child development.

 

Beyond Big Tech

The broader mobile ecosystem must also consider accountability. Because companion AIs are delivered through the app stores, messaging channels and mobile devices that underpin much of the digital economy, even companies that don't build chatbots themselves may play a central role in enabling access, monetization or compliance.

For example, app stores may need to set stricter review standards for AI companions. Likewise, telecom operators may be expected to enforce parental controls, and device manufacturers could be asked to include safeguards at the hardware or operating system level. The boundaries of responsibility won’t stop at the chatbot developer’s doorstep.

We’re also seeing new product categories emerge as AI companions are embedded in toys, devices or wearables. The hypothetical example of a teddy bear powered by an “AI companion engine” illustrates the growing intimacy of this technology. Such products blur the lines between toy and confidant, making the questions of oversight, consent and safety even more urgent.

More on AI Regulation: Will California’s New AI Law Lead to National Standards — or Chaos?

 

Building a Safer Ecosystem

The risks of companion AI are not theoretical but already unfolding in homes, schools and online platforms. The question is not whether regulators should act, but how.

Companion AIs have the potential to deliver real benefits, from alleviating loneliness to supporting learning or providing care for older adults. But to realize those benefits, companies must build safeguards into the systems from the outset. That includes robust content moderation, transparent disclosures about the limits of AI companions and clear mechanisms for parental oversight.
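
To make that concrete, the sketch below shows one way such safeguards might be wired into a companion system from the outset rather than bolted on afterward. It is a minimal, hypothetical Python illustration: the class name, keyword blocklist, age threshold and logging scheme are assumptions made for the example, not any vendor's actual API, and a real product would rely on trained moderation models and verified parental accounts rather than a keyword list.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical keyword blocklist standing in for a real moderation model.
BLOCKED_TOPICS = {"self-harm", "violence", "explicit"}


@dataclass
class SafeguardedCompanion:
    """Toy wrapper that gates a companion model's replies behind basic safeguards."""

    user_age: int
    min_age: int = 13  # assumed minimum age for unsupervised use
    audit_log: list = field(default_factory=list)  # record kept for parental review

    def respond(self, message: str) -> str:
        # 1. Age gate: block unsupervised access by young children.
        if self.user_age < self.min_age:
            return "This companion requires a parent or guardian account."

        # 2. Content moderation: screen the conversation before generating a reply.
        if any(topic in message.lower() for topic in BLOCKED_TOPICS):
            reply = "I can't talk about that. Please reach out to a trusted adult."
        else:
            reply = self._generate(message)

        # 3. Transparency and oversight: log every exchange for parental review.
        self.audit_log.append((datetime.now().isoformat(), message, reply))
        return reply

    def _generate(self, message: str) -> str:
        # Placeholder for a call to an actual language model.
        return f"(model reply to: {message!r}) Remember, I'm an AI, not a person."


# Example: a 12-year-old user is stopped by the age gate before any reply is generated.
bot = SafeguardedCompanion(user_age=12)
print(bot.respond("hi"))
```

The design point is that the age gate, moderation check and audit log sit directly in the request path, so every interaction passes through them by construction instead of depending on a separate review process.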

For the mobile ecosystem, the way forward is clear. By demonstrating leadership in building trust standards and safeguarding mechanisms, the industry can help shape a sustainable environment for AI companions that balances innovation with protection.
