ChatGPT Is Relaxing Its Rules Around Sexual Content

Soon, ChatGPT will allow sexually explicit conversations with verified adults — a move OpenAI says is simply about “treating adult users like adults.” But with competition heating up, this is likely also a play for engagement.

Written by Brooke Becher
Published on Dec. 17, 2025
Reviewed by Ellen Glover | Dec 15, 2025
Summary: Soon, OpenAI will allow verified adults to have sexual conversations with ChatGPT. The move marks a shift from safety‑focused principles to a more engagement-driven strategy, raising concerns about ethics and user well‑being.

Our relationship with artificial intelligence is getting more personal. What began as a tool for productivity and creativity is increasingly being marketed as a companion — one that listens, offers advice and even fulfills sexual or romantic desires. Across the internet, millions of users are turning to chatbots not just for answers to basic questions, but for affection. From role-playing apps to digital girlfriends and boyfriends, AI-powered intimacy is fast becoming one of the technology’s most popular use cases.   

Now, one of the biggest players in the industry is entering the space: OpenAI. Once known for its principled approach to AI safety, the company will debut an “adult mode” in ChatGPT in early 2026, according to its CEO of Applications, Fidji Simo.

The idea was first floated by CEO Sam Altman in October 2025, when he posted on X that the chatbot will be able to produce “erotica,” essentially allowing it to engage in sexually explicit conversations with verified adults. After receiving some backlash, Altman defended the move as part of a broader effort to “allow more user freedom,” adding that OpenAI is “not the elected moral police of the world.”

What Exactly Is ChatGPT’s New Erotica Feature?

This isn’t so much a new feature as it is a rollback of restrictions for verified adult users. OpenAI is relaxing some of its safety guardrails, allowing ChatGPT to engage in a wider range of sexual or explicit conversations with adults, rather than blocking them outright. In practice, this expands what the chatbot is permitted to respond to, while also keeping stricter protections for minors and maintaining moderation systems to flag harm or user distress.

Critics argue that this is nothing more than a cheap play for engagement — “emotional commodification,” as one expert described it to Wired. Indeed, as competition in the generative AI industry tightens and investors push for growth, facilitating sexual interactions may prove to be one of the most powerful user-retention tools of all. Others, however, see the use of AI companions as a harmless way to provide comfort and emotional support to lonely users, and the next logical step in personalized entertainment.

Either way, OpenAI’s new direction reveals a larger shift away from pure functionality and toward emotional entanglement. It is leveraging one of the oldest, most irresistible forces on Earth — desire — to capture attention and expand its influence. For a company that, to this day, claims its mission is to advance artificial intelligence in ways that “benefit humanity as a whole,” the optics are especially striking.

Related Reading: OpenAI Is on a Hot Streak. But How Long Will It Last?

The Rising Demand for AI Companionship 

Demand for AI companionship is growing fast. Hundreds of sites now offer chatbots that simulate sexual intimacy — customized to fulfill whatever fantasy a user can imagine. In an interview with Bloomberg Tech, Lauren Kunze, CEO of Pandorabots — the platform that inspired Spike Jonze’s cyber-romance film Her — estimated that about 30 percent of all user interactions with chatbots end up being romantic or sexual in nature. Over the decades, she’s witnessed how people can anthropomorphize machines and form emotional attachments to them that deepen into dependency.

“Sexualized use cases are really going to have a negative impact on our mental health, our interpersonal relationships far beyond what we’ve seen with pornography and social media,” Kunze warned. That’s why her company, which offers more than 300,000 AI companions, has deliberately avoided sexualized chatbots, citing the technology’s outsized appeal among younger users and its tendency to encourage antisocial or dangerous behavior.

Those risks are already surfacing. Online forums are now filled with accounts of users falling victim to so-called “AI psychosis,” a potentially life-altering mental health crisis brought on by obsessive chatbot usage. Children and teens appear to be especially vulnerable, prompting some parents to sue chatbot makers for allegedly mentally or even sexually abusing their kids. What’s more, thousands of people have reportedly used conversational AI to roleplay the sexual assault of minors.

Several major AI companies have begun refining their safety and content policies to address these issues. OpenAI specifically has strengthened its moderation systems — including new classifiers to detect signs of acute distress, improved filters to flag harmful material and separate policies for minors and adults. Altman says these systems have made it possible for the company to “safely relax” some restrictions and allow adult users to engage in more sexually explicit conversations with ChatGPT.

Whether these protections will actually prevent the psychological and social fallout Kunze predicts remains to be seen.

Related Reading: Age Verification Is Taking Over the Internet — But at What Cost?

Why Is OpenAI Doing This?

Altman says allowing ChatGPT to engage in sexualized conversations is simply about “treating adult users like adults.” But odds are this wasn’t a philosophical decision so much as a financial one.

Despite having a valuation of $500 billion, OpenAI’s annual revenue is only about $13 billion — and it has committed to spending a whopping $1 trillion over the next five years. With more than $57 billion in investor funding and mounting pressure to finally demonstrate profitability, even a controversial engagement booster like this one can look like an attractive short-term fix. Plus, competition is heating up fast. In fact, in an internal memo to employees earlier this month, Altman reportedly said the company was declaring a “code red” effort to boost the quality of ChatGPT and fend off rivals, including improving personalization, increasing speed and reliability and allowing it to answer a wider range of questions.

If anything, the sudden plot twist of major tech companies sexualizing their chatbots might be the clearest sign yet that the $1.5 trillion AI bubble is beginning to strain. The industry is awash in hype and inflated valuations, fueled by a closed circuit of increasingly large financing deals and TED Talk-style product launches. But is the technology itself really delivering? Xun Wang, CTO at Bloomreach, warned that the expectations around AI far outstrip what it can really deliver. And a recent MIT study found that 95 percent of generative AI projects fail to drive meaningful revenue. If you’re already hundreds of billions of dollars in the red, adding adult‑oriented features becomes an all-too-easy way to boost engagement without delivering any real productivity gains.

While sexually explicit AI features will likely draw users in fast, they also raise questions about how far companies are willing to stretch their original missions in pursuit of financial success. After all, OpenAI isn’t the first big name to make this shift. xAI, a company founded by Elon Musk to “understand the true nature of the universe,” rolled out anime-style companions earlier this year that can engage in flirty banter with users — and will even remove articles of clothing when prompted.

There appears to be a growing tension between profitability and moral principles, especially in an environment where regulations remain limited and oversight is slow to catch up. Even as capital continues to flow into the industry, it’s becoming less clear whether the technology will fulfill its early promises of solving humanity’s biggest challenges or simply devolve into yet another form of mindless entertainment. Regardless, the boundary between innovation and commercialization is narrowing, and how companies choose to navigate that divide will likely shape the public’s trust in AI for years to come.

Frequently Asked Questions

Who can use ChatGPT’s erotica feature?

Only verified adult users can engage in sexually explicit conversations in ChatGPT. OpenAI says these relaxed restrictions apply exclusively to adults, while accounts linked to minors remain subject to stricter content limits, along with moderation systems designed to block harmful or exploitative material.

Does OpenAI store sexually explicit conversations?

Yes — OpenAI collects and stores all ChatGPT conversations, including sexually explicit ones, according to its privacy policy. However, the company says sensitive content is subject to additional protections, and users can limit the data that gets collected by adjusting their settings.

How does ChatGPT verify a user’s age?

OpenAI uses AI-powered age prediction software that analyzes signals like word choice, topics and activity times to infer a user’s age. If an account appears to belong to someone under 18 — or the user self-identifies as a minor — stricter content limits are applied. If an adult user is misclassified as a minor, they can verify their age by uploading a photo of their government ID along with a selfie to restore full access.
