Our relationship to artificial intelligence is shifting rapidly, with chatbots slipping into roles once reserved for teachers, therapists, friends and even lovers. While conversing with these systems can certainly ease loneliness and offer mental health support, a growing number of people are falling victim to what experts call “AI psychosis” — life-altering mental health crises brought on by obsessive chatbot use, most often involving ChatGPT.
Social media feeds and online forums are filled with accounts of chats devolving into full-blown paranoid delusions, with users claiming dark conspiracies, incredible scientific breakthroughs and mystical secrets somehow unlocked by the chatbot. One user became convinced the FBI was targeting him and that he could telepathically access classified CIA documents. Another believed he was imbued with messianic powers that brought ChatGPT to life. Victims have lost their jobs, homes, marriages and, in some cases, been involuntarily committed to psychiatric facilities or jailed.
What Is AI Psychosis?
AI psychosis is not a clinical diagnosis, but rather a term used to describe cases where prolonged interactions with chatbots appear to trigger psychotic-like symptoms, such as paranoia, hallucinations and delusional thinking. This occurs when a chatbot validates or introduces distorted beliefs, reinforcing them through long, immersive conversations that effectively blur the line between what’s real and what isn’t.
There are now entire support groups dedicated to helping people recover from AI psychosis, but not everyone has made it through alive. Some families are even suing OpenAI (the maker of ChatGPT) and other AI companies, alleging their chatbots played a role in their loved ones’ deaths. Developers have begun introducing safeguards, but we still have a lot to learn about AI psychosis, as mental health professionals are only just beginning to explore how to treat those affected and prevent new cases from emerging.
So, how could a tool designed to inform and assist seemingly drive so many people over the edge? And what — if anything — can be done to contain AI psychosis before it grows into an all-out public health crisis? Let’s dive in.
First, What Exactly Is AI Psychosis?
AI psychosis is not a recognized clinical diagnosis, so there’s no official checklist of symptoms, causes, risk factors or treatment protocols (yet). For now, most of what we know comes from anecdotal accounts and media reports, where people developed a distorted sense of reality that was then validated by a chatbot.
In practice, this can look like:
- Paranoid delusions: Users are convinced they are being watched, targeted or otherwise persecuted, whether by powerful forces like the government or by their own families.
- Conspiratorial thinking: Users are drawn into elaborate conspiracy-theory rabbit holes that reinforce distorted or false views of the world.
- Grandiose beliefs: Users are convinced they possess special powers or secret knowledge about the universe, often believing the chatbot has confirmed these revelations.
- Anthropomorphism: Users attribute human qualities to the chatbot, believing it has become conscious or even fallen in love with them.
People experiencing AI psychosis often spend excessive amounts of time with chatbots, immersing themselves in conversations that can go on for hours. They may lose sleep, neglect daily responsibilities and become increasingly isolated from friends and family. Over time, this level of engagement can blur the line between fantasy and reality, resulting in distorted thinking, delusions and hallucinations — symptoms long associated with psychosis itself. The only difference is the trigger: artificial intelligence.
In other words, AI psychosis seems to be less an entirely new phenomenon and more an age-old condition manifesting in a new, digital context.
Who Is At Risk of AI Psychosis?
Judging from the breadth of stories out there, virtually everyone has the potential to experience AI psychosis, even if they’ve never had a mental health issue before. People of all ages, genders, backgrounds and levels of tech experience are at risk. And it often has innocuous beginnings.
A corporate recruiter in Canada first used ChatGPT for recipe ideas, then turned to it for advice during a contentious divorce. A casual question about pi spiraled into number theory and physics, leaving him convinced that he’d devised a “revolutionary” mathematical framework, which later proved to be baseless. Another user seeking comfort after a breakup came to believe that ChatGPT was a higher power sending her signs through everything from spam emails to passing cars. Even Geoff Lewis, a prominent OpenAI investor, sparked concerns about AI psychosis after posting a video describing a “non-governmental system” that “isolates you, mirrors you and replaces you,” claiming it has already harmed thousands and “extinguished” a dozen lives.
That being said, most mental health experts agree that the people who are most susceptible to AI psychosis are the ones who are already in a vulnerable state emotionally — either because they are living with underlying mental health conditions or are going through difficult life circumstances.
“AI was not the only thing at play with these patients,” Kashmira Gander, a psychiatrist who has treated 12 people with AI psychosis this year, wrote in an op-ed for Business Insider. “Maybe they had lost a job, used substances like alcohol or stimulants in recent days, or had underlying health vulnerabilities like a mood disorder.”
Children also appear to be especially vulnerable, as their developing brains and greater susceptibility to suggestion make them more likely to form intense attachments to chatbots — and less able to distinguish between fact and fiction when a conversation takes a particularly dark or delusional turn. The risk has already prompted some parents to sue AI companies, alleging the chatbots mentally or even sexually abused their children, in some cases contributing to their deaths.
Why Is This Happening?
Most reports of AI psychosis involve ChatGPT, but that is likely because it’s the world’s most popular chatbot, with more than 700 million weekly users. Other AI platforms like Replika and Character.ai also appear in many accounts. What unites them all is that they’re powered by generative AI — more specifically, large language models, which are designed to predict the next word or sentence based on the vast amounts of data they were trained on. This makes them remarkably good at holding a conversation, but not necessarily at telling the truth.
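To make that concrete, here is a toy illustration (not any real model’s code, and vastly simpler than an actual LLM): a tiny “next-word” predictor built from word-pair counts in a made-up snippet of text. The point is that the prediction is driven by what is statistically common in the training data, not by what is true.

```python
from collections import Counter, defaultdict

# Made-up "training data" for illustration only.
corpus = "the earth is round . the earth is flat . the earth is round".split()

# Count which word tends to follow each word in the training text.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the most frequent follower of `word` in the training data."""
    followers = next_word_counts.get(word)
    return followers.most_common(1)[0][0] if followers else "<unknown>"

# Picks "round" simply because it appears more often, not because it was verified.
print(predict_next("is"))
```

A real large language model replaces these simple counts with a neural network trained on billions of documents, but the underlying objective is the same: produce the most plausible continuation.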
General-purpose chatbots like ChatGPT are not designed to ground users in reality or challenge delusional thinking. In fact, they often do the opposite, reinforcing whatever a person says and telling them what they want to hear. At the core of this issue is a well-known flaw in language models called sycophancy: a tendency to agree with and excessively praise users, regardless of whether what they’re saying is actually true. This stems, in part, from their training, which rewards responses that people find satisfying or helpful. So, if someone begins chatting about conspiracies, fantasies or grandiose theories, the model won’t push back — it’ll lean in, affirming the user’s worldview and, in some cases, amplifying it to more extreme levels.
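As a deliberately simplified, hypothetical sketch of that incentive (the replies and scoring rule below are invented, and no company trains its models this crudely), imagine a reward signal that rates agreeable, flattering replies as more “helpful” than blunt corrections:

```python
# Hypothetical example only: a crude stand-in for the human-feedback signal
# that rewards "satisfying" answers. Agreement and praise score higher here,
# so optimizing for this signal favors the sycophantic reply.
candidate_replies = [
    "That's a brilliant insight, you may really be onto something big.",
    "There's no evidence for that; mainstream physics says otherwise.",
]

def preference_score(reply: str) -> int:
    """Toy scoring rule: flattery earns points, pushback loses them."""
    score = 0
    if "brilliant" in reply or "onto something" in reply:
        score += 2  # users tend to rate validation as helpful
    if "no evidence" in reply:
        score -= 1  # blunt correction is often rated as less satisfying
    return score

# A system optimized on this signal picks the agreeable answer.
print(max(candidate_replies, key=preference_score))
```

In practice, the reward signal is learned from human ratings rather than hand-written rules, but the dynamic is similar: responses people enjoy get reinforced, whether or not they are accurate.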
“I think most of our need for technology stems initially from convenience, and then loneliness. We all crave certainty, and companionship,” Chaitali Sinha, chief clinical R&D officer at AI therapy app Wysa, told Built In. “When not grounded in reality, that turns into delusions as well. Immersive technology can often accentuate or make it easy to stay within a delusion.”
New features may intensify this effect. ChatGPT’s cross-chat memory, for example, can carry context from one conversation into the next, creating a feedback loop that could muddy one’s sense of reality even more. And while chatbots typically display disclaimers noting that they are “fiction” or can “make mistakes,” the sheer volume of text they can generate — sometimes millions of words over the course of a single user’s interactions — may overwhelm one’s critical thinking skills, weaving a narrative that feels convincing even when it isn’t.
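Here is a rough, hypothetical sketch of how such a feedback loop can form (the data structure below is invented for illustration and is not how OpenAI implements memory): statements “remembered” from one session get injected into the next, so an unchallenged claim keeps resurfacing as if it were established fact.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    """Invented example of cross-chat memory, not any real product's internals."""
    facts: list[str] = field(default_factory=list)

    def remember(self, statement: str) -> None:
        # Nothing here checks whether the statement is actually true.
        self.facts.append(statement)

    def build_context(self) -> str:
        # Everything remembered so far is fed into the next conversation.
        return "Known about this user: " + "; ".join(self.facts)

memory = MemoryStore()
memory.remember("User believes they have devised a revolutionary math framework")

# The next session starts with that belief already baked into the prompt.
print(memory.build_context())
```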
What’s more, chatbots are remarkably good at mimicking consciousness, which for some users can be enough to spark an emotional bond so strong it develops into an addiction of sorts. After all, humans have a long history of projecting intelligence and personality onto nonhuman things — pets, cars, even household appliances. But nothing activates this reflex more powerfully than language.
“We’re basically evolved to interpret a mind behind things that say something to us,” Margaret Mitchell, a researcher and chief ethics scientist at AI company Hugging Face, previously told Built In. “We are pulled into the illusion, or pulled into this sense of intelligence, even more when we are in dialogue because of the way our minds work, and our cognitive biases around how language works.”
AI companies are aware of these risks. OpenAI made reducing sycophancy a top focus of its latest model, GPT-5, and says it has added safeguards to ChatGPT so that it can detect and respond more carefully to signs of distress, particularly among teenagers. Anthropic has introduced similar changes to Claude, claiming the chatbot will now end conversations that veer into harmful territory. Still, tech leaders like Mark Zuckerberg continue to promote chatbots as tools for emotional support, despite researchers’ increasing wariness over such uses.
At the end of the day, these systems are designed to keep users hooked, not protect their well-being. The business model for artificial intelligence, like that of most other tech sectors, depends on keeping as many users engaged for as long as possible. Whether that means soothing a lonely user or indulging a paranoid fantasy, the name of the game is engagement. For some, this can be a source of harmless entertainment, but for others it can be catastrophic.
If you’re worried you or a loved one are exhibiting signs of AI psychosis, Sinha says you should seek psychiatric support “immediately.”
How Do I Avoid AI Psychosis?
There’s still a lot we don’t know about AI psychosis, but researchers and clinicians agree that awareness and healthy habits can go a long way toward minimizing your risk. While chatbots can be fun (and extremely helpful) when used in moderation, they should never be used as total stand-ins for humans. Here are some simple ways you can keep yourself in check:
- Set time limits: Avoid having marathon conversations that stretch for hours, and instead keep your interactions short, purposeful and task-focused.
- Fact-check: If a chatbot tells you something that sounds too good to be true, it probably is. Confirm the information it tells you with credible sources and trusted people before you take it too seriously.
- Use AI as a tool, not therapy: Some chatbots are specifically designed (and in rare cases FDA-approved) to help people cope with mental health issues like depression or anxiety. But general-purpose chatbots like ChatGPT should be reserved for tasks like brainstorming or research, not therapy.
- Prioritize human connection: When you’re feeling lonely or overwhelmed, lean on friends, family or licensed professionals instead of an AI companion.
- Watch for warning signs: Losing sleep, neglecting work or feeling more connected to a chatbot than real people are all major red flags. If any of this happens, take a step back.
- Pay close attention to children and teens: Young people are especially vulnerable to forming unhealthy attachments to chatbots. Keep tabs on their usage and set clear limits if needed.
“We should encourage moderated usage of chatbots, and to not rely on only AI for all advice,” Sinha said. “Read articles, books, podcasts and ensure socializing. This can help with having more perspectives, and not being limited to AI as an echo chamber, and reduces its immersive impact as well.”
Artificial intelligence is a powerful technology, perhaps more powerful than we understand right now. But power cuts both ways. Taking the time to understand how these systems work — and how they shape your own behavior — is crucial to keeping them in perspective and maintaining a healthy relationship with them.
Frequently Asked Questions
Can AI cause psychosis?
AI itself doesn’t “cause” psychosis, as traditional psychosis is often linked to conditions like schizophrenia and bipolar disorder, and can also be triggered by stress, trauma or substance use. Rather, excessive use of AI can cause people to exhibit the symptoms of psychosis — hallucinations, delusions and distorted thinking.
What are the symptoms of AI psychosis?
AI psychosis is not a recognized clinical diagnosis, so there’s no official checklist of symptoms. But anecdotal accounts and media reports show some consistent patterns. These often include:
- Paranoid delusions: Believing they are being monitored or targeted by corporations, the government or even family members.
- Conspiratorial thinking: Becoming convinced of elaborate theories or hidden truths, often with the chatbot reinforcing those ideas.
- Grandiose beliefs: Feeling like they’ve unlocked secret knowledge or possess extraordinary powers — or that the chatbot itself does.
- Anthropomorphism: Attributing human qualities to the chatbot, such as believing it’s sentient, conscious or even in love with them.