Updated by Brennan Whitfield | Jul 14, 2023

The Eliza effect occurs when someone falsely attributes human thought processes and emotions to an artificial intelligence system, thus overestimating the system’s overall intelligence. If you’ve ever felt like your Google Assistant had its own personality, or felt a sense of kinship while conversing with ChatGPT, you’ve likely fallen for the Eliza effect.

“It’s essentially an illusion that a machine that you’re talking to has a larger, human-like understanding of the world,” Margaret Mitchell, a researcher and chief ethics scientist at AI company Hugging Face, told Built In. “It’s having this sense that there’s a massive mind and intentionality behind a system that might not actually be there.”

What Is the Eliza Effect?

The Eliza effect refers to people’s tendency to falsely attribute human thought processes and emotions to an AI system, thus believing that the system is more intelligent than it actually is. The phenomenon was named after ELIZA, a chatbot created in 1966 by MIT professor Joseph Weizenbaum.

While the phenomenon may seem reminiscent of science-fiction movies like Her and Ex Machina, it does not take a highly sophisticated piece of artificial intelligence to trigger the Eliza effect. Rather, its name stems from a fairly rudimentary chatbot released in the 1960s. And everyone from journalists to trained computer scientists has experienced it over the years.

“Pretty much anybody can be fooled by it,” Gary Smith, author of Distrust: Big Data, Data Torturing, and the Assault on Science, told Built In.


The Origins of the Eliza Effect

The Eliza effect can be traced back to the creation of ELIZA in 1966, a chatbot and an early application of natural language processing. ELIZA mimicked the conversational patterns of users, which created the illusion that it understood more than it actually did. This prompted some people to perceive ELIZA as having human traits like emotional complexity and understanding, thus placing their trust in an otherwise simple computer program.

ELIZA was developed by Joseph Weizenbaum, an MIT professor and one of the first AI researchers in the United States. Before ELIZA, programs that could understand language with complexity and nuance, and hold a cogent, fluent conversation without fixed questions and answers, were beyond the processing power of 20th-century computers. Weizenbaum explored ways to make computers more sophisticated and human-like, able to perform tasks related to perception and reasoning, which gave way to programs like ELIZA.

 

ELIZA Simply Mirrored Users’ Language

As a chatbot, ELIZA interacted with users in typed conversations. It worked by recognizing keywords in a user’s statement and then reflecting them back in the form of simple phrases or questions, reminiscent of a conversation one would have with a therapist. If the program didn’t quite understand what the person said, or how to respond to it, it would fall back on generic phrases like “that’s very interesting” and “go on.”

This was a clever workaround to the language problem AI researchers had been facing. ELIZA didn’t have any specialized training or programming in psychotherapy. In fact, it didn’t know much of anything. But its generic text outputs simulated understanding by mirroring users’ language back at them.
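To make the mechanism concrete, here is a minimal, illustrative sketch of an ELIZA-style responder in Python. It is not Weizenbaum’s original program; the keyword rules, reflection table and fallback phrases below are simplified assumptions chosen only to show the match-and-reflect idea.

```python
import random
import re

# Minimal ELIZA-style responder (illustrative sketch, not Weizenbaum's script).
# It scans the input for a keyword pattern, reflects the user's own words back
# as a question, and falls back on canned phrases when nothing matches.

REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.I), "Tell me more about your {0}."),
]

FALLBACKS = ["That's very interesting.", "Please, go on.", "How does that make you feel?"]


def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the phrase reads back naturally."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())


def respond(statement: str) -> str:
    """Return a keyword-based reflection of the input, or a generic fallback."""
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(reflect(match.group(1)))
    return random.choice(FALLBACKS)


if __name__ == "__main__":
    print(respond("I feel anxious about my work"))
    # -> "Why do you feel anxious about your work?"
```

Feed it a statement like “I feel anxious about my work” and it answers “Why do you feel anxious about your work?”, which is roughly all ELIZA did, just with a much larger set of hand-written rules.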

Then, something strange began to occur.

 

But Users Attached Much More Meaning to Responses

Despite being told that ELIZA was a machine, participants still had a strong sense of what Weizenbaum called a “conceptual framework,” or a sort of theory of mind, behind the words generated by ELIZA.

Over and over again, Weizenbaum saw people readily disclose intimate details about their lives to ELIZA, which would, in turn, respond in a way that coaxed them to continue. “[Weizenbaum’s] participants started having a sense that there was a massive intelligence behind this relatively simple, rule-based system that he had created,” Mitchell said, even “to the point where they would say that they wanted to be able to speak to the machine in private.” 

Weizenbaum didn’t intend for ELIZA to be used as therapy, and he didn’t like the effect the bot was having on its users, fearing they were recklessly attributing human feelings and thought processes to a simple computer program. In the following years, he grew to be one of the loudest critics of the technology he once championed and helped to build. He railed more broadly against what he perceived as the eroding boundaries between machines and the human mind, calling for a “line” to be drawn “dividing human and machine intelligence,” with systems like ELIZA falling squarely on the machine side.


The Eliza Effect Today

ELIZA had a profound and lasting effect on the man who created it, on its users and on the field in which it was created. Now, more than half a century later, the Eliza effect continues to echo throughout the AI industry.

As artificial intelligence subfields like machine learning and deep learning have continued to advance along with the rise of the internet (and the mountains of data it produces), computers are now flexible enough to learn — and even generate — natural language on their own. By analyzing the vast array of language online using a neural network, a modern AI model can learn far faster than it would by simply being programmed one step at a time, like back in Weizenbaum’s day.
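A toy sketch can make the contrast with ELIZA’s hand-written rules concrete. The snippet below (an invented example, not any production system) learns a tiny bigram model: it counts which word follows which in a small sample of text and uses those counts to predict the next word. Modern models replace the counts with neural networks trained on billions of words, but the underlying idea of learning statistical patterns from data rather than being programmed rule by rule is the same.

```python
import random
from collections import Counter, defaultdict

# Tiny stand-in for "text gathered online" (invented purely for illustration).
corpus = (
    "the chatbot seems smart . the chatbot seems friendly . "
    "the assistant seems helpful . people trust the chatbot ."
).split()

# Learn from data: count which word follows which (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1


def predict_next(word: str) -> str:
    """Sample a likely next word based on counts observed in the corpus."""
    candidates = following.get(word)
    if not candidates:
        return "."
    words, counts = zip(*candidates.items())
    return random.choices(words, weights=counts, k=1)[0]


# Generate a short continuation one predicted word at a time.
text = ["the"]
for _ in range(6):
    text.append(predict_next(text[-1]))
print(" ".join(text))  # e.g. "the chatbot seems smart . the assistant"
```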

Examples of the Eliza Effect

  • Attributing human traits such as genders and personalities to AI voice assistants
  • Believing that a text-based chatbot has real, human emotions
  • Falling in love with text-based chatbots
  • Feeling gaslighted or insulted by a text-based chatbot

These advances have led to impressive gains in conversational AI. While these machines still don’t technically understand language and concepts in the same way humans can, they have grown more convincingly human-like, which in turn has complicated the way we humans interact with and perceive them. For example, research suggests that people generally regard virtual assistants like Siri and Alexa as something between a human and an object. They are personified regularly, and often even assigned a gender, despite the fact that the machines themselves are clearly not human.


 

The Eliza Effect and Discourse on Sentient AI

Meanwhile, the growing sophistication of text-based chatbots has bred an immense amount of discourse regarding not just the level of intelligence of this technology, but its very sentience. Perhaps most famous is the incident with Blake Lemoine, a former AI engineer at Google who publicly declared that one of the company’s large language models, LaMDA, had come to life. And interactions with Replika, a chatbot advertised as the “world’s best AI friend,” have been so convincingly real that ordinary users reported falling in love with it.

The internet also continues to be inundated with people’s conversations with ChatGPT and the new Bing chat feature, some of which are so realistically human that users have questioned whether the model has achieved some level of consciousness. Kevin Roose, a tech columnist at the New York Times, described a two-hour conversation he had with Bing in 2023 that left him so unsettled he had trouble sleeping afterward; at one point, he said, the chatbot even declared its love for him. Others have reported that the chatbot has gaslighted and insulted them in arguments.

To be sure, OpenAI has always been quite clear that ChatGPT is a chatbot, and the realities of it simply being a highly sophisticated next-word prediction engine have been widely publicized. Still, when a person engages with ChatGPT, it’s not uncommon for them to slip into the feeling that they are chatting with a fellow human being.

“People think of computers as being these super intelligent beings. They are smarter than us, they know far more facts than us, and we should trust them to make decisions for us. But they’re just pattern finders,” Smith, the author, said. They simply predict the next words to say. “They don’t have common sense, they don’t have critical thinking skills, they don’t have wisdom.”
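Smith’s description of chatbots as next-word predictors is easy to demonstrate with an open model. Assuming the Hugging Face transformers library and the small GPT-2 model are available (an assumption for illustration; the article itself names no specific tooling), a few lines are enough to watch a model extend a prompt by sampling likely next tokens:

```python
from transformers import pipeline

# Load a small open model. It continues text by predicting likely next tokens;
# there are no beliefs, goals or understanding behind the words it produces.
generator = pipeline("text-generation", model="gpt2")

prompt = "I feel like this chatbot really understands"
completions = generator(prompt, max_new_tokens=12, num_return_sequences=3, do_sample=True)

for completion in completions:
    print(completion["generated_text"])
```

Each run yields different, fluent-sounding continuations, which is precisely the behavior that invites the Eliza effect even though the model is only extending statistical patterns.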

As adoption of conversational AI in areas like customer service, marketing and more continues to grow, it will probably get more difficult to discern whether the entity on the other end of a text exchange is a human or not. And the feelings reported by Lemoine, Roose and others will likely become even more common as more sophisticated chatbots enter the market, especially since OpenAI is continuing its pursuit of artificial general intelligence.


Why Does the Eliza Effect Happen?

From a psychological perspective, the Eliza effect is essentially a form of cognitive dissonance: a user knows that a computer program has limited capabilities, yet behaves toward it, and interprets its outputs, as if it doesn’t. Because the machine mimics human intelligence, the person believes it is intelligent.

Our propensity to anthropomorphize does not begin and end at computers. Under certain circumstances, we humans attribute human characteristics to all kinds of things, from animals to plants to cars. It’s simply a way for us to relate to a particular thing, according to Colin Allen, a professor at the University of Pittsburgh who focuses on the cognitive abilities of both animals and machines. And a quick survey of the way many AI systems are designed and packaged today makes it clear how this tendency has spread to our relationship with technology.

“Rather than just relying on people’s tendency to do this, [technology] is being designed and presented in ways that encourage us,” he told Built In, adding that it’s “all part of keeping our attention” in the midst of everything else. “You want people to feel like they’re in some sort of interesting interaction with this thing.”

Think about it: Companies will design robots to be cute and childlike in an effort to make people more comfortable around them. Groundbreaking creations like the Tesla robot and Hanson Robotics’ Sophia are built to look like humans, while others are designed to act like humans. And the vast majority of AI voice assistants on the market today have human names like Alexa and Cortana. Watson, the supercomputer created by IBM that won a game of Jeopardy! in 2011, was named after the company’s founder Thomas J. Watson. Even ELIZA itself was named after Eliza Doolittle, the protagonist in George Bernard Shaw’s play Pygmalion.


The humanization of tech goes even further when there is dialogue involved. In fact, one could argue that you can’t really have the Eliza effect without it. Having a back and forth with a system “gives us the illusion of intelligence,” according to Mitchell, of Hugging Face.

“We’re basically evolved to interpret a mind behind things that say something to us. And then, if we engage with it, we’re now entering into this co-constructed communication, which requires that we have a ‘co’ — that there is something on the other side of it,” she said. “We are pulled into the illusion, or pulled into this sense of intelligence, even more when we are in dialogue. Because of the way our minds work, and our cognitive biases around how language works.”


Potential Dangers of the Eliza Effect

While the Eliza effect allows people to engage with technology in a more nuanced way, this phenomenon does come with some negative consequences. For one, the overestimation of an AI system’s intelligence can lead to an excessive level of trust, which can be quite dangerous when that system gets things wrong.

 

The Spread of Disinformation

Indeed, ChatGPT and other sophisticated chatbots regularly put out false information. But that information is packaged in such eloquent, grammatically correct prose that it’s easy to accept it as truth. Experts call these instances “hallucinations,” and they can be a big problem when a user already attributes a high level of intelligence and real-world understanding to the system.

We’ve already gotten a taste of what this can do. CNET, a popular source for tech news, was forced to issue a string of major corrections after it used an AI tool to write articles that turned out to be riddled with errors. And other experiments with ChatGPT have revealed that it is surprisingly good at mimicking the verbiage and tone of disinformation and propaganda put out by countries like China and Russia, as well as partisan news outlets. As this technology and others continue to improve, it could be used to help spread falsehoods across the internet to trusting consumers at unprecedented scale.

 

A Means of Manipulation

Beyond just run-of-the-mill disinformation and misinformation, the Eliza effect can also be a very powerful means of persuasion. If someone attributes an outsized amount of intelligence and factuality to a particular chatbot, for example, they are more likely to be persuaded by its words, Mitchell said. This can be quite a powerful tool, depending on what person, company or even government is in control of that chatbot.

“It’s a way to very effectively manipulate people,” she continued. “And then tie this to how the conversations might be tracked and the different information that can be gleaned about a person, now it’s manipulation that’s informed by, potentially, personal information about someone. So it’s all the more effective.”


Preventing this may not be easy, particularly as AI systems continue to get more sophisticated. The conversational abilities of artificial intelligence are only improving, which means the Eliza effect isn’t likely to be going anywhere any time soon. Therefore, it’s on all of us to continue to grow and adapt along with the technology, said Allen, the University of Pittsburgh professor.

“What’s needed is a more critical mindset by everybody. And by ‘everybody’ I mean the people who deploy the systems and the people who use the systems,” he said. “It starts with the developers and it ends with the users.”

 

Frequently Asked Questions

What is the Eliza effect?

The Eliza effect is when someone falsely attributes human-level intelligence, understanding and emotions to an AI system.

Where did the Eliza effect come from?

The Eliza effect originates from the ELIZA chatbot created in 1966, which mirrored users’ language and created an illusion of deeper understanding than was actually present.

Does artificial intelligence have emotions?

Artificial intelligence does not have emotions, but can mimic human empathetic responses to create a false sense of emotion.
