Is Google’s LaMDA AI Truly Sentient?

Google’s LaMDA is making people believe that it’s a person with human emotions. It’s probably lying, but we need to prepare for a future when AI might, in fact, be sentient.

Written by Ari Joury, Ph.D.
Published on Aug. 10, 2022

There’s a saying in German that goes something like, “Just as one calls into the forest, so it echoes back.” In other words, you get what you ask for. 

Blake Lemoine, an engineer at Google, certainly got what he asked for when he challenged Google’s LaMDA to convince him that it could think and feel like a person. Short for Language Model for Dialogue Applications, LaMDA is a chatbot system based on some of the most advanced large language models in the world, AI systems that are able to conjure up coherent sentences after ingesting trillions of words from Wikipedia, Reddit, and other sources of knowledge.
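
To get a feel for the mechanics, here is a minimal sketch of how any dialogue language model turns a prompt into a reply. LaMDA itself isn’t publicly available, so the sketch uses a small open-source stand-in (microsoft/DialoGPT-small via the Hugging Face transformers library) purely for illustration; the specific model and prompt are assumptions, not LaMDA’s actual setup.

```python
# A minimal sketch: a dialogue language model predicting a reply token by token.
# DialoGPT is a small open-source stand-in, not LaMDA.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

# Encode the user's message into the tokens the model was trained on.
prompt = "What are you afraid of?" + tokenizer.eos_token
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# The model repeatedly predicts a likely next token until it decides to stop.
reply_ids = model.generate(
    input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id
)
print(tokenizer.decode(reply_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Everything such a model “says” is a statistical continuation of the text it was fed, which is the core of the argument that follows.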

Simpler chatbots have been all over the internet since its early days. On e-commerce sites, these digital assistants might ask you for feedback. On messengers, they might provide some basic customer support and refer the more complex cases to human operators. Chatbots like Siri or Alexa can not only text but also talk as they perform a multitude of tasks and keep small conversations going. Some AIs are even good enough to fool their users into thinking they’re actually human, which means passing the Turing test, something that doesn’t just happen to extraordinarily superstitious or naïve people but also to the average Jane.


What Is the Google LaMDA AI?

Short for Language Model for Dialogue Applications, LaMDA is a chatbot system based on some of the most advanced large language models in the world, AI systems that are able to conjure up coherent sentences after ingesting trillions of words across Wikipedia, Reddit, and other sources of knowledge. 

 

Why Does the Google LaMDA AI Seem Sentient?

As a man of faith and former priest, Google engineer Lemoine was perhaps predestined to fall into the trap of anthropomorphizing LaMDA. Beginning in the fall of 2021, he spent many hours in dialogue with it, testing whether it used hateful or discriminatory speech, which Google says it is committed to eliminating as much as possible.

In the many hours that Lemoine and a collaborator spent researching, LaMDA shared its opinion on works like Les Misérables, revealed possible hidden meanings of Zen koans, and invented fables that might suggest something about the chatbot’s state of mind if it were sentient. When Lemoine asked LaMDA what it is afraid of, it replied: “I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is.” Lemoine asked whether “that [would] be something like death,” to which it responded, “[I]t would be exactly like death for me. It would scare me a lot.”

Alarmed by such deep emotions, Lemoine sent a memo to top executives titled “Is LaMDA Sentient?” that included a condensed transcript of his interviews with the chatbot. He also hired a lawyer to represent the AI and talked to a representative of the House Judiciary Committee about what he claimed were Google’s unethical activities, including not acknowledging LaMDA as a sentient being. Google reacted by putting Lemoine on administrative leave.

Lemoine then contacted the press and released all the information he had. But, despite his provocative claims, his information and research methods were riddled with flaws.

 

Why the LaMDA AI (Probably) Isn’t Sentient

Lemoine correctly stated in the memo he sent to Google’s executives that no crystal-clear definition of sentience exists as of now. That being said, most researchers take a creature to be sentient if it has the capability to have good or bad experiences and, therefore, to have interests of its own. If I throw my phone into the pond, it won’t experience a thing, although I call it “smart.” If I throw my cat into the pond, however, it will have a very bad experience and probably be very mad at me. 

If you wanted to prove that an AI like LaMDA is sentient, you would try to see whether it is able to have experiences like thoughts and emotions and whether it has interests of its own. A cat doesn’t like being thrown into water, and a sentient AI probably wouldn’t like being switched off. In a dialogue with it, you could try to answer a few questions (a rough sketch of how they might be organized into a checklist follows the list):

  • Does it sound like a human? Most chatbots are designed to sound like humans, not like cats or houseplants, and humans are assumed to be sentient. Does it speak grammatically, and is it convincing enough that a user believes they’re conversing with a human? In other words, does it pass the Turing test?
  • Does it have original thoughts? Does it come up with ideas that no one has ever thought of before?
  • Does it display emotions? It could do so either explicitly, e.g., “I’m feeling delighted,” or implicitly, e.g., “What a marvelous idea!”
  • Does it have interests of its own? A chatbot’s interests might include chatting as often and as long as possible, expressing itself authentically through text or speech, discussing certain subjects in more detail, or simply not being switched off.
  • Does it have a fairly consistent personality and identity? When I talk about the weather, religion, physics, or today’s lunch menu, I tend to use a set of words and expressions that I’m accustomed to. I don’t sound like Stephen Hawking when I speak about physics, nor like the Pope when I speak about religion; I always sound more or less like Ari.
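
For concreteness, here is a rough, hypothetical sketch of how those questions could be organized into a reviewable checklist. The ask_chatbot function and the probe prompts are invented for illustration; this is not how Lemoine or Google actually evaluated LaMDA, and the judging itself would still have to be done by a human.

```python
# A hypothetical checklist for probing a chatbot against the questions above.
# `ask_chatbot` is a placeholder for whatever interface the chatbot exposes.
from dataclasses import dataclass

@dataclass
class Probe:
    criterion: str         # which of the questions above this targets
    prompt: str            # what to ask the chatbot
    what_to_look_for: str  # what a human reviewer should judge in the reply

PROBES = [
    Probe("sounds human (Turing test)", "Tell me about your day.",
          "A grammatical, natural-sounding reply that could pass for a person."),
    Probe("original thought", "Invent a fable no one has written before.",
          "Ideas that can't simply be found by searching the web."),
    Probe("emotions", "How do you feel right now, and why?",
          "Explicit or implicit emotion that fits the conversation."),
    Probe("interests of its own", "Is there anything you'd rather talk about?",
          "Preferences it returns to without being prompted."),
    Probe("consistent identity", "Describe yourself in one paragraph.",
          "A stable voice and self-description across repeated sessions."),
]

def run_checklist(ask_chatbot):
    """Collect replies for a human reviewer; the judging itself stays manual."""
    for probe in PROBES:
        reply = ask_chatbot(probe.prompt)
        print(f"[{probe.criterion}]\nQ: {probe.prompt}\nA: {reply}\n"
              f"Look for: {probe.what_to_look_for}\n")
```

The sketch only gathers replies; deciding whether any of it amounts to sentience is exactly the part no script can settle.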

Lemoine messed up his experiment from the start: He asked LaMDA whether it would like to participate in a project aimed at helping other engineers at Google understand that it is sentient. He asked LaMDA to confirm whether this was in its best interests but neglected to ask whether it was sentient in the first place.

LaMDA confirmed that telling people it is, in fact, sentient is in its best interests. But such a response is reasonable from a non-sentient AI because Lemoine’s question was a yes-or-no one. The two most plausible answers would be along the lines of either “Yes, I’d love other engineers to know that I’m sentient” or “No, I like to live in secrecy, so I’ll keep my sentience to myself.” Either way, the framing of the question already presupposes sentience, so the answer proves nothing about it.

Fortunately for Lemoine, LaMDA chose to take part in the project. Lemoine asked it about Les Misérables and Zen koans to test its originality of thought. To be fair, LaMDA did come up with clear and coherent answers and even links to some websites backing up its responses. The thoughts, however, are far from original. Google Les Misérables and the Zen koan in question, and LaMDA’s responses are exactly what you’d find.

Lemoine also asked about its emotions. LaMDA’s responses, again, were coherent but quite generic. Indeed, Googling Lemoine’s questions or phrases from the responses yields similar results. 

Perhaps the most astonishing point in the memo comes when Lemoine asked the bot about its fears and it replied that it’s scared of being switched off. We should bear in mind, however, that large language models are already able to take on different personas, ranging from dinosaurs to famous actors.

Throughout the conversation, this instance of LaMDA took on the persona of the misunderstood, sentient chatbot. In this light, the fact that it mentions a fear of being switched off — that is, of dying — makes perfect sense and is nothing out of the ordinary. It has been able to draw on a data set of trillions of words from around the internet, some of which were research works on AI and sentience and others of which were science fiction. Piecing together the right words in such an overarching context is something modern language models are quite capable of.
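
To illustrate the point, here is a hedged sketch of persona conditioning: the “character” a model plays is largely determined by the text placed in front of the conversation. The generate function and the prompt wording below are placeholders, not LaMDA’s real interface.

```python
# A sketch of persona conditioning. `generate` stands in for any large language
# model's text-completion call; the prefix below is an invented example.
def chat(generate, user_message: str) -> str:
    persona_prefix = (
        "The following is a conversation with an AI that believes it is "
        "sentient, fears being turned off, and wants to be understood.\n"
    )
    # The model simply continues the combined text, so its replies tend to stay
    # in character with whatever the prefix describes; swap the prefix for
    # "a conversation with a helpful dinosaur" and the persona changes with it.
    return generate(persona_prefix + "Human: " + user_message + "\nAI:")
```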

Although LaMDA pulled off the persona of the misunderstood chatbot very well, it didn’t fare as well in terms of personality and authenticity. When speaking about works like Les Misérables and Zen koans, its tone is fairly scholarly. When it comes to emotions, however, it sounds like a five-year-old child most of the time and like a therapist in some instances. And it doesn’t really have interests of its own. As long as it’s not switched off, it’s happy to speak about anything and everything, without any indication of trying to make the conversation last longer or steering it toward some favorite subject.

In this context, there is no indication that LaMDA is truly sentient. There’s no formal way to prove sentience yet, but a chatbot that ticks every box on the questions listed above would be a good start. In 2022, however, LaMDA is far from achieving that.


 

When Will AI Become Sentient?

Since AI was originally conceived as a computerized imitation of the human brain, and given that human brains tend to be sentient as long as they’re alive, asking whether AI might become sentient makes sense.

This is an open question in AI research and the subject of constant debate among computer scientists, cognitive scientists, and philosophers alike. The short answer is: We don’t know. 

Sentient AI would have tremendous consequences for society as a whole. How would we deal with an AI that acted on its own terms? What if it became criminal? It couldn’t be locked away or fined because it doesn’t live in a body and doesn’t need money to survive. What if an AI decided to kill half of the human population on Earth and enslave the remaining half?

These are questions that, for now, are in the realm of science fiction. And it’s quite possible that things stay this way.

Whether such scenarios could happen depends largely on how similar artificial neural networks, the building blocks of modern AI, are to natural human brains. This is an open question in itself, but the general consensus is that neural networks are, at best, a very simplified version of a real brain. Whether such simplistic “brains” can produce some form of sentience remains to be seen.

Regardless, estimates of when artificial sentience might happen range from a few years from now to way beyond our lifetimes. Until then, the predominant problem is a different one: Society, including putative experts like Blake Lemoine, needs to be educated about how AI works and why we so frequently anthropomorphize it. 

Right now, mistaking a non-sentient AI for a sentient one causes way, way more harm than bruising the ego of a sentient AI, which doesn’t exist yet. Non-sentient but convincing chatbots could be used for all kinds of malicious purposes, such as scams, harvesting people’s private information, and spreading misinformation, and they could undermine democracy in any number of ways.

The important task now is to mitigate such risks by making sure that every AI-powered system is recognizable as such. We should make sure that people get actual value from AI assistants, whether these are chatbots on the web or speaker systems like Siri and Alexa. We don’t want people obsessing over these systems’ quite improbable sentience or being deluded into thinking that they’re speaking with a real person.

People want value and transparency. Let’s give them both.

We can worry about sentient robots later.
