Twelve years ago, in 2008, I was giddy with excitement as I unboxed a package from Hong Kong containing a small robot dinosaur. Modeled after a baby camarasaurus, the animatronic toy had green, rubbery skin, a large head, and big eyes. Pleo was the latest and greatest in robot pets and from the moment I heard about it, I was eager to purchase one. The robot could move in a fairly lifelike way, blinking its eyes, craning its long neck, and wagging its tail. It could walk around the room and grab a plastic leaf in its mouth. The little dinosaur had a camera-based vision system in its snout, microphones, infrared detection, and force feedback sensors that let it respond to sound and touch and react to its environment. For example, if Pleo encountered a table edge, it would sense the drop-off and lower its head, pull in its tail, and start backing up, whimpering pitifully.

Created by the (now bankrupt) company Ugobe, Pleo was pitched as a “lifeform” that went through different development stages and had an individual personality that was shaped by its experiences. My upstairs neighbor Killian and I got one at the same time. I named mine Yochai and watched it go from newborn to “child,” petting it a lot to see what that would do compared to Killian’s less spoiled Pleo, Sushi. They both responded to our touch with the same pleasing noises and movements, but they did seem to be different. Once they could walk, Sushi started exploring more, and Yochai would complain when left alone. The programming generally left a lot to our imaginations, and it was fun to watch their behavior and try to guess what the robots were doing and why.

When I showed off the Pleo to my friend Sam, I told him about the built-in tilt sensor that could detect the robot’s positioning in space and urged him to lift the robot up by its tail. “Hold it up and see what it does!” I said excitedly. Sam complied, gingerly grasping the wagging tail and dragging the dinosaur up off the floor. As the robot’s tilt sensor kicked in, we heard the Pleo’s motors whir and watched it twist in its rubber skin. It squirmed and shook its head, eyes bulging. After a second or two, a sad whimper of distress floated out of its open mouth. Sam and I gazed at the machine with fascination, observing its theatrics. The Pleo’s calls became louder. As Sam continued holding it, and the robot began to cry out with more urgency, I suddenly felt my curiosity turning into gut-wrenching empathy. “OK, you can put him down now,” I told Sam, punctuated with a nervous laugh meant to conceal the rising panic in my voice.

There was no reason for me to panic, and yet I couldn’t help myself: as soon as Sam placed the Pleo back on the table, and it hung its head in feigned distress, I started petting it, making comforting sounds. Sam did the same. This time, I wasn’t touching it to test or figure out its programming — I was actually trying to make it feel better. At the same time, I felt kind of embarrassed about my behavior. I knew exactly what the baby dinosaur robot was programmed to do when dangled in the air. Why was I feeling so agonized?

The incident sparked my curiosity. Over the course of the next several years, I discovered that my behavior toward the Pleo was more meaningful than just an awkward moment in my living room. I tore through all the research on human-computer interaction and human-robot interaction that showed how most people, not just me, treated robots like living things, and I became fascinated with one particular aspect of our anthropomorphism: the way it triggers our empathy.

Empathy for Robots

In 1999, Freedom Baird bought a Furby. Furby was an animatronic children’s toy co-created by the inventor Caleb Chung, who would later design the Pleo. It had a big head, big eyes, and looked kind of like a fluffy owl crossed with a gremlin. Furby became a smashing success in the late 1990s, with over 40 million units sold during the first three years. One of the toy’s main features was simulating language learning. Furbies started out speaking “Furbish,” a made-up gibberish language, and gradually replaced the Furbish with English over time (or one of the other languages they were sold in). Their fake learning ability was so convincing that the United States National Security Agency banned Furbies from their premises in 1999, concerned that the toy might pick up and repeat classified information. (The ban was eventually withdrawn when they learned that Furbies had no actual capacity to record or learn language.)

But the Furby capability that caught Baird’s interest wasn’t the language learning simulation. Baird was a grad student at the MIT Media Lab, working on creating virtual characters, and she became interested in what the Furby did when upside down. Like the Pleo, her Furby could sense its direction in space. Whenever it was turned over, her toy would exclaim, “Uh oh” and “Me scared!” On a 2011 Radiolab podcast, she described how this made her feel so uncomfortable that she would hasten to turn the Furby back upright whenever it happened. She said it felt like having her chain yanked.

On the same podcast, the hosts performed a little experiment that Baird had suggested. They invited six children, about seven to eight years old, into the studio and presented them with three things: a Barbie, a live hamster, and a Furby. The hosts told the children to hold each thing upside down and timed how long it took for them to feel uncomfortable enough to want to set the object or animal back down. The children were able to hold the Barbie doll in the air seemingly forever (although their arms got tired after about five minutes). The live hamster was a very different experience. While trying to hold the squirming creature, one of them exclaimed in dismay, “I don’t think it wants to be upside down!” They all placed the hamster back upright nearly immediately, holding it up for only eight seconds on average. “I just didn’t want him to get hurt,” one of them explained. After the hamster came the real test: the hosts asked the children to hold Furby upside down. As expected, Furby was easier for them to hold than the hamster, and yet the children held it for only about a minute before setting it back upright.

When asked why they set Furby down more quickly than Barbie, one kid said, “Umm, I didn’t want him to be scared,” another, “I kind of felt guilty in a sort of way,” and another, “It’s a toy and all that, but still....” The show host asked them whether they thought that Furby experienced fear in the same way that they did. They answered a mix of yes and no, and some of them said they weren’t sure. “I think that it can feel pain ... sort of,” said one child. Their answers suggested that they were struggling to reason about their own behavior. “It’s a toy, for crying out loud!” but also, “It’s helpless.” One child said holding it upside down “made me feel like a coward.”

It’s tempting to dismiss these children as naive and confused about whether Furby can actually feel. But why did adults like Freedom and me, who understand that upside-down robots can’t experience fear or pain, have the same response as the children? Despite our rational brains telling us “it’s a toy, for crying out loud,” inflicting simulated distress on them still seemed wrong, and it made us empathize with the robots. While it seemed irrational to me to empathize with a non-feeling machine, I also knew that empathy — how we feel and react toward another individual’s emotional state — is a key component of our social interactions and psychology. I started wondering what it would mean for our future with social robots if people were this uncomfortable with simulated pain.

In 2012, I received a message from an old high school acquaintance, Hannes Gassert. He was one of the main organizers for an event called Lift in Geneva, Switzerland, and wanted to know if I would be willing to give a workshop at their next conference. Gassert was a class above me in high school and thus, according to the laws of the universe, much cooler than me. I said yes immediately. On the phone, we talked at length about empathy for robots, and he was so interested in the topic that we decided to brainstorm and run a workshop together.

We bought five Pleos, the same type of baby dinosaur robot I had at home. On the afternoon of the workshop, about 30 unsuspecting conference participants showed up in our room, all roughly between the ages of 25 and 40. We divided them into five groups and gave each group a Pleo. Then, we gave the groups some time to play with the dinosaur robots and explore what they could do. People immediately busied themselves with petting them, trying to feed them plastic leaves, and observing and remarking on the robots’ behaviors. We heard some awwws and squeals of delight as they realized how the Pleos responded to touch. We told each group to give their respective robots a name.

Next, we wanted them to personalize them more, so we distributed some art supplies, like pipe cleaners and construction paper, and told the groups to dress up their Pleos for a fashion contest. The dinosaur fashion show turned out to be a big hit: all the groups put effort into the challenge, creating hats and makeshift garments for the robots, trying to set them apart. Everyone clapped and giggled as the robots paraded around in their pipe cleaners. (It was too hard to choose a contest winner, so we let all the Pleos tie for first place.)

At this point, we were about 45 minutes into the workshop, and things were going well. Hannes and I hadn’t been sure in advance whether people would be engaged and want to play with the robots for such a long time, but everyone was having a lot of fun with their dinosaurs. It was time to execute our main plan. We announced an impending coffee break and told the groups that, before they left, they needed to tie up their Pleos to make sure they didn’t escape while they were gone. Some of the participants protested, but the groups took the pieces of rope we offered them and leashed their robots to the table legs. Then we shooed everyone out of the room.

When our participants returned after the break, we told them we had some bad news. The robots had been naughty and had tried to escape while everyone was away, so they needed to punish the robots for their unacceptable behavior. The group members looked at each other, some giggling, unsure of what to do. Some of them gently scolded their robots. No, we told them, that wasn’t enough. The robots needed corporal punishment. The participants erupted in protest.

When we kept insisting, some of the participants softly tapped their robots on the back, hoping to satisfy us. None of them wanted to hit the Pleos with force. We assured them that it was fine if the robot toys got damaged, but that didn’t seem to be the problem. One participant protectively swept her group’s Pleo into her arms so that nobody could strike it. Another person crouched down and removed her robot’s batteries. We asked her what she was doing, and she told us, somewhat sheepishly, that she was trying to spare it the pain.

Eventually, we removed a tablecloth from the bench we had set up during the break. Carefully laid out on the surface were a knife, a hammer, and a hatchet. It dawned on everyone what was going to happen: the purpose of our workshop was to “torture and kill” the robots. More protest ensued. People winced, moaned, and covered their eyes in dismay, all while giggling at their own reactions. Some of them crouched protectively over the robots.

Hannes and I had anticipated that some people wouldn’t feel comfortable hitting the robots, but we had also assumed that at least some of our participants would take the position that “it’s just a robot, for crying out loud.” Our original plan was to see whether that initial split of people changed if we ratcheted up the violence. Instead, everyone in the room absolutely refused to “hurt” the robots.

Once we had revealed the destruction tools, we improvised and made them an offer: someone could save their own team’s robot by taking a hammer to another team’s robot. The room collectively groaned. After some back and forth, a woman agreed to save her group’s Pleo. She grabbed the hammer and placed one of the other Pleos on the ground in front of her. Everyone stood in a circle and watched while Hannes and I egged her on. She was smiling, but at the same time very hesitant, moving her body back and forth as if to steel herself. When she finally stepped forward to deliver the blow, she stopped mid-swing, covering her eyes and laughing. Then she leaned down to pet the mechanical dinosaur. Once she stood up to try again, she decided she couldn’t do it.

After her attempt, Hannes and I threatened to destroy all the robots unless someone took a hatchet to one of them. This caused some hemming and hawing in the room, but the idea of losing all of them was too much. One of the participants volunteered to sacrifice his group’s Pleo. We gathered around him as he grasped the hatchet. He lifted it and swung it at the robot’s neck, while some people covered their eyes or looked away. It took a few bludgeons until the dinosaur stopped moving. In that instant, it felt like time stopped. Hannes, noting the pause, suggested a black humor moment of silence for the fallen robot — we stood around the broken Pleo in a quiet hush.

The Pleo workshop wasn’t science and we couldn’t draw too many conclusions from an uncontrolled environment. But the intensity of the social dynamics and collective willingness to suspend disbelief in that workshop room made me even more curious about our empathy toward robots, inspiring some later research that I did with my colleagues at MIT. But before I get to that, let me explain why I became so interested in empathy for robots. It wasn’t just about how and why people feel it. I also wanted to know whether projecting life onto robots could make us feel that they deserve moral consideration. In other words, could our empathy for robots lead to robot rights?

In February of 2015, robotics company Boston Dynamics released a video clip introducing Spot, a distinctly doglike robot. In the video, some of the engineers kick Spot, and the robot scrambles hard to stay on all four legs. Everyone in robotics was impressed by the machine’s ability to course-correct and stay upright, but as the video spread more widely, other people took to the internet to express discomfort and even dismay over Spot’s treatment. “Kicking a dog, even a robot dog, just seems so wrong,” they said. CNN reported on the supposed scandal with the headline “Is It Cruel to Kick a Robot Dog?” Websites and memes popped up that jokingly advocated for “robot rights,” using a slow-motion video of Spot getting kicked. The public commotion even compelled well-known animal rights organization People for the Ethical Treatment of Animals (PETA) to acknowledge the incident. PETA didn’t take it very seriously and dryly commented that they weren’t going to lose any sleep over it because it wasn’t a real dog. But while PETA wasn’t interested in this question, I was.

Right after Hannes and I spent that moment in silence with our workshop participants, the tension in the room lifted. We had a lively conversation with the group about their experience and engaged them in a discussion of whether we should treat Pleos with kindness. They expressed that they personally felt uncomfortable “mistreating” them, but when we asked them whether robots should be given legal protection from “abuse,” most of them said no, that would be completely ridiculous. Their pushback struck me as remarkably similar to what the early Western animal rights movement encountered: people were on board with the idea that it felt wrong to be cruel to animals, but they balked at creating legal rules because that would be going too far. What precedents would that set?

Besides, everyone in our workshop agreed that the robot was just an unfeeling machine. Later, Hannes and I looked at the photos we had taken during the moment of destruction. The expressions on their faces said otherwise.

Training Our Cruelty Muscles?

For years, science fiction has regaled us with stories of robot uprisings, many of which end up happening because the robots are mistreated by humans. But what if we reversed the narrative? Instead of asking whether the robots will come kick our butts, we could ask what happens to us when we kick the robots. Today, even though state-of-the-art technology amounts to crude toys and biologically inspired machines that are still only rough approximations of animal movement, people are already developing feelings about how these devices are treated. Violent behavior toward robotic objects feels wrong to us, even if we know that the “abused” object can’t experience any of it. And lifelike technology design is improving.

The video platform YouTube flags videos of animal abuse, including cockfighting, and removes them from its website. In 2019, YouTube also took down a bunch of robot-on-robot violence videos by accident. The competition videos, with robots that were trying to destroy each other, were flagged as including “deliberate infliction of animal suffering or the forcing of animals to fight.” When the creators complained, YouTube corrected the error and reinstated the videos. The incident made some news in the tech press, mostly because people thought it was funny. According to a spokesperson, the videos weren’t against YouTube’s policies. After all, this type of thing isn’t real violence ... right?

One of the most personally influential pieces I’ve read on robot rights was a 352-word blog post published by Wired in 2009. It was about a short-lived internet video trend where people set a toy called Tickle Me Elmo on fire. They would pour gasoline over the red animatronic children’s doll and film while Elmo burned, writhing and uttering its prerecorded laughter tracks. As the Wired writer, Daniel Roth, describes, “[the videos] made me feel vaguely uncomfortable. Part of me wanted to laugh — Elmo giggled absurdly through the whole ordeal — but I also felt sick about what was going on. Why? I hardly shed a tear when the printer in Office Space got smashed to bits. Slamming my refrigerator door never leaves me feeling guilty. Yet give something a couple of eyes and the hint of lifelike abilities and suddenly some ancient region of my brain starts firing off empathy signals. And I don’t even like Elmo. How are kids who grow up with robots as companions going to handle this?”

I don’t know about you, but I would feel extremely uncomfortable letting my small child watch a video of Tickle Me Elmo being doused with gasoline and set on fire. It’s not about the destruction of a “thing.” It’s about the fact that this “thing” is too easily interpreted as alive. Many children have an emotional relationship with Elmo and do not view him as an object. Could a video that features burning Elmo be classified as violent, even if Tickle Me Elmo can’t feel?

Or let’s say my child sees a robotic dog in the park and decides to run up and give it a big whopping kick in the head. The doglike device responds by struggling to stay on all four legs, whimpering and hanging its head. Like a lot of parents, I would definitely intervene because my child risks damaging someone else’s toy, but in this case, there’s also a reason to intervene that goes beyond respect for property. As social robots get better and better at mimicking lifelike behavior, I would want to dissuade my child from kicking or “mistreating” them, if only because I don’t want my child to learn that it’s OK to do it to living things.

It’s OK to kick a ball. What about robots? Some of the major companies that make virtual voice assistants have already had to respond to parent complaints that their voice interfaces teach children to be rude and bark commands instead of asking nicely. In response, both Google and Amazon have released opt-in features that encourage children to say the “magic word” when using the device.

Even if we forbid robot abuse in our children’s playgrounds, how should we feel about playgrounds for adults — for example, a nonfiction version of Westworld? Should we let people take out their aggression and frustration on human- and animal-like robots that mimic pain, writhing and screaming? Even for adults, the difference between alive and lifelike is muddled enough in our subconscious for a robot’s reaction to seem satisfyingly alive. And after all, they aren’t harming a real person or animal.

A few years ago, I had the opportunity to talk to Charlie Brooker, creator of the sci-fi show Black Mirror. When we landed on the topic of violence toward lifelike robots, I said, “We have no idea whether it’s a healthy outlet for violent behavior, or...” and Brooker finished my sentence for me, “...whether it just trains people’s cruelty muscles.”

* * *

What does the research on violence toward animals tell us? Is the historically popular argument that being cruel to animals makes for cruel people actually based on evidence? A lot of the original argument seems rooted in elitist assumptions about the barbaric behaviors of the working class. But the connection between cruelty toward animals and other forms of cruelty is persistent. It was what moved Henry Bergh, the founder of the American Society for the Prevention of Cruelty to Animals (ASPCA), to dedicate some of his efforts to fighting cruelty toward children. In fact, the issues of cruelty toward animals and cruelty toward children were so closely related around the turn of the 20th century that over half of all organizations against animal cruelty fought for humane treatment of children as well.

Today, abuse reporting in many states in the U.S. recognizes that animal abuse and child abuse are often linked: cross-reporting laws require social workers, vets, and doctors to report instances of animal abuse, which can then trigger a child abuse investigation. Violence toward animals has also been connected to domestic abuse and other interpersonal violence. A new field called veterinary forensics even aspires to link animal abuse to serious crimes. Some states in America let courts include pets in temporary restraining orders, and the 2018-enacted Pet and Women Safety (PAWS) Act in the United States commits resources toward housing the pets of domestic violence survivors, trying to address the problem that abusive partners will also abuse or kill pets that are left behind.

Unfortunately, even if we believe that cruel people are cruel to everyone and everything, this doesn’t tell us much about desensitization. The connection doesn’t mean that the animal abuse causes (or exacerbates) the violent behavior; it could just be an indicator. The research on whether children who abuse animals become more violent is mixed, and experts are divided. But what if we knew that abusing lifelike robots had a negative effect on people? We might start to regulate violent behavior toward these types of robotic objects, similar to a lot of the animal abuse protection laws we’ve put in place.

Our dinosaur workshop made me even more interested in this rights question, but, clearly, a lot of information was missing. I decided that, first of all, I needed to know whether we truly empathize with robots. Our workshop participants could have been driven by social dynamics, or hesitated because the robots were expensive (unlikely given what we observed, but who knows). A lot of the research in human-robot interaction, from the Roomba to military robots, reveals anthropomorphism that suggests empathy, but we are only at the very beginning of trying to understand our feelings for robots. So I conducted some research with Palash Nandy, a master’s student at MIT. Over a hundred participants came into our lab to hit robots with a mallet. Instead of a cute baby dinosaur, we chose something that people weren’t immediately drawn to: the Hexbug, a small, very simple toy that scuttles around like a bug.

Empathy is a difficult thing to measure. According to Clifford Nass and other researchers in human-computer interaction and human-robot interaction, self-reporting is unreliable, because if someone is nice to a machine during a study and you ask them afterward why, they’ll often look for a “rational” justification for their behavior. So in addition to asking them, we had our participants take a general psychological empathy test and then compared their scores to how long they hesitated before hitting a Hexbug. It wasn’t perfect, but if high-empathy people behaved differently than low-empathy people, it would at least suggest that their behavior had an empathy component to it.

Our results showed that people who scored high on empathic concern hesitated more (or even refused) to strike the bugs, especially when we personified the Hexbug with a name. This let us (cautiously) suggest that people’s behavior toward robots might be connected to their levels of empathy.

Our study was part of a handful of other emerging research on empathy for robots. One study asked people which robots they’d be most likely to save in an earthquake, and the participants reported feeling empathy for robots (even though there was no relationship to their empathy test scores). Other researchers made people watch robot “torture” videos and discovered that it caused them physiological distress. A brain study showed that the brain activity normally associated with empathy went up when people looked at pictures of human and robot hands having their fingers cut off. Confirming that robots don’t need to look humanlike for us to feel for them, another brain study suggested that people felt empathy for a robot vacuum cleaner that was being verbally harassed. Given the research, it seems safe to assume that people can genuinely empathize with robots. But that still doesn’t answer the bigger question: whether interacting with robots can change people’s empathy.

The problem with my approach to violence prevention and robot rights is that I’m not satisfied with intuition — I want our rules to be guided by evidence of whether and how robots can change our behavior. But while I firmly believe in evidence-based policy on this question, in some ways, it’s futile. First of all, we may never be able to fully resolve this through research and evidence. But more importantly, attempting to answer the desensitization question may not matter. We can sometimes shift our thinking, but, as we’ve seen with animals, our actual laws around how to treat robots may follow our emotional intuition, regardless of what the academics think.

* * *

So what is the sensible path forward with robots?

Animal researchers have already dealt with a similar question. For a long time, science dismissed anthropomorphizing animals as sentimental and biased. Anthropomorphism has been so controversial in the study of animals that it’s been called uncritical, naive, and sloppy, and even “dangerous” and an “incurable disease.” But the contemporary animal science community has developed a different view. For example, Dutch primatologist Frans de Waal has coined the term “anthropodenial,” and he argues that rejecting anthropomorphism actually hinders animal science. Even though the field agrees that anthropomorphism is flawed, researchers are increasingly discovering that dismissing it outright can also lead to mistakes.

Contemporary philosopher and animal rights proponent Martha Nussbaum argues in her book Upheavals of Thought: The Intelligence of Emotions that emotions are not actually “blind forces that have no selectivity or intelligence about them.” They are a valuable part of our thinking, she says, because they are able to teach us and help us evaluate what is important. Perhaps the best approach we can take toward robots is the same approach that some animal researchers have suggested we take with anthropomorphism in the natural world: to accept our instinctive tendency, to let it motivate us with awareness, to let our brains guide us in applying it appropriately, and to ask what we can learn from it. Despite the potential benefits of engaging with social robots, some people have argued that any empathy toward them is wasted narcissism, and that we will squander emotional resources that we should be putting toward human rights. It’s certainly true that all of us have limited time and energy, but I don’t completely buy the premise that empathy is spent like video game coins.

While compassion fatigue is real (studies have shown that people’s appetite for doling out charity can be easily overwhelmed), empathy isn’t necessarily zero-sum: parents love their second child as much as their first, with all of their hearts. Some of our work in human-robot interaction also suggests that less empathic people simply don’t care very much about anyone or anything, while empathic people are more likely to be kind to humans and animals and robots alike. A 2019 study by Yon Soo Park and Benjamin Valentino showed that positive views on animal rights were associated with positive attitudes toward improving welfare for the poor, improving conditions of African Americans, immigration, universal healthcare, and LGBT rights. Americans in favor of government health assistance were over 80 percent more likely to support animal rights than those who opposed it, even after the researchers controlled for political ideology.

According to animal ethicist James Serpell, without the emotions that people project onto their pets — even if they’re delusional — our relationships with companion animals would be meaningless. So when people tell me that they feel for their Roomba, I don’t think it’s silly or useless at all. When I see a child hug a robot, or a soldier risk life and limb to save a machine on the battlefield, or a group of people refuse to hit a baby dinosaur robot, I see people whose first instinct is to be kind. Our empathy can be complex, self-serving, and sometimes incredibly misguided, but I’m not convinced that it’s a bad thing. Our hearts and brains don’t need to be opposed to each other — just like humans and robots, maybe we can achieve the best results when they work as a team.

* * *

Excerpted from The New Breed: What Our History with Animals Reveals about Our Future with Robots by Kate Darling. Published by Henry Holt and Company. Copyright © 2021 by Kate Darling. All rights reserved.
