Robots don’t have emotions. Or at least that’s what I thought a few weeks ago.
“That’s a huge oversimplification,” Maja Mataric said. “Engineering has moved forward. We have models of emotion and we know how to create machines that appear emotional. Yes, they don’t feel the way people feel. But how do people feel? Science doesn’t really understand that yet.”
Mataric is professor of computer science, neuroscience and robotics at the University of Southern California, and a founder of the field of socially assistive robotics. I called her because I wanted to know why robots are so freaking cute.
A juvenile question, probably, given that Mataric is busy using novel insights about human cognition and behavior to craft robots that help people lead safer, fuller lives. But, as it happens, cuteness isn’t entirely irrelevant.
Consider BB-8, WALL-E, Baymax — the list goes on. Despite fatalistic headlines declaring robots will soon commandeer our jobs and steal our girlfriends, it’s the brave, sweet-hearted cinematic robots that tend to stick in our brains.
That stickiness often makes cute robots well-suited for socially assistive roles, Mataric said.
“Cuteness, in particular, is a large part of social function because we, as social creatures, are wired to recognize features of cuteness and naturally respond positively to them. That’s why babies look the way they look.”
Much like robots, humans are wired to respond a certain way given certain inputs. When designers and engineers study that wiring to make products more appealing, you get the emerging field of user experience, or UX, design.
In Mataric’s field, a cute robot may make users more willing to practice social skills like eye contact or take medications on time. For consumer robotics, cuteness may make people more willing to buy and use a product, or — and I’m spitballing here — tech reporters more likely to develop a sustained obsession with large-eyed robotic concierges.
“You say, ‘They end up being so darn cute.’ I don’t think they just end up being cute. It takes design effort to make a robot cute and not creepy,” Mataric told me.
“It’s actually awfully easy to get it wrong. So I want to give credit to folks if their robots look cute, since that takes work.”
Making a cute robot is about crafting a character
For many roboticists, cuteness isn’t a concern. A robot that performs repetitive tasks in a fulfillment center needs to focus on avoiding collisions with humans — not winning their affection.
Even robots that interface directly with humans often don’t need to be cute. In eldercare, for instance, some research indicates people are less receptive to suggestions made by cute robots.
The study of human-robot interaction, or HRI, is too new and too broad to yield any definitive insight on the optimal look for a certain type of robot. That means product teams, like the one that created the small, wheeled automaton Misty, get to experiment.
Misty is a product of Misty Robotics, a spinoff of toy robot company Sphero. She’s got a big head and large eyes displayed on an LCD screen — quintessentially cute, baby-like qualities — as well as an array of movements and sounds that help her show “emotion.”
When it comes to function, Misty is largely a blank slate. She’s intended as a platform for developers to create their own skills and use cases. That means her designers had to make her versatile enough to fit a variety of environments and appealing enough to draw the right kind of attention.
At first, Misty Robotics founder and head of product Ian Bernstein figured they’d make Misty look like Sphero, his former company’s eponymous round, faceless robot that communicates only with motion and lights. Then, drawing on character fundamentals he studied under his one-time mentor, Disney CEO Robert Iger, he reconsidered.
“When you only have a rolling movement and an LED, it’s really hard to convey emotion and create that connection with the user,” he said. “When we thought about Misty, we knew we wanted to put some human elements, but not make it humanoid.”
That wariness about humanoid features is not unique to Bernstein. Make a robot too abstract, and people may lose interest in interacting with it. But make it too realistic, and you risk another, weirder problem — the uncanny valley.
“I can’t really describe the uncanny valley, but you just look at it and it’s super obvious,” Bernstein said. “Like Misty is super cute, and then you look at another robot, and you’re like, ‘Oh my god, you’re my nightmare.’”
“Uncanny valley” refers to the metaphorical place where products that imperfectly resemble humans — and, as such, freak their users out — go to die. To strike that human-but-not-too-human balance, Bernstein and his designers turned to some unlikely consultants: cartoonists.
Cartoons look like humans, but some of their features are exaggerated for effect, while others are erased entirely. Turns out, that aesthetic also works well for robots.
“I talked to a lot of character artists about what features are most important for conveying emotion,” Bernstein said. “It was things like eyebrows, and that’s why Misty has eyebrows.”
During the early stages of Misty’s design, the product team of about 10 people pitched around 150 form factors, Bernstein said. Those designs weren’t based on strict engineering specs, so team members could pitch anything they imagined.
“There was a design that had little Huey pads fold out of Misty, and they said, ‘If it comes to the stairs, a little helicopter can fold out, and she can fly up the stairs,’” he said. “But then we were like, ‘OK, that’s going to be way too hard to create.’”
Eventually, they settled on about eight models to 3D print, then two to outfit with motors and test.
Throughout the process, Bernstein pushed to get Misty down to her current 35.5-centimeter height, partly so users could conveniently fit her in a carry-on and partly so her size wouldn’t undercut her approachability as consumers get used to seeing robots in places like hotels and offices.
“We thought we’d get people comfortable with these robots around us in a form factor that’s less intimidating, and then maybe the next version is a little bit bigger,” he said.
One of the most notable changes the team made along the way was adding a third degree of freedom to Misty’s head. At first, she could only look right and left. Adding a curious-looking tilt to the side, Bernstein said, was a home run.
Through a combination of eye-and-eyebrow positions, head-and-body positions and sounds, Misty conveys seven main emotions. These emotions come prepackaged for developers, which means they can use code to tell Misty to respond angrily without having to build what that response entails. But because developers can decide how Misty responds to different inputs — like shrieking at a loud noise, for example — her personality will depend entirely on her owner.
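The division of labor described above can be sketched in code. This is a toy illustration, not Misty’s actual SDK — every name in it is invented. The idea is that the platform ships each emotion as a canned bundle of cues, and the developer’s only job is deciding which input triggers which emotion:

```python
# Hypothetical sketch of prepackaged emotion responses; all names are
# invented for illustration and do not reflect Misty's real API.

# The platform's side: each emotion is a canned bundle of eye, head,
# and sound cues. The developer never choreographs these directly.
PREPACKAGED = {
    "anger": {"eyes": "narrowed", "head": "tilt_down",  "sound": "grumble"},
    "fear":  {"eyes": "wide",     "head": "pull_back",  "sound": "shriek"},
    "joy":   {"eyes": "raised",   "head": "tilt_side",  "sound": "chirp"},
    "calm":  {"eyes": "neutral",  "head": "center",     "sound": None},
}

# The developer's side: wiring inputs to emotions. This mapping is what
# makes each robot's "personality" depend on its owner.
EVENT_TO_EMOTION = {
    "loud_noise": "fear",       # e.g. shrieking at a loud noise
    "face_detected": "joy",
}

def respond(event: str) -> dict:
    """Look up the emotion for an event and return its canned cues."""
    emotion = EVENT_TO_EMOTION.get(event, "calm")  # default: stay calm
    return PREPACKAGED[emotion]
```

Under this scheme, a single line like `respond("loud_noise")` triggers the full fear response — wide eyes, a pulled-back head, a shriek — without the developer ever specifying those details.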
While Misty has a strong character, Bernstein said, much of the behavior her original product team programmed is random. If she happens to play a sound or cock her head at the right moment, people tend to think she’s reacting to something they did.
“People make things personal, which is sometimes for the worst, but for the better in our interaction with robots, where it does feel like we have a relationship with them,” he said. “I mean, there’s a community of people that make clothes for their Roombas.”
Making a cute robot is about thinking outside the box
Before you congratulate yourself for never putting clothes on a vacuum, consider if you’ve ever named your car or yelled at a vending machine.
As it happens, humans are excellent at connecting emotionally with inanimate objects — even if they don’t possess cute physical features.
In a 1944 experiment by psychologists Fritz Heider and Marianne Simmel, 34 undergraduate students watched a short animated film of two triangles and a circle moving in and out of a rectangle.
When asked to recall what happened in the film, just one student described the figures as shapes. The rest described them as humans or animals, and explained their behavior in terms of desires and emotions. Because the shapes moved distinctly and independently, observers gave them minds.
This finding has implications for consumer roboticists today. Big, blinky eyes are great, but even the smallest details can make a robot appear cute.
Take Gita, a knee-high robot from Piaggio Fast Forward that holds her owner’s things and follows them while they walk through their day.
Gita runs no risk of falling into the uncanny valley. A rotund little box with treads for rolling, she is more suitcase than character. So why would a journalist who spent a day with Gita use words like “anxious,” “humble” and “optimistic” to describe her?
The answer, according to PFF CEO Greg Lynn, lies in Looney Tunes character Wile E. Coyote and his nemesis, the Road Runner.
“Their characters are in the way they move, much more than one of them being a coyote and the other a running bird,” Lynn said. “For us, it was always about the way Gita moves. All those motions are really about the dynamism and the animation rather than the form, which is not to say that form doesn’t matter, but we always want the features to take a background.”
When the person — or, more accurately, pair of legs — Gita has paired with gets too far ahead of her, and she accelerates to her maximum speed of six miles per hour, she borrows a move from Road Runner, leaning back to show she’s working hard to catch up.
While developing the movements that help Gita read as a character rather than an object, Lynn’s team spent time studying the movements of animals that are particularly talented at, um, following.
“I would bring in a group of alpacas, or we would send people to a farm to get followed by a herd of sheep,” he said. “I’m sure, a little bit unconsciously, the design team was much more influenced by those kinds of things. I would say geese and sheep in particular, which are amazing followers.”
Gita also follows Road Runner creator Chuck Jones’ fourth fundamental rule: “No dialogue ever, except ‘beep-beep!’”
Gita’s “beeps” are actually a variety of sounds developed in partnership with Berklee College of Music. Lynn’s favorite, he said, is her pairing sound.
“We wanted to make sure there was an emotion to the sounds, but it wasn’t an emotion you knew,” Lynn said. “They’re symbolic.”
Throughout the design process, which involved 30 different form factors and hundreds of models, the main point of contention between Gita’s designers and her engineers was eyes. Because she has to navigate public spaces, Gita carries a lot of computer vision sensors, and, like Misty, she could have had those sensors hidden behind a shield with a more human-like set of eyes presented to the world. Her roboticists, largely from consumer product backgrounds, thought eyes would make Gita more relatable. Her designers disagreed.
“There was a six-month period where our design people were peeling googly eyes off Gita prototypes every morning,” Lynn said. “It was total design warfare on the part of the roboticists. They really thought that Gita needed eyes that could blink and look around.”
In the end, the designers won out.
“The eyes were just too cloying,” Lynn added.
Instead, the team chose to display Gita’s vision sensors for all to see. That way, users better understand how she detects their movements — and don’t feel they’re being monitored in a way that’s not transparent.
The final result makes Gita resemble a many-eyed alien, and — against all odds — it’s cute.
“There’s this adorable set of videos called Lucas the Spider,” Lynn said. “Somebody recorded their six-year-old nephew and then animated a little jumpy spider. When we saw it, we said, ‘Oh, that’s absolutely Gita.’”
If you say so, Greg.
When I spoke to Lynn about Gita’s product launch a few months ago, he was adamant that Gita is not meant to be a friend. He still is. The goal, he said, is for users to forget she’s there and enjoy walking around, hands free, with friends and family. That requires a delicate balance of form and function, cuteness and utility, affection and apathy.
“In our experience,” he told me, “when Gita starts moving, everybody lights up and starts smiling and says, ‘Oh my gosh, this thing has a personality.’”
Robots are a reflection of their users — and their creators
Speaking with Bernstein and Lynn, I couldn’t help but make connections between the robots and their patrons. Friendly, mild-mannered Bernstein talked about Misty’s helpfulness and flexibility, while dynamic Lynn touted Gita’s energy and charm.
Each detail on a robot is the result of months of imagining, iterating and, often, bickering. The rest of us are hardwired to take those details and fill in the blanks, granting emotions and personalities to the machines that carry our purses or greet our hotel guests.
Some robots aren’t cute — in fact, many are frightening. We worry they’ll take our jobs, forge our signatures and time-travel to 1984 to kill Sarah Connor.
Ultimately, however, robots will be what their designers and engineers make of them. Bad or good, robots are a reflection of us. Their functions and forms reveal their creators’ goals and imaginations. If we want robots to reflect the best of us — the sweet and the caring, the smart and the innovative — we must make sure those people get the chance to step up to the mirror.
“Robots are the result of what we create. So, if people think robots have to be hyper-rational, that’s only because that’s what they saw in movies,” Mataric said.
“Or maybe there’s an engineer who created such a robot because he was, perhaps, hyper-rational. This is why we need diversity among engineers. What robots are and can be is only limited by our imagination and ingenuity.”