Do Robots Have a Race?

Despite their mechanical nature, people seem to impose racial identities on robots. That’s a problem.

Written by Ari Joury, Ph.D.
Published on Nov. 03, 2021

Do a Google image search for “robot” and you’ll find predominantly white or metallic machines. It’s no different in the movies, whether you’re checking out Blade Runner, Star Wars, The Terminator, or good old Metropolis. Almost all movie robots are white or metallic in appearance.

Maybe a white robot blends in a little better with its surroundings, you might reason. Or maybe it just looks a little more pleasing because it doesn’t swallow as much light as a darker robot would. Or maybe we envision robots in a bright future, and white robots look kind of bright.

The reality might be a lot darker, according to recent research. People favor white robots because people are racist. Ouch.



Race and Robots

You might think that this is over the top and argue that robots aren’t human. You might also argue that no normal person would treat a robot like a fellow human because surely people aren’t that stupid. Unfortunately, that’s far from the truth.

Anthropomorphism, the attribution of human traits to non-human things, is everywhere in robotics. Designers deliberately build machines that resemble us, and the human mind readily sees human traits in everything it interacts with.

Previous research has already shown that humans tend to extend gender stereotypes to machines. If you ask Siri or Alexa what their genders are, they’ll tell you that they have no gender. Yet their default voices sound female to human ears. And this is by design: According to several studies, people prefer a male voice when it comes to authority but want to hear a female voice when they need help.

And now, evidence is building that people tend to prefer white robots over robots of other colors. Of course, this preference isn’t really about the robots themselves. At this point in history, at least, robots don’t have consciousness. So it’s not like you’re going to hurt a Black robot emotionally or materially by not employing it in a leadership role.

The real problem is what kinds of biases this reinforces in humans. However awkwardly robots might behave, their capabilities already exceed human ones in many areas. They perform repetitive tasks without complaining, work with incredible precision, and don’t need holidays or weekends off.

Now, imagine that all or most of these robots are white. Of course, people know that robots aren’t human, and that white robots have no more similarities with white people than with Black people. In their subconscious thinking, however, they might associate the white robots’ traits with white people. Not only is this association unjustified, but it might also reinforce white supremacy in the broader culture.

Researching how humans view robots helps mitigate such scenarios. What’s more, how this research is done is quite fascinating in itself.


Black Robots and the Use of Force

Christoph Bartneck and his colleagues from the University of Canterbury in New Zealand are among the pioneers of research on human-robot interaction. In a seminal study from 2018, they found that the shooter bias paradigm doesn’t only apply to Black and white humans, but extends to robots, too.

This paradigm works as follows: Study participants were instructed to shoot an agent as fast as they could if it was carrying a gun. They were not to shoot, however, if the agent was carrying a benign object, like a phone, a wallet, or a can of soda. The experiment happened digitally: Participants saw a picture of an agent and pressed a button to shoot.
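The logic of a single trial in this task can be sketched in a few lines of Python. This is a hypothetical illustration, not the researchers’ actual code; the function name, object labels, and scoring rule are assumptions based on the description above.

```python
def score_trial(object_held, participant_shot, reaction_time_ms):
    """Score one hypothetical shooter-bias trial.

    The participant should shoot only when the agent holds a gun.
    The paradigm compares two things across agent skin tones:
    whether the decision was correct, and how fast it was made.
    """
    armed = object_held == "gun"          # benign objects: phone, wallet, soda can
    correct = participant_shot == armed   # shoot armed, hold fire otherwise
    return {
        "armed": armed,
        "correct": correct,
        "reaction_time_ms": reaction_time_ms,
    }

# Example: a participant shoots an armed agent after 480 ms (correct),
# then mistakenly shoots an agent holding a wallet (incorrect).
print(score_trial("gun", participant_shot=True, reaction_time_ms=480))
print(score_trial("wallet", participant_shot=True, reaction_time_ms=430))
```

The bias shows up not in any single trial but in the aggregate: how correctness and reaction times differ depending on the agent’s apparent race.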

Previous studies on human agents have found that, regardless of the participant’s own race and ethnicity, they tend to shoot Black agents faster and more often than white ones. Now, Bartneck and his colleagues have shown that this same bias holds for Black and white robots.

For this experiment, they used the Nao robot and photoshopped its color to match the typical skin tone of a Black or a white person. Then they mixed pictures of these Black and white robots carrying guns or other objects in with the pictures from the original study. They presented all of these to the study’s participants and found that participants reacted to the Black robots the same way they did to Black humans.

The implication here is that participants ascribe some kind of race or ethnicity to the Black robots. When the researchers asked participants what race the robots were, even though “no race” was an available option, more than 90 percent selected a race, and some 70 percent viewed the robots’ color as corresponding to a race.

What remains uninvestigated, though, is whether people view robots with a solid black or solid white color as having a race, too. It’s worth investigating for all the reasons stated above — but, as the study itself admits, so far nobody has checked on this. 

In other words, in a future where robots of all races walk around our cities, humans might cause more damage and death to Black robots. At least, this might be the case if humans stay as racist as they are today.


Brown Robots Level the Playing Field 

In a subsequent study in 2019, Bartneck and his colleagues wanted to check how robust their results were. In particular, they wanted to find out whether the findings were only due to social priming, whether this bias extended to other races or ethnic groups, and whether robots looking more or less human would change the results in any significant way.

Some readers of Bartneck’s original study had pointed out that his findings might be due to social priming. This is a phenomenon where people, when presented with some type of information, allow that information to influence their subsequent actions. During the experiment, the researchers had asked participants about their races and nationalities, and this might have subconsciously modified their subsequent behavior. When they repeated the experiment without asking the participants these questions prior to the experiment, however, they still found the same racial bias in the results.

Another point of criticism was that the original study had only included Black and white robots, an extreme simplification that ignores many other races and ethnicities. Therefore, the researchers included a Brown robot in later versions of the study. Astonishingly, with these three robots in the test, the shooter bias vanished. Participants started shooting all robots with the same speed and accuracy, regardless of their appearance.

The authors of the study interpreted this result as an unexpected testament to the value of diversity. This is just one study, however, and to date the only one of its kind. It had an acceptable number of participants and agent pictures (each of the 160 participants was presented with 60 pictures to shoot or not shoot), but statistical deviations are possible regardless.
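To make “faster and more often” concrete, here is a minimal sketch of how mean reaction times might be compared across conditions. The numbers below are invented purely for illustration; they are not data from Bartneck’s study, and the condition names are assumptions.

```python
from statistics import mean

# Made-up reaction times (ms) for correct "shoot" responses to armed
# agents, grouped by the robot's apparent color. Illustrative only.
reaction_times = {
    "white_robot": [512, 498, 530, 505, 521],
    "black_robot": [476, 469, 488, 472, 480],
}

def mean_rt_by_condition(rts):
    """Average reaction time per condition. The shooter bias appears
    as a systematically lower mean for one condition than the other."""
    return {condition: mean(times) for condition, times in rts.items()}

means = mean_rt_by_condition(reaction_times)
bias_ms = means["white_robot"] - means["black_robot"]
print(f"Participants fired {bias_ms:.1f} ms faster at the Black robot.")
```

A real analysis would also run a significance test over many participants and trials; with only 160 participants, a surprising result like the vanishing bias could still be a statistical fluke.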

In addition, these findings contradict the literature on human agents. With humans, the shooter bias seems to persist in a multiethnic context: Black people tend to get shot more often and faster than people of other ethnicities, while participants’ reactions to white people tend to be quite similar to their reactions to people of most other ethnicities.

Finally, the authors included other robots in the study. The InMoov robot looks a little more human-like than the original Nao robot, and the Robosapien looks more machine-like. The participants’ performance didn’t change, however, with the humanness of the robot’s appearance. When asked, participants also didn’t indicate that they found one robot more human-looking than another.

In conclusion, making robots look more human doesn’t change people’s behavior toward them. Their color, however, leads humans to see some type of race in them and to apply the same biases they apply to other human beings. Whether giving robots more than two different races eliminates these biases remains to be confirmed in future studies.


Toward a Less White Future?

If robots can have race, can they also be bi- or multiracial? This, at least, would reduce the probability that we end up with the stereotypical, very white robots we know from so many movies. It would also help us avoid another dystopian scenario, one in which robots of all colors exist but the authoritarian and commanding ones are white, while the obedient and servile ones are robots of color.

If multiracial robots become widespread, both of these dystopian scenarios become less likely. The thought is somewhat inspired by Amazon’s recent unveiling of Astro, a domestic robot that checks for intruders and security breaches when the owner is out and serves them drinks and entertains them when they’re back. Astro is a black-and-white robot, with a black screen and front and a white body and wheels.

Elon Musk’s recently announced Tesla Bot goes in a similar direction: Its head, neck and shoulders are black, while the rest of the body is white. Its appearance reminds me a little of a light-skinned woman wearing a black headdress. As such, it might also be a statement against religious bias. 

These two recently proposed robot models aren’t going to magically erase all the horrid human biases we have. Nor is it clear whether other robot manufacturers will follow suit and center their models on less white designs. But they do provide an interesting alternative and bring some hope for a future with less bias and bigotry.

Only time will tell how and when we’ll eliminate biases from robots, AI, and other technology. But it seems like big tech companies are recognizing the problem. I, for one, remain hopeful about a future with fewer outdated stereotypes and more diversity.
