Should AI Be Used to Interpret Human Emotions?

It might do more harm than good.

Written by Rajeev Dutt
Published on Jan. 11, 2023

People have a near-insatiable need to understand how other people feel. Organizations in fields from healthcare to advertising have tried to gauge people’s emotions through panels, surveys and focus groups, but these methods are notoriously unreliable.

What Is Emotion AI?

Emotion AI is artificial intelligence that uses text, audio, video or a combination of them to detect and interpret human emotion.

More recently, organizations have applied AI to this problem. AI can analyze voice patterns, eye movements, hand gestures and facial expressions, promising more accurate data and insights in near real time. Or can it?
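
To make the idea concrete, here is a minimal sketch of text-based emotion detection in Python using the open-source Hugging Face transformers library. The model checkpoint named below is an assumption for illustration; any text-classification model fine-tuned on an emotion dataset would slot in the same way, and its output is a statistical guess, not ground truth.

```python
from transformers import pipeline

# Load a pretrained emotion classifier. The checkpoint name is an assumption;
# any text-classification model fine-tuned on an emotion dataset would work.
classifier = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",  # assumed checkpoint
    top_k=None,  # return a score for every emotion label, not just the top one
)

results = classifier(["I've been on hold for 40 minutes and no one can help me."])
for label_score in results[0]:
    print(f"{label_score['label']}: {label_score['score']:.2f}")  # e.g., anger: 0.8
```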


What Is Emotional Data?

Emotional data differs from other data, such as dates of birth and email addresses, in four critical ways:

  • Individuality: Each person’s emotional data, based on their feelings, public and private behaviors and thoughts, is unique.
  • Abstractness: A person’s state of mind, which can vary widely day to day, can result in different emotions displayed publicly in different situations.
  • Opaqueness: There is no clear way to evaluate and generalize one’s feelings.
  • Proliferation: Misinterpretation of emotional data can easily spread to skew multiple decisions.

Human emotions and how people communicate them are highly subjective. As a result, generating accurate insights is very challenging for emotion recognition AI. Here are three examples of how algorithmic bias can creep into emotion recognition AI:

  • If an HR department employs emotion recognition AI during job interviews, a candidate from a Slavic culture, where smiling is often reserved for friends and family, may appear overly negative. Candidates from many Asian countries, where smiling is viewed as a form of politeness, may appear more positive and enthusiastic than they actually are.
  • In the classroom, if a school adopts emotion recognition AI analysis to determine which students are focused and which need remedial assistance, there is significant room for algorithmic bias. Some students do well with intense solitary study, others prefer active learning and still others favor working in study groups. 
  • In retail, if a customer calls to complain, an emotion recognition AI platform can help a customer service rep determine when to offer a refund or a free coupon. However, accents and voice tones can skew the algorithms to encourage reps to be more or less generous, with the potential of losing a customer for good.

 

How To Improve Model Accuracy

While the weaknesses of emotion recognition AI algorithms are well known, some strategies can improve model accuracy. For instance:

  • Add more data. The more data fed into a model, the more accurate the analysis becomes and the deeper the insights it can generate.
  • Add context to the data. Additional information about data points can improve results. When analyzing job candidates, for example, including the person’s previous job history and education can be useful. Another example is to use other features of the conversation such as gestures (in cases where a video or image is available), or the content of what is being said (someone complaining about the quality of service is unlikely to be happy).
  • Modify the questions. Training the model to answer different questions can lead to better insights.
  • Combine algorithms. Running data through multiple emotion recognition AI algorithms can improve accuracy, for example by canceling out the bias in any one model (see the sketch after this list).
  • Use cross-validation to train and evaluate models. Divide the data into folds, train on all but one fold and validate on the remaining one, rotating through the folds. Averaging the results across folds gives a more reliable accuracy estimate and guards against overfitting one lucky split.
  • Use adversarial methods to reduce probability distribution mismatches, overfitting and the risk of bias.
  • Increase the variation in the data, for example by broadening the demographic range of the training set.
  • Introduce explainability. The AI should be able to explain its decisions. Why did the AI label someone as angry or sad?
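
As a concrete illustration of two of these strategies, the sketch below uses Python and scikit-learn to combine three different classifiers in a soft-voting ensemble and evaluate it with five-fold cross-validation. The synthetic dataset is a stand-in assumption; in a real system the features would come from voice, facial or text analysis.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Placeholder data standing in for extracted emotion features
# (e.g., vocal pitch statistics or facial action units): 4 emotion classes.
X, y = make_classification(n_samples=1000, n_features=20, n_informative=10,
                           n_classes=4, random_state=0)

# Combine algorithms: soft voting averages the predicted probabilities of
# several different models, which can cancel out the bias of any one of them.
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
    ],
    voting="soft",
)

# Cross-validation: rotate through 5 folds, training on 4 and validating on
# the 5th, so the accuracy estimate does not hinge on one lucky split.
scores = cross_val_score(ensemble, X, y, cv=5)
print(f"accuracy per fold: {np.round(scores, 3)}")
print(f"mean accuracy: {scores.mean():.3f} (+/- {scores.std():.3f})")
```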

Optimizing model accuracy can prove to be very difficult. Undoubtedly, emotion recognition AI will be used in many areas of an organization, ranging from operations to HR and finance. But should it?


Why Emotional AI Is Inherently Controversial

Because of its nature, emotional AI will inherently be controversial to use. While technical hurdles such as bias and accuracy can be addressed to a certain extent, human beings are more complicated. Sarcasm, age, context and state of mind all play a part in how we express our emotional state. The potential for misuse is enormous. Imagine being able to identify your enemies just by looking at them; it’s a frightening thought. Worse still, an AI could claim that someone is an enemy who isn’t.

The belief that an AI can assist us with something that evolution has spent millions of years training us to do is fundamentally hubris. Emotions are variable, volatile and contextual. An AI that tells me “I sense that you might be sad, shall I call a therapist?” would be about as useful as the infamous Clippy.

If the objective, on the other hand, is to make interactive voice response (IVR) more human-like in a limited context, then this has a greater chance of being successful. Emotional AI can, within narrow confines and in situations with limited impact, be useful. Passing an irate customer to a human agent would be a good example. An AI that monitors an interview with a candidate for a job, though, could lead directly to discrimination and other more serious ramifications.
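
To illustrate the narrow, low-stakes use described above (handing an irate caller to a human agent), here is a minimal hypothetical sketch of such an escalation rule in Python. The emotion scores, the threshold and the routing labels are all assumptions, and the stakes stay low because the worst-case outcome is simply reaching a human sooner.

```python
# Hypothetical escalation rule for an emotion-aware IVR system. The threshold
# is an assumed value that a real deployment would tune against call outcomes.
ANGER_THRESHOLD = 0.7

def route_call(emotion_scores: dict) -> str:
    """Hand likely-irate callers to a human agent; otherwise stay automated."""
    if emotion_scores.get("anger", 0.0) >= ANGER_THRESHOLD:
        return "human_agent"
    return "ivr_flow"

print(route_call({"anger": 0.82, "neutral": 0.11}))  # -> human_agent
print(route_call({"joy": 0.64, "neutral": 0.30}))    # -> ivr_flow
```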

Emotional AI does have its uses, but it is fraught with moral and ethical concerns and open to abuse by repressive governments: identifying enemies of the state, flagging subversive material, filtering internet content or actively seeking to elicit emotional responses from people. Emotional AI combined with powerful NLP models such as GPT-3 creates a witch’s brew for repression and authoritarianism.

Rather than questioning if AI can be used for emotional analysis, people and organizations must decide if it should be.
