Human-resources professionals around the world are embracing artificial intelligence. In fact, 88 percent of companies globally use some form of AI in HR, according to a Mercer report. While the technology’s applications range across a variety of HR functions, from recruiting and candidate engagement to employee management and training, businesses need to be cognizant of the potential pitfalls of indiscriminately embracing AI.
3 Ways to Overcome AI-led Biases in HR
- Train anyone involved in AI algorithms to identify and solve potential biases.
- Use in-house and third-party data resources to create a more robust and diverse pool of datasets.
- Define your company’s bias and fairness standards to guide the choice of AI tools.
AI systems are only as good as the data they are trained on. If HR datasets carry inherent, historical human bias, the systems built using these datasets will naturally result in biased outcomes.
However, with sustainability and diversity, equity and inclusion initiatives taking center stage, organizations can no longer afford to let AI-led biases affect their talent decisions. Here are two main types of AI-led biases to watch for, plus ways to curb them.
AI-Led Biases in Recruiting and Candidate Engagement
Resume scoring and candidate screening are some of the most prominent AI applications that HR professionals use in the recruiting process.
These screening tools traditionally leverage pre-trained datasets to quickly identify candidate attributes that best match the job requirements. However, they are trained on how humans would have manually screened the candidates. Harvard Business School research shows that a candidate who is qualified for the job but fails to meet the highly specific parameters of an automated hiring system can be passed over. The system screens out potential hires before their resumes ever reach human eyes, a real problem in today's competitive talent market.
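To illustrate this failure mode, here is a minimal sketch, using hypothetical skills and requirements, of the kind of rigid exact-keyword screen that can reject a strong near-miss candidate:

```python
def rigid_screen(resume_skills: set, required_keywords: set) -> bool:
    """Pass a candidate only if every required keyword appears verbatim.

    This brittle exact-match rule is what lets qualified candidates slip
    through the cracks: one missing keyword and the resume never reaches
    a human reviewer.
    """
    return required_keywords <= resume_skills

# Hypothetical candidate: clearly relevant experience, phrased differently.
candidate = {"python", "sql", "data pipelines"}
required = {"python", "sql", "etl"}  # "etl" never appears verbatim

print(rigid_screen(candidate, required))  # False: screened out before human review
```

A human reviewer would likely recognize "data pipelines" as equivalent experience; the exact-match rule cannot.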
Beyond this, human biases can easily seep into AI systems, producing a preference for candidates of certain backgrounds, ethnicities, genders and experience levels. Because AI learns from past data, if humans preferred candidates of certain backgrounds over others, the AI system will subtly reflect those decisions as well.
AI-Led Biases in Employee Engagement and Management
The Great Resignation has reminded us that talent retention is not easy. Phenomena like quiet quitting have also shown us why employee engagement is crucial. Employee engagement in the United States stands at an appalling 32 percent.
AI can help here. In fact, beyond recruiting, employee engagement and management is another key area where HR teams apply AI.
AI algorithms, chatbots, predictive models and sentiment analysis tools help HR departments measure employee satisfaction, evaluate employee performance, devise better HR policies, curate training opportunities and map talent needs. However, there is also the imminent risk of AI-led biases in these tools jeopardizing engagement efforts.
For example, the results from an algorithm assessing employee performance can be unfairly skewed by unseen biases. This happens when algorithms are trained on data shared by biased human managers, which spreads the bias further while also letting people deflect accountability by blaming the AI system when those biases are confronted.
Sentiment analysis tools face a similar risk. Their algorithms must be trained on what employees have historically valued, how that compares with the organization's work culture, and the techniques HR teams have historically used to identify patterns of dissatisfaction and hot-button issues. If not everyone who is supposed to supply data for analytics actually does so, these systems will give a poor read on employees' values, sentiments and dissatisfaction.
How to Overcome AI-Led Biases in HR
While there are evident limitations when it comes to AI’s applications in HR, the benefits of these systems far outweigh the challenges. Therefore, it’s in the interest of HR professionals to understand the best ways to overcome AI-led biases.
Train Employees to Identify Bias
Bias must be prevented throughout the HR cycle. The simple understanding that these systems are susceptible to bias, and are not the be-all and end-all of the decision-making process, can go a long way.
It is imperative to train anyone involved in designing, developing or using AI algorithms, whether employers, HR staff or IT specialists, to identify and address potential biases. With this approach, business leaders and HR professionals are always actively monitoring for ethical issues and preventing bias.
In hiring specifically, identifying potential bias can mean analyzing the output and validating not just the selected candidates but, most importantly, the rejected ones. Where possible, look for the reasons behind the AI system's decisions. For instance, even if a model doesn't evaluate race directly, it may infer it indirectly from a person's name. If a decision rests on such a proxy, the AI system must be scrutinized and tuned accordingly.
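One concrete way to validate outcomes across both selected and rejected candidates is to compare selection rates between groups, in the spirit of the EEOC's "four-fifths" rule of thumb for adverse impact. A minimal sketch, using hypothetical group labels and decisions:

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute the selection rate per group from (group, selected) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_check(rates):
    """Flag groups whose selection rate falls below 80% of the highest
    group's rate (the 'four-fifths' rule of thumb)."""
    best = max(rates.values())
    return {g: r / best >= 0.8 for g, r in rates.items()}

# Hypothetical screening decisions: (group, was_selected)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]
rates = selection_rates(decisions)   # group_a: 0.75, group_b: 0.25
flags = four_fifths_check(rates)     # group_b fails: 0.25/0.75 < 0.8
```

A failed check is not proof of discrimination, but it tells reviewers exactly where to dig into the rejected pile.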
Focus on Data Quality and Variety
There is an abundance of data and data sources in today’s digital world. The first step to making smart data-led decisions is understanding the types of data that are relevant to HR. These include everything from hiring and HR policy data to employee engagement tools and historical data around diversity and inclusion.
Ensuring that the quality of an organization's HR data is up to the mark is imperative. Equally important is understanding whether one's data sources are adequate and reliable. Data accumulated from a source where one background predominates will bias the system, so it is important to monitor how diverse the training data is.
Today, HR professionals have a lot of third-party data sources at their disposal. Using a mix of in-house and third-party data resources can help create a more robust pool of datasets.
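Monitoring that diversity can be as simple as comparing each group's share of the training data against a reference distribution. A sketch, with a hypothetical "region" attribute and made-up reference shares:

```python
from collections import Counter

def group_shares(records, attribute):
    """Share of each attribute value across a dataset of dicts."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {value: n / total for value, n in counts.items()}

def underrepresented(shares, reference, tolerance=0.10):
    """Groups whose share of the training data falls more than `tolerance`
    below their share in a reference population."""
    return [g for g, ref in reference.items()
            if shares.get(g, 0.0) < ref - tolerance]

# Hypothetical training set skewed toward one region.
training = [{"region": "north"}] * 70 + [{"region": "south"}] * 30
shares = group_shares(training, "region")                       # north 0.7, south 0.3
gaps = underrepresented(shares, {"north": 0.5, "south": 0.5})   # ["south"]
```

Flagged groups are candidates for supplementing with third-party data before the model is retrained.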
Define Your Own Bias and Fairness Standards in HR
HR teams can develop a single standard for ethics, fairness and bias, or they may set different thresholds for different groups and situations. Either way, they should invest both time and resources in identifying or developing the most diverse dataset that represents their organization's goals for fairness.
What constitutes fairness in one organization may not apply at another. For example, fair-pay standards in remote work models differ widely. Some organizations follow a location-based pay structure to account for varied cost-of-living expenses among remote employees. Others favor merit-based pay, where employees performing the same work are paid the same regardless of location and cost of living.
Periodic auditing can help in setting these standards. This can include a review of performance evaluations, employee training needs and even rejected applications and whether or not those rejections were justified. More specifically, HR leaders must keep an eye on the field of AI research and stay updated with specific software, its limitations and new findings. For reference, they can peruse best practices from companies such as Google AI or IBM’s AI Fairness 360 Framework.
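As part of such an audit, a simple parity metric can quantify gaps in favorable outcomes, such as positive performance reviews, between groups. This sketch uses made-up review data and reimplements, for illustration only, the kind of statistical parity measure that toolkits such as IBM's AI Fairness 360 provide:

```python
def favorable_rate(records, group):
    """Share of favorable outcomes (1) among one group's records."""
    outcomes = [fav for g, fav in records if g == group]
    return sum(outcomes) / len(outcomes)

def statistical_parity_difference(records, privileged, unprivileged):
    """favorable_rate(unprivileged) - favorable_rate(privileged).

    Values near zero suggest parity; large negative values mean the
    unprivileged group receives favorable outcomes less often.
    """
    return (favorable_rate(records, unprivileged)
            - favorable_rate(records, privileged))

# Hypothetical performance-review data: (group, got_favorable_review)
reviews = [("team_x", 1), ("team_x", 1), ("team_x", 0),
           ("team_y", 1), ("team_y", 0), ("team_y", 0)]

spd = statistical_parity_difference(reviews,
                                    privileged="team_x",
                                    unprivileged="team_y")
# team_y rate 1/3 minus team_x rate 2/3: roughly -0.33, worth investigating
```

Tracking this number across audit cycles shows whether interventions are actually narrowing the gap.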
A Fair Foundation
The practice of ethical AI must be the foundation of fair HR processes and an enabler of business growth. Continuous effort toward ethical AI training, and toward AI-supported hiring through fair job descriptions and a commitment to diversity, must be on every organization's radar.
All things considered, being progressive in HR through the use of powerful technology is key. But it shouldn't be pursued without confidence in the data or without proper data standards, and it requires HR professionals armed with the skills to use AI as a tool, not as a substitute for human decision-making.