How HR Leaders Can Prepare Their Organizations for AI Adoption

As AI solutions become increasingly prevalent in all kinds of workplaces, HR leaders must work to ensure they’re incorporated smoothly and safely into existing processes.

Written by Richard Mendis
Published on May 15, 2023

Warnings of the potential dangers of AI have escalated recently. Elon Musk has called the race for AI development “out of control” and said, “It has the potential of civilization destruction.” Every world-changing technology, from electricity to cars to phones to the internet, comes with inherent risk.

In the age of AI, human resources (HR) leaders should worry less about doomsday and more about finding pragmatic ways to balance the productivity benefits of AI against its potential downsides, such as bias, data privacy violations, and copyright infringement. As tech leaders and legislators scramble to unite around commonsense regulations on the development and use of AI, HR professionals have an opportunity, based on their own early adoption of the technology, to broadly define the safe and ethical use of AI across their own organizations.

Here’s how.

4 Ways HR Leaders Can Prepare for AI Adoption

  1. Experiment with AI solutions in the HR space.
  2. Take inventory of AI use within the organization.
  3. Define processes to support evolving AI regulation and compliance now, before it’s no longer optional.
  4. Train employees on AI compliance.

More on AI and Work: Is AI Coming for Your Job?

 

1. Experiment With AI Solutions in the HR Space

To effectively shape organization-wide policies on the use of AI in the workplace, HR professionals should gain an understanding of the technology’s benefits as well as potential landmines around bias, misuse, and other adverse impacts. Many organizations already know how AI can be deployed successfully and legally in an HR context. Even if HR hasn’t yet had a chance to experiment with AI, there’s still time to do so ahead of increased regulation. HR professionals may even find AI solutions that aid in detecting or reducing unconscious bias in hiring and talent management practices.

For example, conversational analytics and interview intelligence solutions may help reduce bias in hiring decisions by flagging potentially inappropriate interview questions regarding age, race, gender, ability, marital status, or other personal traits. Flagging these questions trains managers to be better interviewers and encourages them to stick to questions that are relevant to the target job function. These solutions also provide an intelligent synopsis of the conversation between the candidate and the interviewer, allowing HR professionals to easily verify that the candidate was evaluated based on skills and experience relevant to the actual job description. This approach helps HR departments hire smarter and faster with less unconscious bias by augmenting human decision-making, not replacing it.

Through the adoption of AI solutions for hiring, interviewing, performance management, employee communication, or other talent management functions, HR professionals can experience firsthand the complexities of implementing AI in the workplace in compliance with local, state, and federal guidelines. Using AI within their own processes will prepare HR leaders to take a broader, more strategic role in AI governance across the organization because they will be more attuned to the fine line between increasing productivity with AI and relying on it to the point where it replaces human decision-making and runs afoul of regulation.

 

2. Take Inventory of AI Use Within the Organization

Various departments in your organization, or contractors working with your company, may already be using AI technology without HR’s knowledge. To find out, HR should work with the information technology (IT) team and other departments to create a list of all AI-related software that might impact decisions related to employees, customers, or partners.

The IT team can provide a list of software assets for custom solutions, and finance can provide a list of purchased solutions. HR must then conduct interviews or surveys with internal teams to determine how they use these applications, as well as with external vendors to evaluate their commitment to the ethical use of AI. One way to do this is to create a standardized inventory template that documents each AI solution, its purpose, its data sources, and the ways it may be subject to bias, discriminatory practices, data privacy concerns, or copyright infringement. Given the rapidly evolving landscape of workplace AI solutions, you should update this inventory regularly. Once HR leaders have gathered the inventory of AI applications in use, they can meet with stakeholders to determine how to mitigate any potential downsides.
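For teams that want a concrete starting point, here is a minimal sketch of how such an inventory record might be structured in code. The field names, the example vendor, and the 90-day review threshold are illustrative assumptions, not part of any regulation or standard template; a spreadsheet with the same columns works just as well.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative fields only -- adapt to your organization's own inventory template.
@dataclass
class AIToolRecord:
    name: str                 # e.g., an interview intelligence platform
    owner_department: str     # team responsible for the tool
    vendor: str               # external vendor, or "internal" for custom builds
    purpose: str              # what decisions the tool informs
    data_sources: list[str]   # where its input or training data comes from
    risk_flags: list[str]     # e.g., "bias", "data privacy", "copyright"
    human_in_the_loop: bool   # whether a person reviews every decision
    last_reviewed: date       # when HR/IT last audited this entry


# Hypothetical example entry; a real inventory would be populated from
# IT's software asset list and finance's purchase records.
inventory = [
    AIToolRecord(
        name="Interview transcript analyzer",
        owner_department="Talent Acquisition",
        vendor="ExampleVendor Inc.",
        purpose="Summarize interviews and flag potentially inappropriate questions",
        data_sources=["recorded interviews"],
        risk_flags=["bias", "data privacy"],
        human_in_the_loop=True,
        last_reviewed=date(2023, 5, 1),
    ),
]

# Surface entries that are overdue for review or lack human oversight.
for record in inventory:
    overdue = (date.today() - record.last_reviewed).days > 90  # assumed review cadence
    if overdue or not record.human_in_the_loop:
        print(f"Review needed: {record.name} ({record.owner_department})")
```

Keeping the inventory in a structured form like this makes it easy to re-run the same review checks each quarter as new tools are added.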

 

3. Define Processes to Support Evolving AI Regulation and Compliance Now, Before It’s No Longer Optional

As more states and municipalities pass legislation governing AI, HR professionals will be among the first affected because the initial focus of these laws covers discrimination, bias, and privacy, all of which overlap significantly with HR processes related to hiring, promotion, and layoffs. HR leaders must be agile and ready to implement processes that let their organizations benefit from AI-driven productivity while ensuring these solutions do not replace human decision-making in any process.

To get ahead of the curve, organizations and their HR leaders can familiarize themselves with existing AI guidance for employers from the Equal Employment Opportunity Commission (EEOC), the Department of Justice (DOJ), and the White House. These guidelines are mostly concerned with protecting civil rights, enforcing the Americans With Disabilities Act (ADA), promoting transparency, and ensuring data privacy. A few of the best practices these guidelines suggest include:

Best Practices for Using AI Solutions in Business

  • Be transparent when using an AI tool and disclose how it works
  • Provide instructions for employees or candidates on how to ask for accommodations when using an AI tool 
  • Ask vendors if and how the AI accommodates people with disabilities
  • Only use AI tools to evaluate traits that are necessary for performing the job’s duties
  • Use AI tools to augment human decisions, rather than replace them in any process related to hiring, promoting, or firing

The federal guidelines, however, are just one harbinger of what’s to come. HR leaders must also keep an eye on legislation proposed in the states, municipalities, and industries they operate in. Some states are moving legislation forward faster than others, and the concerns around AI in the workplace can vary greatly because each state has different legislative agendas, priorities, and levels of understanding of AI.

For example, Maryland passed a consent requirement law for the use of facial recognition in interviews, and New York now requires that automated employment decision tools (AEDTs) be audited for bias before employers use them. Under the Maryland law, HR departments must obtain documented consent from a candidate before using an AI solution to create a facial recognition template during a job interview; under the New York law, employers must hire an independent auditor to evaluate the purpose of the AEDT in use, how it was developed, and any adverse impact it may have on hiring decisions.

Not all relevant laws concern employment, however. Organizations that use AI to capture biometric data, such as fingerprints, must comply with existing laws like Illinois’ Biometric Information Privacy Act (BIPA). This means HR departments must obtain written consent before collecting biometric information, disclose how the data will be used, and establish secure processes for storing and deleting biometric data.

There’s also an ongoing debate over whether content generated by AI tools can be copyrighted. If an organization’s marketing department uses AI-powered content generation solutions like ChatGPT, HR should advise it to be transparent with stakeholders about its use of AI and to move forward only if it’s comfortable with the fact that the resulting marketing materials may not be protected by copyright.

Waiting to address these laws until after they’ve passed will either keep your organization from benefiting from AI in the meantime or require tremendous retroactive effort, much of which will fall on HR’s shoulders. The larger and more complex your organization, the more important it is to begin laying the foundation for compliant AI processes before formal legislation materializes.

 

4. Train Employees on AI Compliance

More than any other business function, HR and diversity, equity, and inclusion (DE&I) departments can be the greatest proponents of compliance and, through employee training, the first line of defense against AI misuse in the workplace. No matter how thorough an organization’s AI policies are, they’re useless unless employees understand how to use the technology correctly. That means everyone uses AI in a way that protects data privacy, prevents bias, promotes transparency, complies with the organization’s ethical code of conduct and any applicable laws, and remains subject to human oversight.

Implementing AI policies across any organization is a monumental task. A good starting point is to incorporate AI compliance training under existing learning and organizational development (L&OD) initiatives. Once a foundational education framework is established around AI use, HR can update and add more extensive training tailored to departmental use cases as necessary. Per EEOC and DOJ guidelines, it’s especially important to train employees on how to recognize and manage requests for reasonable accommodations when using AI tools. For the best outcome, HR leaders should work collaboratively with legal and other departments, adjust training as new legislation is introduced, and observe how different departments are adapting to AI in real time.

More on the AI/HR Connection: How to Prepare for a Job Interview Run by AI

 

Prepare for AI Instead of Playing Catch-Up

Dealing with regulation is not as stimulating as the productivity improvements and new opportunities AI presents. However, establishing policy and governing the ethical use of AI sit at the heart of HR’s mission to foster a safe and equitable work environment. By welcoming AI into the workplace with common-sense policies in place, HR leaders can expand what employees can achieve, improving productivity and engagement, while also protecting current and prospective employees against discrimination, bias, and privacy issues.
