These Job Seekers Say AI Unfairly Blocked Them From Work. Now They’re Suing.

A new class-action suit will test whether AI hiring platforms must give job seekers access to the data used to evaluate them, bringing transparency to a sector defined by proprietary data and opaque algorithms.

Written by Jeff Rumage
Published on Feb. 11, 2026
Image: Shutterstock (a recruiter uses AI to sort through applicants)
Reviewed by Ellen Glover | Feb 11, 2026
Summary: A class-action lawsuit alleges Eightfold AI violates the Fair Credit Reporting Act by evaluating job applicants using data they can’t see or dispute. The case could determine whether AI hiring platforms must be regulated like background checks.

A proposed class-action lawsuit filed against AI hiring platform Eightfold has brought new scrutiny to the growing role of artificial intelligence in recruiting.

Eightfold uses millions of data points to assess whether a candidate is likely to succeed on the job, but according to the complaint, it does not allow applicants to see, correct or dispute the accuracy of the information used to evaluate them, a practice the complaint claims violates the law. The suit seeks to uncover exactly what data Eightfold uses and how its algorithm determines an applicant's job compatibility.

What Is the Eightfold AI Lawsuit About?

The lawsuit against Eightfold AI alleges the company uses data from LinkedIn, job boards and applicants' internet activity to assess applicants' likelihood of succeeding in a role. Plaintiffs claim this violates the Fair Credit Reporting Act because applicants cannot access or correct the information used to evaluate them.

This lawsuit has caught the attention of recruiters everywhere, as more talent acquisition teams are turning to AI to sort through an unmanageable number of AI-optimized resumes. More than half of employers use artificial intelligence in their recruiting efforts, according to a Society for Human Resource Management study, and 93 percent of recruiters surveyed by LinkedIn say they plan to increase their usage this year.

If the lawsuit is successful, it could pressure companies to be more transparent about their data collection practices, as well as encourage recruiters to be more cautious in their AI usage. For job seekers, it could bring more visibility to a hiring process that feels increasingly opaque, impersonal and consequential to their livelihood.

 

First, How Is AI Used in the Hiring Process?

AI is used at nearly every stage of the hiring process, from sourcing candidates to analyzing their resumes. Job seekers often blame these tools — particularly AI-powered applicant tracking systems (ATS) — for rejecting them, sometimes within minutes of submitting an application, or in the middle of the night when a recruiter is unlikely to be working.

Recruiters, for their part, claim that ATS platforms don't have the authority to automatically remove a candidate from consideration, and they often downplay their use of applicant-ranking features. At the same time, AI has enabled job seekers to churn out optimized resumes faster than ever before, forcing recruiters to increasingly turn to AI to help sort through the hundreds of applications they may receive for a single role.

Whether their application is auto-rejected, pushed to the bottom of the pile or simply ignored, many candidates suspect AI is to blame. In a survey of more than 1,000 job seekers, employment platform Monster found that 77 percent worry their resume is being filtered out before it ever reaches a human. 

This sentiment is at the heart of the Eightfold lawsuit, which was filed on behalf of two California job seekers who have applied unsuccessfully to numerous jobs at Microsoft, PayPal and other companies that use Eightfold's software.

“I’ve applied to hundreds of jobs, but it feels like an unseen force is stopping me from being fairly considered,” one of the plaintiffs, Erin Kistler, said in a press release. “It’s disheartening, and I know I’m not alone in feeling this way.”

Related Reading: These ChatGPT Prompts Will Fast-Track Your Job Search

 

What the Eightfold Lawsuit Alleges

The lawsuit claims that Eightfold collects data on job applicants without their knowledge or consent, and without giving them the chance to review or revise the information. Eightfold has said that its “proprietary global data set is the world’s largest, self-refreshing source of talent data,” encompassing the profiles of more than 1 billion workers. This includes data from career sites, job boards and LinkedIn, according to the company.

The lawsuit, which was filed by Outten & Golden and nonprofit law firm Towards Justice, argues that the data collection and scoring practices used by Eightfold are functionally equivalent to conducting employment background checks, which are regulated by the federal Fair Credit Reporting Act. Under this law, any company that compiles these reports must share them with the individual and allow any inaccuracies to be challenged.

Although AI-powered hiring software did not exist when Congress passed the Fair Credit Reporting Act in 1970, the lawsuit claims that Eightfold's technology raises the same risks the law was designed to prevent. In fact, in 2024, the Consumer Financial Protection Bureau specifically stated that employment background dossiers and algorithmic scores used in hiring decisions were subject to the Fair Credit Reporting Act. That guidance was rescinded last year under President Donald Trump's administration, but the lawsuit asserts that hiring assessments like Eightfold's should still be required to comply.

“Qualified workers across the country are being denied job opportunities based on automated assessments they have never seen and cannot correct,” Jenny R. Yang, partner at Outten & Golden LLP and former Chair of the U.S. Equal Employment Opportunity Commission, said in a press release. “These are the very real harms Congress sought to prevent when it enacted the FCRA. As hiring tools evolve, AI companies like Eightfold must comply with these common-sense legal safeguards meant to protect everyday Americans.”

If the plaintiffs win, Eightfold would have to come into compliance with the Fair Credit Reporting Act and give applicants the opportunity to dispute or correct the data used to evaluate them. But Pauline Kim, an employment law professor at the Washington University School of Law, told Tech Brew that the law is limited in scope and would “provide only limited transparency — likely not enough to ensure the fairness of these systems.”

Related Reading: How AI Changed the Job Market in 2025

 

AI’s Potential for Hiring Bias

Job seekers are calling for protections against AI hiring tools like Eightfold over concerns that these systems can make unfair hiring decisions. Eightfold, like most AI tools, does not publish its specific data sources, nor does it reveal the inner workings of its proprietary algorithm. Because this technology largely exists in a so-called "black box," it's nearly impossible to figure out how it reaches its conclusions or whether those judgments are rooted in bias.

Discrimination is a major concern, as this technology has the potential to amplify biases in its training data or introduce new ones through a poorly designed algorithm. For example, in a 2024 study of three large language models, researchers at the University of Washington found that the LLMs preferred white-associated names 85 percent of the time and Black-associated names just 9 percent of the time. The models also preferred male-associated names 52 percent of the time and female-associated names only 11 percent of the time.

Eightfold is just one of many AI hiring platforms that have been challenged in recent years. In 2019, a public interest group filed a complaint with the Federal Trade Commission alleging HireVue discriminated against applicants with “opaque algorithms” and face-scanning technology. The company has since stopped using facial recognition. More recently, Workday was sued on allegations that its algorithm discriminates against applicants older than 40. That case is ongoing.

Existing employment laws, like Title VII of the Civil Rights Act of 1964, prohibit discrimination regardless of the technology used, but several states and cities have taken additional measures to address artificial intelligence specifically. Under New York City Local Law 144, employers who use AI to make hiring decisions must publish an annual audit of the tools they use and notify candidates of their use. A Colorado law requires AI developers to “use reasonable care” to protect consumers from “reasonably foreseeable risks” of algorithmic discrimination. Meanwhile, Illinois law bars employers from using AI in ways that discriminate against job candidates, and mandates that applicants be notified when AI is involved in hiring decisions.

Related Reading: Is AI Actually Killing Jobs? These Senators Want Answers.

 

What Does Eightfold Say?

Eightfold says its software takes measures to mitigate bias, such as masking certain identifying information and flagging potential biases. 

Furthermore, the company has touted its ability to improve diversity in the hiring process by evaluating a candidate’s skills and potential for a role — an inclusive practice known as skills-based hiring. Instead of matching keywords like job titles and job descriptions, Eightfold claims its algorithm infers how the abilities of underrepresented candidates could be relevant to a different role. 

As for the lawsuit specifically, Eightfold contends it mischaracterizes the company’s data collection practices.

“Eightfold does not ‘lurk’ or scrape personal web history, social media or the like to build ‘secret dossiers,’” a company spokesperson told HR Brew. “Eightfold’s platform operates on data submitted by candidates to our customers or provided by our customers.”

Frequently Asked Questions

What is Eightfold AI?

Eightfold is an AI-powered hiring platform that analyzes millions of data points to rank applicants' likelihood of being successful in a position. Used by companies like Microsoft, PayPal and Morgan Stanley, Eightfold says its proprietary data set includes the profiles of more than 1 billion workers.

What data does Eightfold collect on job applicants?

According to the lawsuit and Eightfold's privacy policy, the platform collects data from public sources like career sites, job boards, resume databases and social profiles including LinkedIn. The lawsuit also alleges Eightfold collects applicants' location data, internet and device activity, cookies and other tracking data. Eightfold disputes these characterizations, saying its platform operates on data submitted by candidates or provided by customers.

 

What laws regulate the use of AI in hiring?

Existing employment laws, like Title VII of the Civil Rights Act of 1964, prohibit discrimination regardless of the technology used. Several states and cities have passed additional measures: New York City Local Law 144 requires employers to publish annual audits of AI hiring tools and notify candidates they're being used. Colorado requires AI developers to “use reasonable care” to protect against algorithmic discrimination. Illinois prohibits employers from using AI to make discriminatory hiring decisions and requires notification when AI is used.
