Why Are Employees Pushing Back on Workplace AI?

Pushback against AI is often chalked up to a fear of change or new technology, but employees are also raising valid concerns that need to be understood and taken seriously.

Written by Jeff Rumage
Published on Nov. 26, 2025
Reviewed by Ellen Glover | Nov 24, 2025
Summary: Employees are resisting using AI in the workplace, driven by fears of job loss, distrust of accuracy, environmental concerns and worries about skill erosion. Experts say leaders can ease tensions with clear goals, realistic use cases and a culture of experimentation and open dialogue.

The introduction of generative AI in the workplace has sparked a tension between leaders who want to capitalize on productivity gains and employees who have — for one reason or another — chosen not to adopt these tools. While some workers have used them performatively or struggled to find an application for them, others have been more vocal about their outright rejection of artificial intelligence.

In fact, 64 percent of American adults plan to avoid using AI for “as long as possible,” according to a recent Gallup report. One study found 31 percent of employees (and 41 percent of Gen Z workers specifically) are actively working against their company’s AI initiatives. And 45 percent of CEOs say most of their employees are resistant or openly hostile to AI, according to another study.

Why Do Employees Resist Using AI at Work?

Employees may resist AI due to fears of job loss, concerns about accuracy, environmental considerations or simply uncertainty about how the technology fits into their workflow.

AI resisters have been making headlines recently, sharing their concerns about the technology’s many limitations, the inefficiencies caused by its errors and the risk of becoming overly reliant on it. Still, many workers remain reluctant to share their concerns publicly, believing they will be labeled as obstructionists who fear change and new technology.

Ironically, several of the people interviewed in these articles come from a technical background. One resister, a 36-year-old software engineer, told The Washington Post that he was afraid of being labeled a Luddite for warning about AI exposing sensitive data, the environmental impact of data centers and the time it takes to correct all of AI’s inaccuracies. Many of these fears should be shared by workers and executives alike, but they are often drowned out by all the excitement around AI’s potential for productivity, efficiency and innovation — and an ongoing anxiety over being left behind in the new AI-optimized economy.

“Resistance is a signal. It is information about the ways in which a product rollout, procurement process, or mandate is failing to address or even account for lived realities,” Bob Hutchins, founder and CEO of Human Voice Media, told Built In. “If a frontline worker who uses their hands for a living tells you that a tool they were asked to use makes their work worse, they may be right. The least you can do is hear them out before you challenge them.”

Related Reading: Anti-AI Explained: Why Resistance to Artificial Intelligence Is Growing


Why Workers Resist Using AI

As artificial intelligence plays a greater role in our lives, resistance to it has increased in both scope and scale. Half of the roughly 5,000 respondents in a 2025 Pew Research study were more concerned than excited about AI — a 13-point increase from just four years ago. Those who said AI carries a high societal risk were most worried about people losing the ability to think creatively, form meaningful relationships, solve problems and make difficult decisions. Respondents were also concerned about the impact AI-generated inaccuracies or disinformation could have on society, and whether researchers would lose the ability to control AI’s evolution.

These are just a few of the many reasons why people might resist or oppose AI on a societal level, but there are numerous other concerns that shape how they choose to use (or not use) it in their day-to-day lives. Below, we break down some of the largest drivers of AI resistance in the workplace specifically.

AI Could Eliminate Jobs

One of the most commonly cited reasons for resisting AI is job loss — people are worried about AI replacing them. By integrating artificial intelligence into their workflow, workers believe they could be training it to take over their duties. And if a higher-up sees that these systems can perform those duties successfully, they may employ fewer people to oversee that work. By opting out of AI tools, workers hope managers and executives will realize why humans are still necessary.

Executives have warned that AI will replace certain jobs, concerning workers and lawmakers alike. Sixty-five percent of people are anxious about AI replacing their job, according to one study, and the availability of entry-level work has already dropped 13 percent since ChatGPT’s rollout in late 2022. 

A job is never just about a paycheck, though; it’s also tied up in how we see ourselves and our purpose. Experienced workers have spent years, if not decades, developing their skills, and they’ve developed a sense of pride in their craft. So it’s understandable why they might feel offended by the notion that AI could automate those same tasks. Even if AI doesn’t take their job, the very idea is enough to make workers question what purpose they serve in a labor market that no longer seems to value their human touch and ingenuity.

“The people most hesitant to adopt AI are the ones who have often spent upwards of a decade or two perfecting a knowledge domain that a generative tool now purports to have ‘mastered’,” Hutchins said. “Their fears of diminished accuracy, questions of originality, and potential for cognitive atrophy are not grounded in a theoretical possibility of that atrophy occurring, but in firsthand observation of colleagues gaming the system and leveraging copy-and-paste to replace substantive effort.”

AI’s Limitations Cause Inefficiencies

Another widespread complaint about AI tools is that they are limited in their abilities, and that workers find them ineffective, inefficient and a hindrance to their day-to-day work.

The most commonly cited limitation is AI’s propensity for generating inaccurate information. A chatbot may provide a confident answer to your question, but it may have also misinterpreted your prompt, pulled from questionable data sources or drawn false conclusions from training data. And if it doesn’t have accurate information, it may hallucinate, or make up information altogether. These inaccuracies should worry executives and employees alike: If AI makes an error associated with your company’s brand, it could lead to a PR disaster, a lawsuit or a loss of customers’ trust.

Worse yet, these limitations and errors may cause more work for employees downstream, nullifying any potential productivity gains. Software engineers told The Information that they spend more time debugging code than they would have spent writing the code themselves. And if those bugs aren’t caught, they can lead to bigger issues that will take even more time to correct. They say AI coding tools are also limited in that they are unable to develop complex algorithms or analyze code within the context of the larger code base.

Overreliance on AI Could Lead to Cognitive Deskilling

Yet another major concern is that handing off work to AI will weaken people’s ability to perform those tasks on their own, similar to a muscle atrophying from lack of exercise. This concept, known as cognitive deskilling, posits that skills like writing or decision-making will grow dull if they’re routinely outsourced to AI.

Indeed, Michael Gerlich, a professor at the Swiss Business School, found that those who offload cognitive tasks to artificial intelligence have a diminished ability to critically evaluate information, discern biases and engage in reflective reasoning. But he also told Big Think that, when used for brainstorming and innovation, AI can increase users’ critical thinking skills.

Deskilling seems particularly concerning for Gen Z college students and professionals who are hungry to learn new skills and prepare themselves for a future career. When Gen Z career platform Handshake asked college seniors why they didn’t use AI for a task, most respondents said they think it’s important to do it themselves. And 49 percent of Gen Z respondents in a Gallup study said they were worried about AI corroding their ability to think critically.

In an essay for The Atlantic, New York University professor Kwame Anthony Appiah argued AI can help workers focus less on grunt work and develop higher-level skills. Instead of writing code, for example, software developers can spend more time checking for logic errors and catching edge cases. Or, instead of crunching numbers, accountants can focus on tax strategy and risk analysis. But he also warned that our relationship with AI could change the way we think, leading to shallower conversations, less room for ambiguity and the “quiet substitution of fluency for understanding.”

AI Usage Has a High Environmental Cost

As if inaccurate information, widespread job loss and the erosion of human skills weren’t enough, many are frustrated to learn that it’s all happening at the environment’s expense.

Data centers already consume 1 to 2 percent of the world’s electricity, and some predict this demand could grow by as much as 160 percent by 2030. Researchers at Cornell University estimate that, by 2030, AI data centers in the U.S. could consume between 731 million and 1.1 billion cubic meters of water per year, equaling the annual household water usage of 6 to 10 million Americans. In that same time period, they could also emit 24 to 44 million metric tons of carbon dioxide into the atmosphere, the equivalent of adding 5 to 10 million cars to U.S. roadways.

For employees who care about sustainability, being part of a process that contributes to such large-scale environmental damage can make them feel conflicted and hesitant to fully embrace AI.

Related Reading: We’re Building AI Data Centers In the Wrong Spots — These States Make More Sense


How Leaders Can Address AI Resistance

While companies may be eager to boost productivity and innovation with AI, simply forcing employees to adopt it will likely end in failure. After all, workers’ wariness isn’t irrational, as evidenced by an MIT Media Lab study revealing 95 percent of AI pilot programs fail to produce measurable cost savings or profit increases. The successful 5 percent can be attributed to strong leadership that understands what employees need in order to experiment, adapt and eventually trust the technology.

Below are some ways leaders can address employees’ resistance to AI — and build a workforce that’s ready to use it with confidence.

Be Clear About the Vision — and the ‘Why’

When a new technology is introduced without clear guidance — whether through vague messaging or rushed rollouts — employees will inevitably feel hesitant and confused, which may come off as pushback. Artificial intelligence is no different. 

“A lot of things are getting labeled as employee resistance right now, but when you dig beneath the surface of that a little bit, it's often not actually resistance,” Heather Conklin, CEO of leadership coaching firm Torch, told Built In. “People are getting very unclear communications about what even to do with AI, let alone the ultimate goal the company is working toward.”

To avoid this, leaders should provide a detailed strategy outlining the intended purpose of AI, the specific problems it’s meant to solve and how it will connect to the broader business goals. They should also be transparent about when, how and why employees are expected to use it. 

Don’t Force AI Where It Doesn’t Fit

Automation won’t improve every task, and pretending otherwise will only heighten resistance. Still, many companies fall into what Conklin calls “check-the-box” thinking, requiring employees to use AI even when it might not make sense to do so. Instead, leaders need to take a clear-eyed look at where AI actually helps and where it doesn’t, and implement the technology accordingly. That transparency will go a long way in getting employees on board.

“Before you mandate a tool, be honest about whether it will actually add value and acknowledge the places where it will not,” Hutchins said. “If adopting AI means adding time for an additional review step, add that to the equation. Don’t claim the productivity boost when it’s an illusion.”

Build a Culture Rooted in Experimentation

Real progress with AI doesn’t come from top-down mandates; it comes from innovation and experimentation, where employees can try things out and see what actually works.

“You’re not trying to build a culture of adoption of AI,” Nick Kennedy, a partner at tech consulting firm West Monroe, told Built In. “You’re trying to build a culture of innovation. You’re trying to build a culture of continuous improvement. You’re trying to build a culture of testing and learning.”

For that to happen, employees need to feel safe asking questions, testing ideas and admitting when something isn’t working. When teams feel free to experiment without the pressure to be perfect, it becomes much easier to spot AI use cases that genuinely help and drop the ones that don’t. And when employees feel like they’re the ones shaping the new workflows, AI stops feeling like a threat and becomes a tool they can actively influence and improve, leaving them “empowered, not endangered,” as HR industry analyst Josh Bersin, founder and CEO of The Josh Bersin Company, put it.

“Invite your people into the process. Let them know you want their ideas — that you want to co-design the future together,” Bersin told Built In. “The priority should be communication, education, and giving employees real agency in the AI transition — not replacing them, but involving them. That reduces fear and taps into people’s natural drive to learn and adapt.”

Be Open-Minded About Employees’ Concerns

When employees inevitably raise concerns about AI causing inaccuracies, inefficiencies or unoriginality, leaders should lean in and learn more about the experiences behind those concerns. Dismissing them, Conklin warned, means leaders not only miss out on valuable information, but also “lose the ability to have those open conversations that would lead to real value creation with AI.”

As much as possible, leaders should break down the problems employees are facing, consider the goals they’re trying to achieve and assess whether AI is an appropriate tool for the job. Of course, not every issue can be solved — a single organization cannot fix AI’s carbon footprint, for example — but acknowledging challenges and taking steps to address them shows employees their concerns matter.

At the end of the day, AI adoption is rarely a smooth, linear process — it’s a period of uncertainty, adjustment and learning for everyone. Employees don’t need blind enthusiasm or pressure; they need honesty, clear expectations and guidance as they adjust to these new ways of working.

“Change is coming with AI — maybe not always at the scale we predict, but real change nonetheless. We need to be honest about that,” Bersin said. “At the same time, it's important to strike a balance: clear performance expectations paired with meaningful support. A certain amount of productive discomfort can be healthy — it’s often what drives learning, adaptation and growth.”

By acknowledging this tension and trying to balance it out with support, leaders have the power to turn AI into something employees can actually feel excited about, not something they fear or resist. 

Frequently Asked Questions

What are the risks of using AI in the workplace?

Risks include low-quality outputs, inefficiencies if AI is misapplied and employee pushback. AI also has a significant environmental footprint due to the energy and water consumption and carbon emissions of data centers.

Does workplace AI always deliver value?

Not always. In fact, research shows most AI pilot programs fail to generate measurable value.

How can leaders increase AI adoption?

Leaders can increase AI adoption by communicating clearly with employees, involving them in designing AI workflows, providing training and support, and being honest about where AI adds value — and where it doesn’t.
