What Workplaces Miss When They Panic About AI Tools

When businesses panic about the way their employees use AI, they miss valuable signals about what people actually want.

Written by Marie Achour
Published on Jul. 09, 2025
Image: A man in a suit, his head covered by a dark cloud labeled “AI” (Shutterstock / Built In)
Summary: Some employees now use AI to complete workplace training, revealing not laziness but a mismatch in learning design. When integrated intentionally, AI can boost engagement, critical thinking and equity in learning — turning compliance tasks into opportunities for collaboration and growth.

A few months ago, our team at Moodle ran a survey exploring how people are using AI at work. The headline that grabbed attention? Some employees (20 percent of those surveyed) were using AI to complete mandatory workplace training. Depending on where you sit, that sounds either wildly efficient or slightly alarming.

Plenty of outlets chose the second. And while I share the concern, I also see what this behavior demonstrates: an underlying curiosity, a willingness to experiment and an appetite for new tools.

Yes, there are pitfalls when AI is introduced into daily workflows carelessly — especially in areas like training, communication and decision-making. But if we approach AI with the right mix of transparency, autonomy and design, it becomes a catalyst for learning, not a threat to it.


The Trouble With Traditional Training

Training is a staple of professional development, but its delivery hasn’t evolved much. In our research, 80 percent of employees had taken part in mandatory training, but just 44 percent fully engaged. Some muted the videos. Others sped through. And some had AI assist them in answering questions.

These behaviors may read like resistance, but they actually highlight a failure of design. People aren’t avoiding learning. They’re avoiding systems that don’t respect their time, knowledge level or preferred way of working.


Let’s Talk About AI and Training

Rather than seeing AI as a cheat code, we should be asking why employees are reaching for these tools in the first place. What does it tell us about how they want to learn?

If workers are using AI to summarize dense training modules or find quick answers to compliance quizzes, that’s not laziness — it’s a signal that our current systems aren’t matching how people engage, retain or apply information today.

This is a design problem. And we can solve it.

What if training programs didn’t just allow AI use, but taught people how to use it well?

Imagine a workplace module that includes AI-assisted research, but also asks the learner to critique the result. Or a session that uses an AI-generated scenario as a starting point for team discussion: What’s missing from this output? How would you improve it? What assumptions is the AI making?

You could build in explainability moments. Have learners pause a session to explore how an AI tool reached its conclusion or flag when the model might be confidently wrong. These are small shifts, but they change the role of the employee from passive consumer to critical evaluator.

In peer-based exercises, teams might generate AI-assisted summaries of a module, then compare and refine their drafts together. Or they could use generative AI to draft a new compliance policy and then critique each other’s versions for tone, accuracy and risk.

Another option: give learners AI-generated FAQs and ask them to identify which responses are helpful, which are misleading and which lack context. These kinds of activities don’t just build technical discernment. They encourage reflection, spark conversation and surface assumptions that might otherwise go unspoken.

Transparency is still crucial. People should always know when AI is being used and where their data goes. But just as important is making space for reflection and response. That’s what builds real literacy.


Human Connection Is Still the Killer App

AI can generate content and automate tasks. But it can’t replace what makes learning stick: relevance, reflection and relationships.

The best workplace learning environments, AI-powered or not, are the ones where people feel seen and supported. That means creating space for conversation: between team members, between the learner and the content, and yes, even between humans and machines.

It also means designing experiences that invite collaboration, not just between colleagues, but across modes of thinking. One person might use AI to kickstart an idea, while another builds on it, questions it or adds lived experience the model could never access. That interplay is where true learning happens.

The real question isn’t whether employees should use AI in training. It’s how we can help them use it more critically and effectively.


Designing for Digital Fluency — With an Equity Lens

Supporting AI literacy in the workplace isn’t just about offering tool tips or best practices. It means helping people build real fluency: understanding how AI systems work, recognizing where they fall short and learning to evaluate their output with a healthy mix of skepticism, curiosity and attention to equity.

3 Ways to Ensure Equity in AI Programs

  • Bias audits.
  • Content checks.
  • Inclusive feedback loops.

Bias in AI isn’t abstract. Algorithms frequently reflect historical and systemic inequities, from facial recognition that misidentifies darker-skinned individuals to language models that echo stereotype-laden data. Teaching people to use AI responsibly means training them to spot those biases. That could look like any of the following.

Bias Audits

Ask learners to test AI-generated content for stereotype-driven assumptions, possibly by feeding the model profiles from diverse backgrounds and spotting inconsistent responses.

Content Checks

Use AI tools to scan training materials for exclusive language or lack of representation and then workshop revisions as a group.

Inclusive Feedback Loops

After sharing AI outputs, invite colleagues with different perspectives to reflect on whether the content feels fair, complete and culturally aware.

Embedding these activities into learning not only builds digital and critical fluency, it also fosters a culture of inclusion and accountability. Those who learn to question an AI output are better prepared to question their own assumptions and to advocate for equity in technology and beyond.


Moving From Experimentation to Intention

The fact that people are already using AI to navigate workplace learning isn’t a failure. It’s a signal that they’re curious, resourceful and ready to engage on their own terms. As we’ve seen, this opens up enormous potential: not just to improve training outcomes, but to build the kinds of critical, inclusive and reflective habits that modern workplaces need.

Now it’s on us to move from experimentation to intention. We need to design AI-integrated learning that’s transparent, context-rich and grounded in digital fluency. Let’s stop designing for compliance alone and start designing learning that drives curiosity, collaboration and smarter conversations between humans and AI.
