The UK Is Training 10 Million People to Use AI. That’s the Problem.

Instead of pouring money into training for AI tools, what if we built interfaces that didn’t require training in the first place?

Written by Tanya Donska
Published on Feb. 19, 2026
REVIEWED BY
Seth Wilson | Feb 18, 2026
Summary: The UK is training 10M workers in AI, but the problem isn’t literacy — it’s bad design. With only 21 percent of workers feeling confident, the $20 monthly cost for tools like GPT-5 often buys frustration. True progress requires intuitive interfaces, not 20-minute courses for poor UX.

The UK is training 10 million workers to use AI. Free courses, 20 minutes each, virtual badge when you’re done. The government is partnering with Microsoft, Google and the NHS. The goal is to make Britain the fastest AI-adopting country in the G7. 

They’re teaching workers how to use ChatGPT to draft text and how to use AI for admin tasks.

What they’re not covering, however, is why these tools need such training in the first place.

Only 21 percent of UK workers feel confident using AI, according to the government’s own research. That’s not a training problem. That’s a design problem. Users try the tools, struggle with the interface and give up. Then companies blame “resistance to change” instead of their design.

You can’t train your way out of bad UX. Apparently, though, you can spend millions trying.

Why Is AI Adoption Stalling Despite Increased Training?

AI adoption is failing primarily due to poor user experience (UX) rather than a lack of worker capability. While governments and companies invest millions in so-called “AI literacy” courses, research shows that most users give up on AI tools because of confusing interfaces and cluttered designs. When a product requires a training manual to be usable, it is a sign of design failure, not a lack of user skill.

More on AI + Design: The Next Revolution in AI Design Won’t Be an Interface

 

Why AI Adoption Is Actually Failing

The bottleneck isn’t capabilities. It’s UX.

GPT-5 launched in August 2025. Although the new model is technically more capable than GPT-4, it’s also harder to use. Was that progress?

The interface didn’t evolve with the capabilities. Power users were frustrated while new ones were overwhelmed.

We can see the same pattern everywhere. Voice AI can’t understand basic requests. AI copilots interrupt more than they help. Useful features get buried where nobody finds them. Every company that’s building AI is focused on making it smarter. Unfortunately, nobody’s making it usable.

Users try an AI tool. They struggle to figure out the interface for five minutes. Eventually, they just give up. 

On the company side, the product team sees low adoption. The team chalks the problem up to “resistance to change” on the users’ part. It builds an ineffective training program. Nobody ever fixes the interface.

Product teams ship new AI features without considering if users can find them. Engineering leads choose AI tools their teams won’t use. Companies roll out AI internally, all the while wondering why adoption stays at 15 percent. Then they spend more time and resources building courses to explain what good design would have made obvious.

 

Training Is a Band-Aid for Design Failure

When products need training programs, that’s design failure.

Imagine needing a 20-minute course to use Google Search. Embarrassing. But we’ve normalized this state of affairs for AI tools. Products ship with interfaces nobody can figure out. Then companies build training programs instead of fixing the interfaces.

Deutsche Telekom had an internal data hub. Multiple national companies used it. In fact, its use was mandatory. Despite that mandate, the adoption rate was 25 percent. Excellent mandate.

Teams were filing tickets to get data manually instead of using the tool. IT was running those manual queries. Project decisions were delayed by days. Nobody trusted the platform they’d spent millions building.

The problem wasn’t capability. It was the interface. Data scientists couldn’t find basic features. Critical reports were buried three levels deep. So teams gave up and filed tickets instead.

We rebuilt the navigation and brought critical features to the surface. The most-used reports were now one click away. Adoption hit 68 percent in three months. No training required.
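
The core of that fix is almost embarrassingly simple. Here’s a minimal sketch of the “most-used reports, one click away” logic, with invented report names and an invented access log; the actual platform’s code isn’t public.

```python
# Sketch: promote the most-accessed reports to the top navigation level.
# Report names and the access log are invented for illustration.
from collections import Counter

access_log = [
    "capacity_report", "churn_report", "capacity_report",
    "sla_report", "capacity_report", "churn_report",
]

TOP_LEVEL_SLOTS = 3  # how many reports fit in the top-level nav

# Count accesses, then pin the heaviest-used reports one click away.
usage = Counter(access_log)
top_level_nav = [name for name, _ in usage.most_common(TOP_LEVEL_SLOTS)]

print(top_level_nav)  # ['capacity_report', 'churn_report', 'sla_report']
```

No machine learning, no new capability. Just putting what people use where people look.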

Good design makes training unnecessary. Conversely, training makes bad design expensive.

 

You’re Normalizing Bad UX at Scale

Train 10 million people to accept bad UX, and all you’ve taught them is that AI tools are supposed to be confusing. Now, nobody expects better.

Product teams have no incentive to improve. The excuse that “users just need more training” becomes acceptable. The bar drops for the entire industry. We’ve started a race to the bottom, and someone will win it.

The cycle that emerges is this: 

  • Ship a confusing AI product. 
  • Users struggle to use it. 
  • Build a training program to help users. 
  • Users learn to work around bad design. 
  • Never fix the design. 
  • Repeat with the next feature.

Every hour teaching people to struggle through bad interfaces is an hour not spent making interfaces that don’t need explanation. Training costs repeat. Design fixes don’t. That budget isn’t really spent on training. It’s the recurring cost of an unfixed design problem.
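
To make that concrete, here’s a back-of-the-envelope sketch. Every figure is an assumption invented for illustration; neither the programme’s per-course cost nor any redesign budget is public.

```python
# Back-of-the-envelope: recurring training spend vs. a one-time design fix.
# Every figure below is assumed for illustration; none is published data.

WORKERS = 10_000_000             # the programme's stated audience
COST_PER_COURSE_GBP = 5          # assumed cost to deliver one 20-minute course
COURSES_PER_YEAR = 1             # assumed refresh rate as tools keep changing
YEARS = 3

ONE_TIME_DESIGN_FIX_GBP = 2_000_000  # assumed UX redesign budget

training_spend = WORKERS * COST_PER_COURSE_GBP * COURSES_PER_YEAR * YEARS
print(f"Training over {YEARS} years: £{training_spend:,}")          # £150,000,000
print(f"One-time design fix:        £{ONE_TIME_DESIGN_FIX_GBP:,}")  # £2,000,000
```

Change the assumptions however you like; the shape of the result doesn’t change. One cost compounds every year the tools stay confusing. The other is paid once.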

Remember 300-page software manuals? We evolved past that. Except now, with AI, we’re regressing. Calling the same process “AI literacy” doesn’t make it less embarrassing.

 

The Gap Everyone’s Ignoring

Every country is training AI engineers. Everyone is competing on capabilities, chasing speed benchmarks and accuracy metrics, seeing who can build the most powerful models.

Nobody is competing on usability: AI products humans can trust, interfaces users can understand, workflows that feel natural.

Engineers know how to build AI that works. Nobody knows how to build AI that feels like it works. Technical capability means nothing if users can’t access it.

ChatGPT’s interface hasn’t fundamentally changed since launch despite massive capability improvements. Voice AI loops endlessly when it doesn’t understand the user. Copilots guess wrong and force users to fix mistakes. Every AI tool is optimized for demo, not daily use.

The UK should be funding research on making AI tools that don’t need courses, not courses on using confusing tools.

 

Real Examples of AI Design Confusion

GPT-5 is more powerful than GPT-4. Yet it still has the same cluttered interface, no effective way to organize conversations and a search feature that doesn’t find what you asked it to remember. Users pay $20 per month to be frustrated by their own chat history.

Voice AI can transcribe anything. Then it keeps listening. You finish your thought and wait for it to process. It’s still listening. You say “um” while thinking. It adds “um” to your transcript. It makes you speak like you’re dictating to a machine that can’t tell when you’re done thinking versus when you’re done talking. It’s training users to accommodate bad design.
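
That failure has a simple technical shape. Most voice interfaces end your turn after a fixed window of silence. The sketch below uses a hypothetical cutoff, not any vendor’s real value, to show why a single number can’t tell a thinking pause from a finished thought.

```python
# Naive voice-AI turn detection: end the turn after a fixed silence window.
# The 0.8-second cutoff is hypothetical, chosen only to illustrate the trap.

SILENCE_CUTOFF_SECS = 0.8

def turn_is_over(silence_secs: float) -> bool:
    """A thinking pause and a finished sentence look identical here."""
    return silence_secs >= SILENCE_CUTOFF_SECS

# You finished talking but only paused 0.5s: it's still listening.
print(turn_is_over(0.5))   # False

# You paused 1.2s mid-thought: it cuts you off and starts processing.
print(turn_is_over(1.2))   # True
```

Whatever cutoff you pick, one of those two failures survives. That’s a design constraint the interface should absorb, not one the user should be trained around.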

AI copilots promise to help users. You’re writing a function. Copilot suggests code you didn’t ask for and interrupts your flow. You ignore it and keep typing. It suggests again. You stop, verify the suggestion. Wrong. You fix it yourself. Now you’re slower than before you had help. That’s the kind of help it offers.
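
The standard design remedy is a debounce: hold suggestions back until the user actually pauses, instead of interrupting mid-keystroke. Here’s a generic sketch with an assumed pause threshold; it isn’t how any particular copilot works.

```python
# Sketch: only surface a copilot suggestion after the user stops typing.
# The 1.5-second pause threshold is an assumption, not any product's value.
import time

PAUSE_BEFORE_SUGGESTING_SECS = 1.5

class SuggestionGate:
    def __init__(self) -> None:
        self.last_keystroke = time.monotonic()

    def on_keystroke(self) -> None:
        self.last_keystroke = time.monotonic()  # typing resets the timer

    def may_suggest(self) -> bool:
        idle = time.monotonic() - self.last_keystroke
        return idle >= PAUSE_BEFORE_SUGGESTING_SECS  # suggest only in a pause

gate = SuggestionGate()
gate.on_keystroke()        # user is mid-flow: stay quiet
print(gate.may_suggest())  # False immediately after a keystroke
```

It’s a decades-old interaction pattern. The point isn’t that it’s hard; it’s that shipping without it, then training users to tolerate the interruptions, is a choice.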

But everyone uses them because we’re chasing “AI adoption.”

These aren’t edge cases. These are the products the UK is spending millions to teach people to struggle through.

Do You Really Need AI? 3 Hard Questions You Should Ask Before Adding AI to Your Product

 

The Paradox of Training Around Bad Design

I hold a UK Global Talent visa in software design, a route the UK government uses to attract people it certifies as exceptional talent in their field. That same government is now spending millions training people to struggle through products that shouldn’t require training.

Training 10 million people is easier than fixing the interfaces. That’s a choice. The wrong choice, but a choice nonetheless.

The actual solution: stop treating UX as an afterthought. Build AI products with superior usability from the start. Test with actual users, not demos. Make “understandable” a requirement, not a nice-to-have. Measure success by how little explanation users require.
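
As a sketch of that last metric, with invented session data: track the share of users who complete a core task without ever opening help or training material.

```python
# Sketch: measure "how little explanation users require."
# Session events are invented; wire this to your own analytics.

sessions = [
    {"completed_core_task": True,  "opened_help": False},
    {"completed_core_task": True,  "opened_help": True},
    {"completed_core_task": False, "opened_help": True},
]

unaided = sum(
    1 for s in sessions if s["completed_core_task"] and not s["opened_help"]
)
print(f"Unaided success rate: {unaided / len(sessions):.0%}")  # 33%
```

If that number only moves when you publish another course, you’re funding the workaround, not the fix.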

Every other country will make the same mistake. They’ll train their workers, build comprehensive AI literacy programs, spend millions and wonder why adoption stays low.

Whoever fixes the UX will win. Not because they have better AI, but because they have AI people can actually use.

That’s not an AI problem. That’s a design problem. And you can’t train your way out of it.
