If You’re Still Hiring for AI Fluency, You’re Setting Yourself Up to Fail

Too many organizations have the AI skills gap backwards, focusing on the output layer. Most AI projects that fail go astray because of bad data, and that’s where attention belongs.

Written by Ankush Rastogi
Published on Apr. 16, 2026
A flathead screwdriver next to a Phillips-head screw
Image: Shutterstock / Built In
REVIEWED BY
Seth Wilson | Apr 16, 2026
Summary: While most attention focuses on prompt engineering, a critical data skills gap is stalling AI adoption. Organizations overlook the infrastructure needed for reliable data, leading to failed projects. Success requires shifting investment toward data engineering and foundational fluency, not just interfaces.

Spend 10 minutes on LinkedIn and the AI upskilling narrative becomes impossible to miss. Prompt engineering courses with hundreds of thousands of enrollments. Corporate training programs rebranding themselves around ChatGPT fluency. Job postings for “AI Literacy Specialists” at companies that have never shipped a production AI system. The message is uniform: The future belongs to people who know how to talk to AI.

This is a compelling story. It is also pointing organizations in the wrong direction.

The skills gap that is actually stalling enterprise AI adoption, the one that explains why so many AI investments underperform their projections, has almost nothing to do with how well employees can prompt a language model. It is a data problem. And until that realization makes its way into how companies hire, train and invest, the gap will keep widening.

Why Do Most Enterprise AI Projects Fail?

The real AI skills gap isn’t about prompt engineering; it is a data infrastructure problem. While organizations focus on AI literacy, Gartner predicts that through 2026 they will abandon 60 percent of AI projects that lack AI-ready data, and IBM’s adoption research points to data quality as a primary barrier. To succeed, companies must shift investment toward:

  • Data Engineering: Building resilient pipelines and reliable infrastructure.

  • Data Quality: Ensuring inputs are accurate, timely, and well-governed.

  • Upstream Fluency: Training teams to understand data origins rather than just model interfaces.

More From Ankush Rastogi: How Much Could Replacing Human Expertise With AI Cost You?

 

The Prompting Obsession and What It Misses

The focus on AI-facing skills makes intuitive sense. Generative AI tools are visible, accessible and easy to demonstrate. A worker who uses AI to draft faster, summarize better or research more efficiently is producing output that is immediately legible to a manager. The skill feels concrete and teachable, and the productivity gains are real.

But these gains represent the output layer of AI, the interface where humans and models interact. Most organizations have treated this layer as the whole problem. What they have systematically underinvested in is everything underneath it: the infrastructure that determines whether an AI system receives reliable, well-structured data; the pipelines that move that data from source systems to models; and the monitoring that detects when those models quietly start behaving in ways nobody intended.

Gartner has warned that many organizations still lack the data infrastructure required to scale AI. In fact, the firm predicts that through 2026, organizations will abandon 60 percent of AI projects that are not supported by AI-ready data. IBM’s Global AI Adoption Index consistently lists data complexity and data quality among the primary barriers to enterprise AI deployment, alongside skills gaps and governance challenges. These are not model problems or prompting problems. They are data infrastructure problems. And they require a fundamentally different set of skills to solve.

 

AI Projects Fail Where Nobody Is Looking

When an enterprise AI deployment underperforms, the post-mortem rarely concludes that employees needed better prompt training. The failure is frequently upstream: data from different systems that doesn’t reconcile. Pipelines that deliver stale or incomplete records to a model that was built assuming freshness. Features engineered on historical data that no longer reflect how the business actually operates. A model deployed into production with no instrumentation to detect when its behavior has drifted.
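The staleness and completeness failures described above are exactly the kind of thing a pipeline can check before records ever reach a model. The sketch below is illustrative, not any particular vendor's tool; the field names, the 24-hour freshness window, and the `validate_record` helper are all assumptions chosen for the example.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical pre-inference guard: flag records that are stale or
# incomplete instead of silently feeding them to a model that was
# built assuming fresh, complete inputs.
MAX_AGE = timedelta(hours=24)          # assumed freshness requirement
REQUIRED_FIELDS = {"customer_id", "balance", "last_activity_at"}

def validate_record(record: dict, now: datetime) -> list[str]:
    """Return a list of data-quality issues; empty means the record is usable."""
    issues = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        issues.append(f"missing fields: {sorted(missing)}")
    ts = record.get("last_activity_at")
    if ts is not None and now - ts > MAX_AGE:
        issues.append(f"stale: last updated {now - ts} ago")
    return issues

now = datetime.now(timezone.utc)
fresh = {"customer_id": 1, "balance": 10.0,
         "last_activity_at": now - timedelta(hours=2)}
stale = {"customer_id": 2,
         "last_activity_at": now - timedelta(days=3)}

print(validate_record(fresh, now))   # []
print(validate_record(stale, now))   # missing 'balance' and stale
```

Checks like this are trivial to write and rarely written, which is precisely the point: the skill shortage is in knowing that they belong in the pipeline at all.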

These failure modes are well documented in academic literature and industry research. McKinsey research consistently highlights poor data quality, siloed data and weak governance as leading contributors to AI initiatives that fail to scale beyond pilot stages. Ventana Research has similarly observed a gap between AI adoption and data readiness. In one analysis, only 20 percent of organizations reported high confidence in their underlying data analysis infrastructure, highlighting the maturity challenges many enterprises face when scaling AI deployments.

None of this is surprising to anyone who has worked close to production AI systems. The gap between a convincing AI demo and a reliable AI deployment is almost always bridged — or not — by data engineering. The model itself is often the least complicated part. The hard part is everything else that ensures the model is running on accurate, timely, well-governed inputs and that someone will notice when it stops doing so.

Thriving in the AI Economy: These Are the AI Skills You Need to Get a Job — or Keep the One You Have

 

The Hiring Market Reflects the Mismatch

The labor market tells the same story from a different angle. Industry hiring reports, including the Dice Tech Salary Report, consistently show strong demand for roles tied to AI, cloud and data engineering, reflecting the growing importance of data infrastructure in modern technology organizations. The shortage is not for lack of need; it’s because the pipeline of qualified candidates is thin and these roles haven’t received the cultural attention that AI-adjacent titles have.

Meanwhile, the initial surge of dedicated prompt engineering roles appears to be stabilizing as organizations integrate those responsibilities into broader AI engineering functions. But the wave of enthusiasm for AI-facing roles still hasn’t been matched by any comparable investment in the foundational roles that make those tools worth deploying.

The result is a structural imbalance inside many organizations. Teams become proficient at experimenting with AI tools, yet the underlying systems that supply those tools with data remain fragile or fragmented. Business units can generate impressive demonstrations and short-term productivity gains, but the reliability of those systems in production depends on infrastructure that has received far less attention.

This creates a predictable organizational pattern: Companies have invested heavily in training employees to use AI tools but haven’t built the data infrastructure to support reliable AI outputs. The tools are running, but the data feeding them is a patchwork. The gap between what the AI promises and what it delivers is quietly blamed on the technology itself rather than the foundation it was built upon.

 

What a Better Investment Looks Like

Rebalancing the AI skills conversation does not mean abandoning AI literacy programs. Employees who are fluent with AI tools are more productive, and that fluency has genuine organizational value. The argument is not against AI upskilling. Rather, it’s against AI upskilling as the primary response to the AI readiness challenge.

A better investment is in data fluency at every level of the organization. Not everyone needs to become a data engineer. But when business teams understand where their data comes from, what assumptions are embedded in it and how AI systems can be quietly misled by data that looks fine on the surface, they become meaningfully better consumers of AI output. They ask better questions, push back more effectively on results that seem off and understand why a model that worked well in testing is behaving differently in production.

For technical hiring, the imbalance is even clearer. The engineers who will deliver durable value over the next decade are not primarily the ones who can configure the latest foundation model. They are the ones who can build and maintain the data infrastructure those models depend on, who understand data validation, pipeline reliability, observability and what it takes to keep a production AI system behaving predictably over months and years rather than just in a launch demo.
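The observability skill mentioned above can be made concrete. One minimal form of it is tracking whether a feature's live distribution has drifted from the baseline captured at training time. This is a sketch under stated assumptions, not a specific monitoring product: the baseline numbers, the three-standard-deviation alert threshold, and the `drift_score` helper are all hypothetical.

```python
import statistics

# Illustrative drift check: compare the live mean of a numeric feature
# against a baseline recorded when the model shipped, and alert when the
# shift exceeds a few baseline standard deviations.
BASELINE_MEAN = 50.0      # assumed value captured at training time
BASELINE_STDEV = 5.0
ALERT_THRESHOLD = 3.0     # shift, measured in baseline standard deviations

def drift_score(live_values: list[float]) -> float:
    """How far the live feature mean has moved, in baseline stdevs."""
    return abs(statistics.fmean(live_values) - BASELINE_MEAN) / BASELINE_STDEV

steady = [48.0, 51.5, 50.2, 49.7, 50.9]
shifted = [72.0, 69.5, 71.1, 70.3, 73.2]   # an upstream change altered the feature

print(drift_score(steady) < ALERT_THRESHOLD)    # True: within tolerance
print(drift_score(shifted) >= ALERT_THRESHOLD)  # True: someone should be paged
```

Production monitoring is far richer than a single mean comparison, but even this much instrumentation would catch the "model quietly drifting" failure mode that post-mortems keep finding too late.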

More on AI Skills: So What Exactly Do We Mean by ‘AI Skills’?

 

The Gap Is Real — Just Not Where Everyone Is Looking

Gartner previously predicted that, through 2022, 85 percent of AI projects would deliver erroneous outcomes due to bias in data, algorithms or the teams managing them. The number has shifted, but the underlying dynamic has not. Poor data quality remains one of the most frequently cited reasons enterprise AI initiatives underperform. And poor data is a skills problem, just not the skills problem that is currently receiving the most attention or investment. The good news is that skills problems can be fixed.

Fixing them, however, requires organizations to shift where they focus their attention. The conversation around AI capability often begins at the interface layer, where employees interact with models. But reliable AI systems are built much further upstream. They depend on disciplined data practices, resilient pipelines and engineers who understand how to keep complex systems stable over time rather than just functional in a demonstration.

The companies that will build durable AI advantages are not necessarily the ones whose employees are most fluent in the latest LLM interfaces. They are the ones that treat data infrastructure as a strategic priority, hire for the skills to maintain it and understand that prompting is a thin layer on top of an engineering foundation that either holds or doesn’t.

The AI skills gap is real. Companies are just measuring it from the wrong end.
