Ninety-five percent of AI projects fail. We’ve all seen the MIT report by now.
The reasons vary — lack of data discipline, overambitious pilots, change-resistant cultures — but I’d argue one of the biggest is more fundamental. Too many organizations confuse the model with the product.
4 Tips for a Successful AI Implementation
- Start asking the right questions.
- Build the right foundations, not more pilots.
- Equip your people to scale AI.
- Don’t benchmark against mediocrity.
I lead an AI platform that helps infrastructure teams deliver safer, faster, and more predictable field operations through video intelligence and data-driven workflows. When large language models (LLMs) first started infiltrating these organizations, my team and I heard the same question on repeat: “Do we really need a dedicated AI platform? Couldn’t we just take a video from the field and push it through ChatGPT?”
That, right there, is where 95 percent of projects go off the rails.
The Model Is Not the Product
An LLM can generate content and analyze inputs, but that doesn’t make it a product. Treating it as one is like mistaking a car engine for the car. Unless you’re a mechanic or a car aficionado, the engine is just a part; the car itself is the product.
A productized AI system solves a defined business problem, integrates into existing workflows, aligns with organizational policies, and delivers measurable outcomes. The LLM itself is one component in that chain. Powerful, yes, but weak without context.
You can ask an LLM to interpret an image or summarize a document, but it doesn’t know your safety procedures, your compliance obligations, or your operational KPIs. It’s not governed by your company’s risk framework, nor is it designed to function in low-connectivity environments or log evidence for regulators.
That’s the difference between experimentation and engineering. The first produces interesting answers. The second produces results you can stake your business on.
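To make that distinction concrete, here is a minimal sketch of what the chain around the model can look like in code. Every name in it (SafetyPolicy, call_llm, audit_log, analyze_field_video) is hypothetical and not drawn from any particular platform; the point is simply that the model call sits inside company context, guardrails, and evidence logging.

```python
# Hypothetical sketch: the difference between calling a model and shipping a product.
# All names here are illustrative assumptions, not a real platform's API.

from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class SafetyPolicy:
    """Stand-in for the organization's own procedures and risk framework."""
    required_checks: list[str]


def call_llm(prompt: str) -> str:
    """Placeholder for whatever model API the platform actually uses."""
    return "model output"


def audit_log(event: dict) -> None:
    """Placeholder for evidence logging that a regulator could review later."""
    print(event)


def analyze_field_video(summary_prompt: str, policy: SafetyPolicy) -> str:
    # 1. Ground the model in company context, not just the raw input.
    prompt = (
        f"Apply these safety checks: {', '.join(policy.required_checks)}.\n"
        f"{summary_prompt}"
    )
    # 2. Call the model -- one component in the chain, not the product.
    result = call_llm(prompt)
    # 3. Record what was asked and answered so the outcome is auditable.
    audit_log({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "checks": policy.required_checks,
        "result_preview": result[:200],
    })
    return result


# Example use, with made-up checks:
policy = SafetyPolicy(required_checks=["PPE worn", "exclusion zone marked"])
print(analyze_field_video("Summarize hazards in this trench inspection video.", policy))
```

Even in this toy form, most of the value sits in the steps around the model call: the context going in and the evidence coming out.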
Knowing the difference between an AI model and a product is the easy part. The real challenge is figuring out how to embed productized AI into your operations in a way that changes outcomes. These four considerations will help.
1. Start Asking the Right Questions
When people talk about AI, they usually ask, “Will this make us faster?” It’s the wrong question.
The more important one is, “Will this make us better?”
Productivity is easy to measure in hours saved or lines written. Quality uplift is harder to quantify.
Take something as simple as writing. I sometimes use AI tools when drafting content. I’m not convinced they make me faster; in fact, I often spend longer refining ideas. But what I produce is more coherent, targeted, and optimized for reach and relevance.
That’s the paradox of good AI integration. It can increase time on task while improving output quality by an order of magnitude. Measuring only speed misses that entirely.
But for AI to truly make organizations better, leaders first have to define what “better” means. For example, in infrastructure environments, quality is about reducing risk and creating certainty. And risk isn’t only about physical harm; it’s anything that undermines your ability to deliver predictably and profitably. A missed step that leads to rework, a communication breakdown that delays a project, a compliance lapse that triggers fines — all of these are preventable risks.
When AI is used well, it helps leaders see those risks early, connect the dots across silos, and act before minor issues compound into costly disruptions. That’s what “better” looks like: safer, smarter, more resilient operations that teams and customers alike can count on.
2. Build the Right Foundations, Not More Pilots
Most companies fail at AI because they never operationalize it. There’s rarely top-down accountability, defined KPIs, or the risk tolerance to deploy at scale. They run promising proofs of concept, publish glowing internal memos, and then move on when results plateau; often the initiative never even makes it out of the lab.
That gap shows up in the data. McKinsey’s 2025 State of AI report found that only 21 percent of companies using generative AI have fundamentally redesigned even part of their workflows, even though the same report shows that workflow redesign has the biggest effect on whether an organization sees EBIT impact from gen AI.
To escape MIT’s 95 percent failure rate, leaders need to replace pilot thinking with disciplined execution:
- Establish ownership and accountability. Someone must be responsible for outcomes, whether that’s an executive or a cross-functional steering group empowered to make decisions.
- Define meaningful KPIs. Set success metrics that go beyond activity counts or pilot completions. Focus instead on measurable productivity, quality, or profitability gains. Tie executive performance and incentives to those results.
- Build risk tolerance into governance. Establish guardrails early — ethical use, security, data integrity — so leaders feel confident scaling AI beyond controlled experiments.
Without those foundations, even the best model becomes shelfware.
You should also challenge yourself on what a small-scale pilot actually achieves. De-risking an AI deployment is very different from de-risking a standard SaaS rollout. You need scale and data to drive outcomes and determine the best way forward.
3. Equip Your People to Scale AI
We recently reviewed our competency framework and added AI responsibilities at every level. Everyone from engineers to operations leads now has explicit expectations for how they’ll use, supervise, or integrate AI into their work.
That exercise revealed two gaps many companies share:
- Skill depth. Do you actually have people who’ve implemented AI at scale before, or are you experimenting from scratch?
- Role clarity. Who’s accountable for identifying use cases, training teams, and measuring ROI?
Bridging those gaps takes intentional change enablement. Companies that scale AI successfully invest early in upskilling, communication, and internal change agents who can champion adoption across departments. They treat AI literacy as a shared competency.
Some organizations will need a chief AI officer; others can embed those duties within existing leadership roles. What matters is clarity and a culture that understands why AI is being adopted and where it can create the most value.
If you don’t proactively equip your teams with emerging tools, a shadow IT stack will emerge inside your business. Left unchecked, those hidden tools create fragmented data, inconsistent workflows, and exposure to security and compliance risks that leaders can’t see or manage.
4. Don’t Benchmark Against Mediocrity
The MIT study also found that in-house AI product builds fail more often than those using proven platforms. That’s no surprise.
Internal builds take time, and that’s time you could be spending operationalizing AI instead. Then there’s the expertise to consider: do you have the right internal team to build what you need? If you have to hire one, your timeline just got longer.
The biggest problem I see with internal builds is that organizations end up benchmarking against their own weak data. If your baseline operations are inefficient or the data you’ve collected to inform decision-making is limited, “improving” them by five percent is just polishing mediocrity.
The real leap comes from working with products and partners that draw on cross-industry data and hard-won experience. That’s where you move from incremental improvement to exponential insight.
Buy versus build is a question of perspective. Are you trying to replicate what already exists elsewhere, or are you leveraging what others have already learned so you can leapfrog ahead?
The pace of change can be paralyzing, and many organizations are still waiting for regulation or the “right moment” to act. But standing still is its own risk. History doesn’t remember the companies that waited out industrial revolutions.
The organizations that succeed will design AI for quality, governance, and accountability from day one. The real breakthroughs come from what you build around the models.
