The AI revolution is shifting gears. While venture capitalists continue pouring billions into the race for more powerful foundation models, the real value creation is quietly happening on a different layer entirely. The companies winning with AI today aren’t building better LLMs — they’re building better applications that use this new technology.
This shift mirrors every major technology transition, and I should know because I’ve lived through them all. During the early internet boom, the focus was on infrastructure — ISPs, servers, bandwidth. The real money came later with applications like Amazon, Google and Salesforce that put that infrastructure to work solving specific problems. We’re seeing the same pattern with AI, but the transition is happening faster than most realize.
How Does AI Actually Drive Value?
The AI transition mirrors the early internet boom, where value came from applications like Amazon rather than infrastructure like ISPs. Newer models like GPT-5 offer incremental gains at skyrocketing costs that most enterprise use cases can’t justify. Instead, target the application layer by focusing on domain-specific problems, building specialized workflows and integrating AI into existing systems.
AI and the Plateau Effect
The latest generation of large language models shows diminishing returns that should concern anyone betting on raw model improvements. GPT-4 to GPT-5 delivered incremental gains, not revolutionary leaps. In specialized domains like translation, newer models are actually performing worse than their predecessors. Benchmarking data shows GPT-5’s translation quality has stagnated around a Translation Edit Rate of 23, roughly equivalent to basic machine translation from five years ago.
Meanwhile, the costs of training and running these massive models continue to skyrocket. OpenAI’s training costs for GPT-4 reportedly exceeded $100 million. For most use cases, this additional capability simply doesn’t justify the expense. A startup building a customer service chatbot doesn’t need the model that can write poetry in 17 languages. It needs one that understands the company’s specific domain, integrates with existing systems and works reliably at scale.
Where the AI Action Really Is
The companies creating sustainable value today are building at the application layer. Harvey hit $100 million ARR not by training models but by creating specialized workflows for law firms using existing LLMs. Cursor transforms software development by understanding code context and developer workflows. Intercom's AI platform works because it knows how to route conversations and maintain context across interactions.
These companies share a common approach: They treat AI as infrastructure, not the end product in itself. Their value comes from understanding specific problems and building the surrounding systems that make AI useful in solving them.
Let’s look at Harvey more closely as an example: The company didn’t build a new LLM. It took existing models and added the ability to understand legal citations, cross-reference case law, and integrate with the document management systems that law firms already use. Ultimately, businesses need to deliver outcomes rather than raw capability.
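To make the pattern concrete, here is a rough sketch in Python of what wrapping an existing model with domain context can look like. To be clear, this is not Harvey’s actual architecture; the retrieval step and the model call are hypothetical placeholders meant only to show where the value sits.

```python
# Illustrative sketch of the "wrap an existing model" pattern described above.
# Not Harvey's architecture: the retrieval lookup and LLM call are stubs.

def retrieve_relevant_cases(question: str) -> list[str]:
    """Hypothetical lookup against a firm's existing document management system."""
    return ["Smith v. Jones (2019): duty of care extends to contractors."]

def call_llm(prompt: str) -> str:
    """Placeholder for any existing foundation model API."""
    return "Based on Smith v. Jones (2019), a duty of care likely applies."

def answer_legal_question(question: str) -> str:
    # The differentiation lives in the domain context wrapped around the
    # model, not in the model itself.
    cases = retrieve_relevant_cases(question)
    context = "\n- ".join(cases)
    prompt = (
        "You are assisting a law firm. Cite only the cases provided.\n\n"
        f"Relevant cases:\n- {context}\n\n"
        f"Question: {question}\nAnswer with citations:"
    )
    return call_llm(prompt)

print(answer_legal_question("Does a duty of care apply to our contractors?"))
```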
The Incumbent’s Dilemma
Legacy SaaS companies face an existential choice that will define the next decade of enterprise software. The AI transformation isn’t optional — it’s evolutionary pressure across the entire economy that will separate survivors from casualties.
Three paths have emerged, each with vastly different outcomes. The first is extinction through inaction, displacement by AI-native startups built from the ground up with intelligent capabilities. The second is superficial integration, which one executive I talked to recently described as “jamming bullshit AI” into existing products. These solutions satisfy neither investors nor customers, creating the illusion of innovation while delivering substandard experiences.
The third path is genuine transformation, where incumbents cannibalize their own business models, completely upending product roadmaps and killing legacy products, in order to build truly intelligent systems. This approach faces intense market skepticism, as we see with Microsoft’s Copilot and Salesforce’s AI initiatives, which often feel like add-ons rather than fundamentally reimagined platforms.
Yet some succeed. Intercom launched Fin, an AI agent built on its existing infrastructure, in 2023. The transformation required repositioning the entire company around intelligent customer engagement rather than simple communication tools. As a result, the business went from five straight quarters of zero net-new ARR, approaching negative growth, to $12M in ARR in just 12 months. Oracle’s painful but successful transformation from on-premises to cloud software provides the template: massive disruption followed by market leadership in the new paradigm.
The Infrastructure Trap
Many enterprises — including mine — are learning this lesson the hard way. The executive mandate to “use AI everywhere” has led to expensive experiments in plugging GPT-4 directly into business processes. A recent MIT study found that 95 percent of enterprise AI projects fail, and the pattern is predictable: Companies underestimate the engineering required to make raw AI models work in production.
Building your own LLM application isn’t just about API calls. It requires prompt engineering, evaluation frameworks, integration with existing systems, ongoing monitoring and continuous tuning. Most companies discover this complexity only after investing months of engineering time and significant resources. Purpose-built solutions from application layer companies handle this complexity as their core business, delivering better results at a fraction of the cost of building and maintaining your own system. For example, we tried to develop our own customer service chatbot only to realize the juice wasn’t worth the squeeze, and then signed up with Intercom.
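To give a sense of what that scaffolding involves, here is a deliberately simplified sketch. The function names, the retry policy and the JSON contract are all assumptions for illustration, not a recipe; a real production system adds far more around the model call.

```python
# A minimal sketch of what "more than an API call" tends to mean in practice.
# call_model and log_for_evaluation are hypothetical stand-ins.
import json
import time

def call_model(prompt: str) -> str:
    """Placeholder for a raw foundation model API call."""
    return '{"intent": "refund_request", "reply": "We have issued your refund."}'

def log_for_evaluation(prompt: str, output: str) -> None:
    """Stand-in for the monitoring and evaluation pipeline teams discover they need."""
    print(f"eval-log: {len(prompt)} chars in, {len(output)} chars out")

def answer_support_ticket(ticket_text: str, max_retries: int = 3) -> dict:
    prompt = (
        "Classify the customer message and draft a reply. "
        "Respond as JSON with keys 'intent' and 'reply'.\n\n"
        f"Message: {ticket_text}"
    )
    for attempt in range(max_retries):
        raw = call_model(prompt)
        log_for_evaluation(prompt, raw)        # ongoing monitoring
        try:
            parsed = json.loads(raw)           # output validation
            if "intent" in parsed and "reply" in parsed:
                return parsed
        except json.JSONDecodeError:
            pass                               # malformed output, retry
        time.sleep(2 ** attempt)               # simple backoff before retrying
    return {"intent": "unknown", "reply": "Escalating to a human agent."}

print(answer_support_ticket("I was charged twice for my subscription."))
```

Even this toy version already needs prompt design, validation, retries and logging; multiply that across integrations, evaluation suites and continuous tuning and the hidden engineering bill becomes clear.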
The pattern holds across industries. Generic AI tools require significant customization, ongoing maintenance and domain expertise that most companies don't possess. Application layer companies absorb this complexity, delivering turnkey solutions that integrate seamlessly into existing workflows.
Investment Implications
Smart money is already shifting. While headlines focus on massive rounds for model companies, venture capital is quietly flowing toward application layer startups. These companies have clearer paths to profitability, shorter sales cycles and defensible moats built on domain expertise rather than compute resources.
The application layer also offers better risk-adjusted returns. Model companies face existential threats from tech giants with deeper pockets and more data. Application companies face competition from other specialists in their domain, a far more manageable competitive landscape.
Consider the economics: Training a competitive foundation model requires hundreds of millions in capital and years of development. Building a specialized application can be done with tens of millions and months to market. The barriers to entry are fundamentally different.
The Path to AI Profitability
This doesn’t mean foundation model development stops. Instead, it becomes a commodity infrastructure layer, like cloud computing or databases. The differentiated value moves up the stack to companies that understand specific use cases, have domain expertise and can deliver complete solutions. As an example, a startup building an AI recruiting tool doesn’t need to train its own model. It needs to understand how recruiters actually work, integrate with applicant tracking systems, and handle the messy reality of resume parsing and candidate matching. Use GPT as the engine, but build the car around it.
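Here is a hedged sketch of what “building the car” might look like for that hypothetical recruiting tool. The ATS lookup, the model call and the scoring rule are illustrative assumptions, not anyone’s real product.

```python
# Illustrative "GPT as engine, application as car" sketch for a hypothetical
# recruiting tool. The ATS client, model call and scoring rule are assumptions.
import json

def call_llm(prompt: str) -> str:
    """Placeholder for an existing foundation model API."""
    return '{"skills": ["python", "sql"], "years_experience": 4}'

def fetch_job_requirements(job_id: str) -> dict:
    """Stand-in for an applicant tracking system integration."""
    return {"required_skills": {"python", "sql"}, "min_years": 3}

def score_candidate(resume_text: str, job_id: str) -> float:
    # The model handles the messy parsing; the application supplies the
    # workflow recruiters actually need.
    parsed = json.loads(call_llm(
        "Extract skills and years_experience from this resume as JSON:\n"
        + resume_text
    ))
    job = fetch_job_requirements(job_id)
    skill_overlap = len(set(parsed["skills"]) & job["required_skills"])
    experience_ok = parsed["years_experience"] >= job["min_years"]
    return skill_overlap / len(job["required_skills"]) + (0.5 if experience_ok else 0.0)

print(score_candidate("Five years building Python and SQL data pipelines.", "job-123"))
```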
For entrepreneurs, the opportunity is clear: Stop trying to build better hammers and start building better houses. The market rewards companies that solve real problems with existing tools more than those that build marginally better tools.
For enterprises evaluating AI strategies, the lesson is equally straightforward: Focus on outcomes, not technology. Partner with companies that understand your domain and can deliver measurable results, rather than trying to build everything in-house with generic AI tools.
The AI revolution isn’t slowing down — it’s maturing. And maturity means moving beyond the technology itself to focus on what that technology can actually accomplish. The future belongs to the companies that make AI invisible by making it useful.
