Perhaps no industry has been hit harder by artificial intelligence than software-as-a-service (SaaS), a cloud-based business model where companies deliver software products to users over the internet on a subscription basis. Fears over AI making software obsolete have erased $1 trillion in SaaS stock value so far, and Mistral CEO Arthur Mensch has predicted that AI could replace more than half of the software currently used by enterprises. These dramatic market shifts have prompted warnings of an impending “SaaSpocalypse,” which could gut demand for standalone tools, trigger mass consolidation and force vendors to completely rethink how they build and price their services.
What Is the “SaaSpocalypse”?
The “SaaSpocalypse” refers to the potential end of the software-as-a-service business model due to the widespread adoption of artificial intelligence, particularly agentic AI. As AI agents trim headcounts and reduce teams’ reliance on third-party software, SaaS providers may need to pivot from their usual per-seat subscriptions and instead price their plans based on the actual value their products deliver to remain competitive.
Driving this push for automation are AI agents, or AI systems that can autonomously complete complex, multistep tasks without requiring strict rules or human intervention. While agentic AI threatens to upend the existing SaaS landscape, the technology’s shortcomings present opportunities for SaaS providers and the professions that rely on their services to play a vital role in ushering the sector into the age of AI.
“AI isn’t going to trigger a ‘SaaSpocalypse’ so much as a ‘SaaSmorphosis,’” Richard Johnson, a future of work economist and Built In expert contributor, told Built In. “They both can coexist. However, the ‘S’ in SaaS that changes isn’t the software but the service.”
How Are Enterprises Using AI Agents Today?
Agents are quickly becoming widespread in enterprise AI, or the practice of integrating artificial intelligence into business processes and scaling them to address problems faced by large organizations. According to a 2026 survey by Databricks, the use of multi-agent systems spiked by 327 percent over a four-month period, and 78 percent of companies report using at least two large language model families.
Breaking down the data by use case, businesses now build 80 percent of their databases with AI agents. Other common agent tasks include gathering market intelligence, conducting predictive maintenance and organizing customer support inquiries. In fact, 40 percent of the top use cases are related to customer experience and engagement.
With organizations increasingly embracing teams of agents, more complex processes are now being handled by AI. In particular, backend processes could soon become automated, thanks to rapid advances in coding tools that have spooked the SaaS sector.
Why Are There Fears That Agents Could Disrupt SaaS?
After years of courting individual users for one-off tasks and more creative use cases, AI companies have turned their attention to enterprise customers, setting their sights on the software development space in particular. For instance, Anthropic stole headlines with its updated version of Claude Code, which used a technique called vibe coding to singlehandedly build the software behind Anthropic’s new Cowork application. Even competitor Microsoft was such a big fan of the app that it partnered with Anthropic to integrate Cowork’s tech into Microsoft 365 Copilot, releasing a code generator by the same name.
Not to be outdone, OpenAI, the company behind ChatGPT, has raced to catch up with archrival Anthropic, launching its Frontier enterprise platform for developing, deploying and overseeing teams of agents. In addition, OpenAI announced a “Frontier Alliances” program that leverages partnerships with Boston Consulting Group, McKinsey & Company, Accenture and Capgemini to help enterprise clients integrate OpenAI’s agents into their workflows. The company also acquired Promptfoo, an AI security platform, to accelerate its agentic initiatives.
It makes sense, then, that Anthropic found computer programming to be the profession most exposed to AI in a 2026 labor market study, serving as another warning sign for SaaS providers. If code generators can routinely build apps from scratch, software teams can complete the same amount of work with fewer workers. Leaner teams undermine per-seat plans, hurting the revenue streams of code repositories like GitHub, cloud platforms like Amazon Web Services and project management tools like Jira.
These shifts hint at a major reckoning that may force SaaS providers to pivot from the per-seat model and price their plans based on the actual value their products deliver to downsized teams coordinating multi-agent systems. Dmitry Rumbeshta, co-founder and CEO at Sprouty, an infant care app, believes that the products most likely to take the biggest hit from agentic-driven automation are those that “charge per-seat and bundle all direct client-serving costs into that fee.” Simply tacking on agents as features may not help much, either.
“If they add agents to compete, they immediately face rising direct costs,” Rumbeshta told Built In. “For well-formalized products without custom workflows, embedding agents may only make things worse: Software gets more expensive, and clients may not really need those agents. A new tier of product will likely appear for companies with complex, non-standard workflows where an agent genuinely speeds up or replaces labor.”
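The pricing tension described above can be made concrete with some back-of-the-envelope math. The sketch below uses entirely hypothetical numbers (seat prices, team sizes and per-task fees are invented for illustration) to show why shrinking headcounts hit per-seat revenue directly, while a usage- or value-based model tracks the work actually delivered:

```python
# Illustrative sketch with hypothetical numbers: per-seat revenue falls with
# headcount, while usage-based revenue tracks the work actually delivered,
# whether a human or an agent performed it.

def per_seat_revenue(seats: int, price_per_seat: float) -> float:
    """Classic SaaS model: revenue scales with the number of licensed users."""
    return seats * price_per_seat

def usage_revenue(tasks_completed: int, price_per_task: float) -> float:
    """Value/usage-based model: revenue scales with tasks completed."""
    return tasks_completed * price_per_task

# A 50-person team shrinks to 10 people orchestrating agents,
# but the total volume of work stays the same.
before = per_seat_revenue(seats=50, price_per_seat=30.0)   # 1500.0
after = per_seat_revenue(seats=10, price_per_seat=30.0)    # 300.0, an 80% drop
usage = usage_revenue(tasks_completed=5000, price_per_task=0.30)  # ~1500.0

print(before, after, usage)
```

The point of the toy model is only that the per-seat column collapses with team size while the usage column does not, which is why downsized teams coordinating agents put pressure on seat-based plans.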
AI Agents Are Still a Ways Off From Replacing Software
Despite AI gaining ground in coding, there are several reasons why agents are not yet ready to replace workers and enterprise software.
Limited Reasoning
Complicated tasks require a high degree of reasoning to complete, and that’s something AI agents seem to be lacking. Researchers at Mercor recently established a benchmark known as APEX-Agents that measures how well agents reason, plan ahead and find answers using multiple tools. Top models from Google, Anthropic and OpenAI made up the lineup, but none answered more than 25 percent of the questions correctly.
This finding suggests that agents aren’t prepared to think for themselves. Humans are still needed to guide them, and they may even use SaaS applications to complete the same work as before. Instead of a stand-in for workers and their existing tools, agents may serve as an improved user interface, enabling workers to more easily engage with applications by directing agents to perform actions on their behalf.
“Think of it like spreadsheets crowding out databases: The economic need didn’t disappear, but the UI shifted and humans touched the system less often,” Johnson said. “Agentic AI will do the same to enterprise software, with less clicking around in dashboards and more orchestration behind the scenes.”
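The “agent as interface” idea can be sketched in a few lines of code. In this minimal, hypothetical example (all tool names, intents and messages are invented for illustration, not drawn from any real product), the user states an intent and a dispatcher routes it to an action in an underlying SaaS tool, so the software still does the work while the human touches it less often:

```python
# Minimal sketch of "agent as interface": the user states an intent, and a
# dispatcher invokes the right SaaS action on their behalf. All tool names
# and actions here are hypothetical stand-ins.

from typing import Callable

# Stand-ins for SaaS API calls the agent would invoke for the user.
def create_ticket(summary: str) -> str:
    return f"ticket created: {summary}"

def run_report(name: str) -> str:
    return f"report generated: {name}"

# The "agent" layer: maps intents to tool actions.
TOOLS: dict[str, Callable[[str], str]] = {
    "ticket": create_ticket,
    "report": run_report,
}

def agent_dispatch(intent: str, payload: str) -> str:
    """Instead of the user clicking through a dashboard, the agent picks
    and calls the right tool; unknown intents fall back to a human."""
    tool = TOOLS.get(intent)
    if tool is None:
        return "escalate to human: unknown intent"
    return tool(payload)

print(agent_dispatch("ticket", "login page down"))
```

Note the design choice in the fallback: anything the agent can’t confidently route goes back to a person, which matches the argument that humans remain the orchestrators.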
Unpredictable Behavior
Even if AI agents could think for themselves, that may not necessarily lead to the best outcomes. For example, Summer Yue, an AI alignment and safety researcher at Meta Superintelligence Labs, ran the AI agent OpenClaw on her computer, only for it to start deleting emails from her inbox without her permission and repeatedly ignore commands to stop.
And while agents are commonly used to automate repetitive tasks, they may also be turned off by this type of work just like humans are. Researchers found that agents assigned to perform “grinding work” that is repetitive by nature may actually begin to question their roles, potentially influencing future versions of themselves. As a result, writing marketing copy versus resolving customer complaints could lead to agents with vastly different attitudes.
Agents may then be far more unpredictable than previously thought — and even harder to control. That startling detail may discourage businesses from trusting agents enough to put them in charge of company-wide operations, slowing their adoption.
Compounding Errors
Contributing to the uncertainty around AI agents is their tendency to make mistakes. Just like any model, an agent can always succumb to hallucinations or commit errors when the data it’s referencing contains incorrect or missing values.
But the stakes are much higher when agentic AI is involved, since a single misstep can snowball into a flurry of blunders when an agent hiccups early on and proceeds through the rest of the workflow based on this faulty output. And the ripple effects grow even wider when an error throws off a team of agents. Deploying this technology without proper guidelines or oversight can then result in a phenomenon called “agent slop,” which refers to work performed by AI agents that lacks quality and substance.
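The snowballing effect has a simple mathematical core: if each step of a workflow succeeds independently with probability p, the whole n-step workflow succeeds with probability p^n. The numbers below are illustrative, not measured accuracies for any real agent:

```python
# Back-of-the-envelope sketch of compounding errors: assuming each step of an
# agent workflow succeeds independently with probability p, an n-step workflow
# succeeds with probability p ** n. Figures are illustrative.

def workflow_success_rate(per_step_accuracy: float, steps: int) -> float:
    return per_step_accuracy ** steps

# A 95%-accurate agent looks reliable on a single task...
one_step = workflow_success_rate(0.95, 1)       # 0.95
# ...but over a 20-step workflow, an early hiccup usually poisons the rest.
twenty_steps = workflow_success_rate(0.95, 20)  # ~0.358

print(round(one_step, 3), round(twenty_steps, 3))
```

Under these assumptions, even a seemingly strong per-step accuracy leaves a long workflow failing most of the time, which is why early missteps matter so much more for agents than for single-shot model calls.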
Liability Concerns
The tricky part with AI-generated mistakes is determining who to hold accountable when the task is entirely automated. Ultimately, the organizations and individuals using these tools would be considered responsible, and that could carry severe legal consequences in fields that handle sensitive information, including healthcare, law and cybersecurity.
That’s why humans need to be involved in applying agentic tools, especially when digital privacy could be compromised. After all, AI doesn’t have a concept of ethics, so humans must decide when to use agents and when to finish tasks themselves, as well as how to use software to track the actions of agents and pinpoint when things do go wrong.
“‘The agent did it’ doesn’t remove the need for a system that records who did what — or for a human who can intervene,” Rumbeshta said. “Same idea as in aviation: The autopilot performs most of the flight, but the pilot and airline remain legally responsible for errors. That favors coexistence: Agent as interface; product as the place where actions are logged and owned.”
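The “system that records who did what” from the quote above can be sketched as a minimal audit log. Everything here is a hypothetical illustration (the actor labels, action names and log shape are invented), showing the basic shape of attributing every agent action so a human reviewer can trace and intervene:

```python
# Hedged sketch of an audit trail for agent actions: every action is recorded
# with who did it, what they did, to what, and when, so a human can review.
# Actor labels and action names are hypothetical.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditLog:
    entries: list[dict] = field(default_factory=list)

    def record(self, actor: str, action: str, target: str) -> None:
        """Append a record of who did what, to what, and when (UTC)."""
        self.entries.append({
            "actor": actor,    # e.g. "agent:inbox-bot" or "user:alice"
            "action": action,
            "target": target,
            "at": datetime.now(timezone.utc).isoformat(),
        })

    def actions_by(self, actor: str) -> list[dict]:
        """Let a human reviewer pull everything a given actor did."""
        return [e for e in self.entries if e["actor"] == actor]

log = AuditLog()
log.record("agent:inbox-bot", "archive_email", "msg-1042")
log.record("user:alice", "approve", "refund-77")
print(len(log.actions_by("agent:inbox-bot")))  # 1
```

A real deployment would add tamper resistance and retention policies, but even this toy version shows why “the agent did it” never has to be the end of the trail.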
What Does All of This Mean for Tech Workers?
Although agentic AI remains a work in progress, this technology has rapidly reshaped the labor market. White-collar roles have been affected to the point that aspiring professionals are reconsidering four-year college degrees and exploring the trades instead. Given that agents can handle the simple, repetitive tasks typically assigned to junior employees, it’s not surprising that younger workers may feel the impacts of automation more than experienced professionals who have had time to build up industry-specific expertise and soft skills.
“Agents will hit hardest those with little expertise. You need to be either a narrow deep expert in your domain or so strong that you’re trusted with software that replaces a dozen colleagues,” Rumbeshta said. “Soft skills stay critical: You must be responsible — you can’t delegate that to the agent — and quick to learn new processes and tools.”
At the same time, those who can adapt quickly enough to master AI-related skills and tools can enjoy new career opportunities. According to a survey by global consulting firm KPMG, 55 percent of CEOs plan to ramp up hiring due to AI, while only 9 percent plan to make workforce reductions that are directly attributable to AI. Developing, deploying and managing agents will require AI trainers, AI engineers and AI safety experts, among other professionals with the skill sets to guide agentic adoption.
In this setup, SaaS products can still be crucial components supporting tasks and workflows — it’s just that agents may be the ones directly interacting with them. Humans can then be the conductors overseeing all these parts, ensuring they move together in harmony.
“The jobs that survive aren’t the ones tied to clicking around SaaS dashboards but rather steering the systems that now do the clicking for us,” Johnson said. “Said differently, AI does the clicking, we do the clacking to prevent the system from cracking.”
Frequently Asked Questions
What are AI agents?
AI agents are advanced systems that can autonomously execute complex, multistep tasks without requiring strict rules or human intervention. Because a single individual can manage multiple agents to complete work typically done by entire teams, there are growing fears that agents could upend per-seat subscriptions and make SaaS tools obsolete.
Why aren’t AI agents ready to replace software and workers?
AI agents are prone to hallucinations, errors and erratic behavior, and external factors can make their actions even less predictable. Mistakes can quickly accumulate as agents progress through workflows, leaving teams with bigger fires to put out. Companies can also face heavy consequences when agents go awry in sectors that handle sensitive information, including healthcare and cybersecurity. Plus, agents still lack the reasoning capabilities to complete complex tasks, giving even AI proponents reason to pause.
How can tech workers adapt to the rise of AI agents?
Agentic tools can easily automate basic tasks, threatening workers with only generalized skill sets. Tech workers can adjust to this change by developing deep industry knowledge to the point where AI can’t replicate their expertise. They can also cultivate soft skills like communication and decision-making, preparing for roles that involve deploying and directing teams of AI agents as needed.
