Is Agentic AI Actually Going to Crash the Global Economy in 2 Years?

A recent article from Citrini Research suggests we’re on the verge of an AI-driven economic meltdown. How likely is their version of the future?

Written by Richard Johnson
Published on Mar. 02, 2026
REVIEWED BY
Seth Wilson | Feb 27, 2026
Summary: Despite the dire predictions of the recent Citrini Research report, the risks agentic AI poses to white-collar jobs don’t mean total economic collapse is likely. Because consumer spending drives GDP, firms will likely curb AI investment if demand drops. Further, proactive policy now can help ensure stability.

Between boom and doom lies the risk of predicting the outcome too early. Amidst high uncertainty, when AI starts to look like a boom for a small few and a bust for the rest of us, almost any future can feel believable. But possibility is a matter of imagination; plausibility is a matter of statistical clarity and historical analysis.

Citrini Research’s recent article “2028 Global Intelligence Crisis” imagines agentic AI rapidly hollowing out white-collar labor, collapsing consumer demand and triggering a self-reinforcing economic contraction. Because AI systems replicate and improve efficiency but don’t consume goods in the marketplace, the report argues there would be no natural brake on the spiraling process.

It’s a bold scenario and a useful provocation. In that spirit, I want to thank Citrini Research for putting it on the table. I also want to carefully unpack where their argument holds, where it wobbles and where it collides with economic reality.

How Can Policymakers Prevent an AI-Driven Economic Collapse?

The recent article from Citrini Research offered a sobering vision of an economy hollowed out by AI. These policy steps can help prevent such a collapse.

  1. Wage-linked productivity sharing.
  2. Large-scale workforce transition programs.
  3. Incentives for AI-human complementarity.
  4. Guardrails on fully autonomous agent deployment.
  5. Strengthening automatic stabilizers.


 

AI Adoption Isn’t Frictionless

Citrini’s scenario rests on the idea that AI can effectively replace white‑collar labor almost instantly, but real economies don’t behave that way. Even highly automatable tasks run into organizational, regulatory and coordination frictions that slow adoption. Enterprises don’t deploy automation simply because it’s technically feasible. They deploy it when they’ve redesigned workflows, addressed liability, updated compliance frameworks, ensured sufficient data quality and aligned incentives across teams. Trying to implement an organization-wide AI initiative before all these elements are in place leads to bottlenecks and, at worst, outright failure.

Such adoption friction is partly the reason productivity gains from past technologies, from electrification to cloud computing, took years to fully materialize. Agentic AI will undoubtedly accelerate automation, but the notion that millions of white‑collar jobs could disappear in a two‑year window assumes a level of institutional agility that simply does not exist in the real economy.

 

Displacement Doesn’t Mean Disappearance

In every major automation wave, task substitution and task creation have unfolded simultaneously. Even when automation reduces labor demand in one function, it often increases demand in others such as oversight and audit, integration and orchestration, data governance, compliance and risk management, human‑in‑the‑loop decision-making and customer trust or relationship roles.

Agentic AI will certainly compress demand for some categories of white‑collar work, but it will also generate new categories of work, focused on managing, supervising and integrating these systems into real organizational workflows. The net effect is uncertain, but it is definitely not a one‑way displacement spiral.

 

Labor and Income Move in Lockstep

Citrini’s argument also assumes that AI‑generated output remains economically meaningful even if human consumption collapses, but income and spending are two sides of the same coin. In the US, consumer spending accounts for roughly 68 to 70 percent of GDP, and most of that spending is financed by income. White‑collar wages account for a disproportionate share of consumer spending in advanced economies.

Suppose white‑collar earnings fall sharply, resulting in an immediate, broad-based decline in consumption. Under those conditions, companies would not continue pouring capital into AI capacity simply because the technology is advancing. They invest based on expected returns, not technological momentum. 

A collapse in household demand would translate directly into weaker revenue, tighter margins and reduced cash flow, all of which constrain capital expenditure. That creates a natural brake on runaway agentic expansion: AI does not autonomously allocate capital, and humans respond to falling demand by slowing investment, not accelerating it.

The idea of “ghost GDP” only holds if firms keep scaling AI even as their customer base evaporates, which is an unlikely outcome given how most businesses operate.
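The feedback loop above can be sketched with a deliberately crude toy model. Every parameter here is an illustrative assumption, not an empirical estimate: wage income funds consumption, firms budget AI capital expenditure as a fixed share of revenue, and the pace of displacement scales with that expenditure. Falling demand then throttles the very investment that drives displacement, so the decline decelerates instead of spiraling.

```python
# Toy model of the "natural brake": firms cut AI investment as demand falls.
# All numbers are illustrative assumptions, not empirical estimates.

def simulate(periods=10, income=100.0, displacement_rate=0.08,
             propensity_to_consume=0.9, invest_share=0.2):
    """Each period, consumption follows income, firms budget AI investment
    as a share of revenue (proxied by consumption), and next period's wage
    displacement scales with that investment. Returns per-period history."""
    baseline_investment = invest_share * propensity_to_consume * income
    history = []
    for t in range(periods):
        consumption = propensity_to_consume * income
        investment = invest_share * consumption  # capex budget tracks revenue
        history.append((t, round(income, 1), round(consumption, 1),
                        round(investment, 1)))
        # Displacement slows as investment falls relative to its starting
        # level: weaker demand -> smaller capex -> slower displacement.
        income *= 1 - displacement_rate * (investment / baseline_investment)
    return history

for t, inc, cons, inv in simulate():
    print(f"t={t}: income={inc}, consumption={cons}, investment={inv}")
```

Running this sketch, investment shrinks every period and the per-period income loss gets smaller, not larger: the contraction decelerates on its own. The "ghost GDP" spiral requires the opposite assumption, that capex keeps growing while revenue evaporates.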

 

Where This Report Has Legs

The scenario is directionally right about several structural pressures, even if its conclusions overreach. White‑collar wage compression is likely as AI absorbs more routine cognitive tasks. Demand erosion in SaaS, consulting and professional services is also plausible as AI substitutes for human intermediaries and reduces the need for large, client‑side teams. We’ve already seen signs of this tension at large firms such as McKinsey and Accenture.

Pressure to remain profitable will intensify in industries where AI agents can perform high‑value tasks at near‑zero cost, undermining long‑standing pricing power and accelerating competition. And inequality could widen if the productivity gains from agentic systems accrue primarily to capital owners rather than workers, a pattern consistent with prior automation cycles. 

These risks don’t guarantee a macroeconomic collapse, but they do signal the need for proactive policy and institutional adaptation to ensure that AI‑driven productivity gains translate into broad‑based economic resilience rather than concentrated advantage.

 

How Can Policymakers Stop the Collapse?

If the goal is to avoid an economic collapse, policy intervention is an essential tool. Here are a few high-level levers policymakers can pull.

1. Wage‑Linked Productivity Sharing

If AI increases output per worker, firms should be incentivized to share gains with employees through wage floors, profit‑sharing or tax‑advantaged compensation structures. For example, MassPay offered profit-sharing bonuses as an incentive to employees who used AI, resulting in pay bumps ranging from 15 to 18 percent of their annual salaries.

2. Large‑Scale Workforce Transition Programs 

Retraining must shift from simple “skills bootcamps” to fundamental role redesign, helping workers move into oversight, integration and trust‑based functions that AI cannot easily replicate.

3. Incentives for Human‑AI Complementarity

Tax credits or procurement preferences can encourage firms to adopt AI in ways that augment rather than replace human labor. Similar to clean energy credits, the government could offer a higher tax deduction for AI deployments that maintain or increase headcount. For example, if a company boosts output using Copilot and its staff remains on payroll, it receives tax credits.

4. Guardrails on Fully Autonomous Agent Deployment

Just as financial markets regulate algorithmic trading, governments can regulate high‑autonomy AI agents in domains where systemic risk is high. For example, requiring immediate kill switches when an AI agent drifts beyond a reasonable threshold, or mandating human-in-the-loop checkpoints for continuous monitoring and anomaly detection, would promote technological stability and fairness.

5. Strengthening Automatic Stabilizers 

Expanding unemployment insurance, wage insurance and portable benefits would cushion income shocks and maintain consumer demand.


 

Collapse Is Unlikely, but Change Is Inevitable

The “2028 Global Intelligence Crisis” report is a useful provocation, but its core claim — a frictionless, irreversible collapse of white‑collar labor — is economically unlikely. Real economies are slower, messier and far more adaptive than the scenario assumes. 

Agentic AI will reshape white‑collar work, but the trajectory will be mediated by institutions, incentives and policy choices rather than dictated by technological capability alone. The real vulnerability isn’t that AI will outrun the economy, but rather that institutions may fail to adapt quickly enough to channel AI‑driven productivity gains into broad‑based prosperity. 

The future isn’t predetermined, but avoiding the worst‑case outcomes requires deliberate action now to ensure that rising productivity translates into stable incomes, resilient demand and a labor market that evolves rather than collapses.
