The age of agentic AI is drawing nearer as more businesses explore AI agents — artificial intelligence systems that can execute complicated, multi-step tasks without human intervention or rules to guide them. According to a 2025 McKinsey report, 39 percent of organizations have started experimenting with AI agents, while 23 percent are already scaling their use. As the excitement around this field grows, companies may rush to embrace the technology without much forethought, increasingly leading to a phenomenon known as “agent slop.”
What Is an AI Agent?
An AI agent is an AI-powered system capable of completing challenging, multi-step tasks with little or no human supervision, working toward a goal rather than following explicit, step-by-step instructions. AI agents can interact with the real world through physical forms like robots, drones and autonomous vehicles, and they can also function as software within a computer system.
Agent slop refers to low-quality work completed by AI agents that have been poorly designed, often lacking proper guardrails and guidelines. While this might seem like a minor issue, the mistakes can add up to major headaches for businesses.
“Workslop, broadly understood as work products made shoddily with AI, is annoying, but so far it comes from individual people using AI, and thus is limited in scale,” Matt Aaronson, senior director of product marketing at AI orchestration platform Tonkean, told Built In. “Agents can continue producing agent slop 24/7. And in a world where every employee inside a company can build and deploy agents, well, you’re potentially creating a massive mess.”
Addressing this problem is crucial for maximizing the benefits of AI agents, not to mention staying competitive in an enterprise environment where companies are under constant pressure to push their productivity gains further with artificial intelligence.
What Is Agent Slop, and Why Does It Happen?
Agent slop is just the latest form of so-called “slop” to plague AI users. Initially, AI slop referred more broadly to meaningless, AI-generated content produced across social media platforms like Sora and Meta AI. With instances arising in the workplace, “workslop” has come to refer to AI-generated work content that is poorly done and lacks any real substance. Now, AI agents are churning out the same kind of low-quality, useless content, earning the name “agent slop.”
As to why agent slop happens in the first place, the answer isn’t so simple. On one hand, the models powering these agents remain limited, often succumbing to hallucinations and requiring large volumes of high-quality, real-time data to function properly. Data that contains errors, missing values or even values that cover too wide a range can be enough to prevent an AI agent from fulfilling a specific task or workflow.
However, Andreas Welsch, founder and chief AI strategist at Intelligence Briefing and author of AI Leadership Handbook, believes the human element is just as important. In their eagerness to leverage agentic AI, business leaders may push employees to use this technology without offering much guidance or support, leading to the chaotic deployment of agents that haven’t been properly built and trained.
“This mandate, ‘You need to use more AI in your function,’ or, ‘You need to find ways to use more AI,’ oftentimes leads to a situation where employees are left to their own devices,” Welsch told Built In. “Users don’t necessarily know what they don’t know, so they try to figure this out on their own. They cut corners. They miss things that professional developers, that IT experts, would see more easily and more clearly.”
Why Should Businesses Be Concerned About Agent Slop?
Deploying AI agents without any coordination can result in agents getting in each other’s way, automating tasks that may have been better left untouched and causing general confusion among employees — all factors that can sow distrust toward AI. According to research by Pegasystems, 33 percent of workers doubt that agents can deliver high-quality work, and 30 percent don’t trust the accuracy of agent-generated responses. Agent slop feeds this skepticism and can spark outright resistance, undermining attempts to innovate with AI-based solutions.
And that’s just the internal damage that agent slop can inflict on a company. If the issue becomes external, an organization could jeopardize its standing and relationships with individual customers, longtime partners and the general public.
“If your business becomes known as delivering low-quality results or even incorrect results, you can expect your customers to demand a refund,” Welsch said. “Your brand image is very likely tarnished, especially if it makes it to the news. And there could even be legal action, because you might give some incorrect advice that somebody else is basing their decisions on.”
In particular, Welsch pointed to Deloitte’s debacle over a report it developed for Australia’s Department of Employment and Workplace Relations. The report contained fabricated citations and other errors caused by AI hallucinations, which Deloitte failed to catch. While the company provided a partial refund to the Australian government, it’s now known for having a “human intelligence problem.” To avoid this type of fallout, businesses may want to get on top of agent slop before it spirals out of control.
What Are Some Steps to Avoid Agent Slop?
Agent slop only becomes a problem if it goes unchecked. Here are a few steps to snuff it out in its earliest stages.
Set Realistic Expectations
Leaders need to emphasize AI’s shortcomings. Be sure all employees know that even the most sophisticated systems can still produce errors and experience hallucinations, so they must review work performed by AI tools and fact-check AI-generated content. Understanding AI’s limitations can also help teams decide which tasks to allot to AI agents and which ones require a human touch.
Offer Proper Skills Training
It’s not enough to encourage employees to adopt AI agents without adequate training. For instance, an Asana report found that about one-third of workers are unsure of which tasks to delegate to agents, resulting in employees constantly monitoring these tools. This indecision not only slows down agent adoption but also contributes to a scenario where agents create more work for employees and bring down their productivity.
Organizations can provide mentorships, group trainings, access to online courses and other resources to help employees cultivate their AI skills. This way, teams can better understand how to improve their workflows with AI agents, rather than waste time figuring out the best ways to use agents through trial and error.
Define a Clear Strategy Around AI Agents
Failing to communicate how a company plans to use AI agents could feed into employees’ worst fears. According to an Ernst & Young survey, 65 percent of employees who aren’t people managers worry about their job security when working with AI agents. These doubts can quickly transform into resistance if leaders don’t explain the exact role of agents in the business, so it’s best to clear up this issue before employees draw their own conclusions.
Develop a long-term roadmap that lays out how the organization plans to adopt, deploy and scale agentic AI. Once the details are finalized, share this strategy with all departments, so every employee understands how their roles could evolve alongside AI agents. Also establish company policies that spell out when agents should and shouldn’t be used, so employees don’t deploy their own one-off agents that could churn out slop over time.
Cultivate an AI-Friendly Culture
Agent slop can occur if employees don’t feel comfortable sharing with their managers and coworkers how they’re using AI agents. Welsch notes that leaders can easily address this hesitation by adding artificial intelligence to the agenda in regular team meetings. Creating space for employees to discuss their use of AI can encourage new ideas for leveraging these tools and garner more buy-in from all team members.
“If my manager says, ‘AI is really important,’ and I’m seeing this not only because they use it, but my peers are using this, well then, yeah, maybe we are normalizing this. Maybe this is the new way of working,” Welsch said. “And if I have a problem and I’m not so sure and I don’t want to talk to my manager about my concerns, at least I can talk to my peer who’s more advanced and who might know a trick or two.”
Invest in Agentic AI Tools
Businesses that are serious about adopting AI agents may also want to upgrade their tech stacks. Leaders can start by introducing an orchestration platform, which is essentially software that coordinates the actions of a network of AI agents. As Tonkean’s Aaronson notes, investing in such a platform can go a long way toward ensuring employees and agents work in harmony, reducing friction and the likelihood of agent slop occurring.
“It’s a tool for the conductor of your company’s operational orchestra,” Aaronson said. “That conductor doesn’t actually play the instruments, but they make sure the right instruments are being played at the right times. That discipline is what prevents slop.”
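To make the conductor metaphor concrete, here is a minimal, hypothetical sketch (in Python) of what orchestration boils down to: one object that routes each task to a single approved agent, applies basic output checks and escalates anything questionable to a human. The class and task names are illustrative only, not taken from Tonkean or any specific platform, and real orchestration products typically expose this behavior through configuration and monitoring rather than hand-written code.

```python
# A minimal, hypothetical sketch of agent orchestration: one "conductor" object
# decides which agent handles a task, checks the output against simple
# guardrails and escalates to a human when a check fails.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Task:
    kind: str     # e.g. "summarize_ticket", "draft_invoice_email"
    payload: str  # the input the agent will work on


class Orchestrator:
    def __init__(self) -> None:
        self._agents: dict[str, Callable[[str], str]] = {}

    def register(self, kind: str, agent: Callable[[str], str]) -> None:
        """Route all tasks of a given kind to a single, approved agent."""
        self._agents[kind] = agent

    def run(self, task: Task) -> str:
        # Guardrail 1: refuse task types no agent has been approved for,
        # instead of letting an arbitrary agent improvise.
        if task.kind not in self._agents:
            return f"[escalated to human] no approved agent for '{task.kind}'"

        result = self._agents[task.kind](task.payload)

        # Guardrail 2: reject obviously sloppy output (empty or suspiciously
        # short) before it reaches a customer or a downstream agent.
        if len(result.strip()) < 20:
            return f"[escalated to human] low-quality output for '{task.kind}'"
        return result


# Usage: the "agent" here is just a stand-in function; in practice it would
# call an LLM or an external tool.
def summarize_ticket(text: str) -> str:
    return f"Summary: {text[:80]}..."


conductor = Orchestrator()
conductor.register("summarize_ticket", summarize_ticket)
print(conductor.run(Task("summarize_ticket", "Customer reports invoices fail to export.")))
print(conductor.run(Task("draft_invoice_email", "...")))  # escalated: no approved agent
```

Even a toy version like this illustrates Aaronson’s point: deciding in advance which agent is allowed to handle which task, and checking its output before it moves on, is the discipline that keeps slop from propagating.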
Are AI Agents Still the Future of Work?
Despite the risks posed by agent slop, agentic AI is still considered the future of work — and with good reason. A PwC survey found that 66 percent of organizations adopting AI agents have enjoyed increased productivity, 57 percent have seen cost savings and 55 percent have witnessed faster decision-making. In this context, the promise of AI agents may be too good to pass up, even if the possibility of agent slop lingers.
Of course, excitement needs to be tempered with patience. Andrej Karpathy, co-founder of OpenAI, has pushed back against claims that 2025 is the “year of agents,” suggesting that companies may need to spend at least a decade perfecting these tools before unlocking their full potential. After all, agent slop, hallucinations and other challenges still must be overcome. But these issues may just be growing pains that are naturally part of the larger process of a novel technology reshaping society once again.
“Industrial revolutions inevitably produce some kind of chaos before they change the world. This go-round, part of that chaos will be slop,” Aaronson said. “In the long run, though, slop will simply accelerate the need for structure, governance and orchestration. And the organizations that embrace that now will be the ones that realize the full value of agents fastest.”
Frequently Asked Questions
How is agent slop different from other types of AI slop?
Originally, AI slop referred to low-quality, AI-generated content on social media platforms. Once employees started using AI to create work content that looked professional but still lacked substance, workslop became the next big buzzword. Now, agent slop refers to this same poor-quality, meaningless work produced specifically by AI agents. Agent slop has yet to go mainstream, but it may only be a matter of time as more organizations embrace agents.
Why does agent slop happen?
Like any AI tool, AI agents can still be affected by hallucinations and a lack of high-quality data. However, businesses rushing employees to use agents without proper training is another major issue. Without any kind of strategy or best practices, employees may randomly deploy their own agents that get in the way of other agents, perform unnecessary work and force teams to put out fires that didn’t need to happen in the first place.
How can companies address agent slop?
To properly integrate AI agents into their operations, businesses can develop clear strategies for deploying agents, educate employees on the shortcomings of AI and provide AI skills training across each department. In addition, managers can make time during meetings to discuss how employees are using AI agents, so everyone can learn from each other, feel more comfortable adopting the technology and be transparent about any use of those agents.
