AI in Motion - The Institute of Applied AI's note about AI, technology and where we go from here

The Hidden Cost of AI Slop

Written by Siobhán O'Leary | Apr 3, 2026 11:46:55 AM

Why verification, policy, and knowing when not to use AI are now business-critical capabilities.

The problem nobody budgets for

Every AI pitch deck promises faster outputs, fewer errors, more capacity. None of them include the line item for what happens when those outputs are wrong.

Stanford and BetterUp research estimates that around 15% of the work US desk workers receive from colleagues is now what researchers call "workslop": AI-generated output that looks polished, reads confidently, and is either inaccurate, context-free, or both. Employees spend roughly 4.3 hours per week checking this content, costing approximately $14,200 per employee per year in lost productivity. For a team of ten knowledge workers in a business turning over €5 to €50 million, that is more than $140,000 a year in verification overhead, before any productivity gain lands on the balance sheet.
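The per-team figure above is straightforward arithmetic. A minimal sketch, using only the numbers quoted from the Stanford/BetterUp estimate (not independently verified data), shows how verification overhead scales with headcount:

```python
# Illustrative only: scales the workslop figures quoted above.
# Both inputs are assumptions taken from the article's cited research.

HOURS_PER_WEEK = 4.3            # time spent checking AI-generated work
COST_PER_EMPLOYEE = 14_200      # approximate annual cost per employee (USD)

def verification_overhead(headcount: int) -> int:
    """Annual verification cost (USD) for a team of `headcount` people."""
    return headcount * COST_PER_EMPLOYEE

print(verification_overhead(10))   # ten knowledge workers -> 142000
```

The point of the sketch is that the cost is linear in headcount: every additional employee receiving unverified AI output adds the same overhead, regardless of any productivity gain elsewhere.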

Globally, business losses attributed to AI hallucinations reached an estimated $67.4 billion in 2024, with 82% of AI bugs in production traced to hallucinations rather than system failures.

This is not a technology problem. It is a capability gap. Organisations rushed to deploy AI without building the operational discipline to use it well. The result is not just financial. Correcting AI slop leaves employees frustrated, confused, and disengaged. Trust erodes. Adoption stalls. The tool gets blamed, but the root cause is almost always the same: no policy, no verification process, and no clarity about where AI should and should not be used.

Three ways AI slop enters your business

1. Shadow AI

Recent data shows that 57% of employees admit to inputting sensitive company data into free-tier AI tools via personal accounts. Organisations with 11 to 50 employees are among the most exposed, averaging 269 unsanctioned AI tools per 1,000 employees. This is not an AI strategy. This is AI happening to your business without your knowledge or consent. When there is no policy, there is no quality control.

2. Prompt theatre

Some teams use AI to look productive rather than to solve defined problems. Reports get generated because they can be, not because they are needed. Slide decks appear in minutes, filled with plausible-sounding analysis that nobody verified. The volume of output goes up. The quality of decisions does not.

3. Content recycling

When AI-generated content is published, shared internally, or stored in knowledge bases without review, it can feed back into the retrieval pipelines of other AI tools. The result is a gradual erosion of quality where models draw on content that was itself generated rather than verified. What started as one unchecked output becomes systemic.

What good looks like: three building blocks

Addressing AI slop is not about slowing down. It is about building the capability to use AI well. Three building blocks consistently separate organisations getting real value from those accumulating hidden cost.

1. A clear AI use policy

Not a 40-page compliance document. A readable, one-page framework that answers three questions for every team member: What AI tools are we allowed to use? What information can and cannot go into them? Who do I ask when I am not sure?

This is the simplest governance intervention an organisation can make, and one of the most effective. It does not require a legal team. It requires a leadership decision to be clear about boundaries.

2. Verification built into the workflow

Checking AI output should not be an afterthought. It should be a designed step in any workflow where AI contributes to a deliverable, a decision, or a customer-facing communication. This means defining who checks AI output, at what stage, and against what criteria, treating verification the same way you treat quality assurance in any other process.

For most organisations, this does not require new tools. It requires new habits and clear accountability.
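One lightweight way to make that habit concrete is to record each verification step explicitly, so nothing AI-assisted ships without a named reviewer and a stated criterion. The sketch below is illustrative only, not a prescribed tool; the `Deliverable` and `sign_off` names are hypothetical:

```python
# Minimal sketch of verification as a designed workflow step.
# All names here (Deliverable, sign_off, ready_to_ship) are
# hypothetical illustrations, not a real product or library.
from dataclasses import dataclass, field

@dataclass
class Deliverable:
    title: str
    ai_assisted: bool
    checks: list = field(default_factory=list)  # (reviewer, criterion) pairs

    def sign_off(self, reviewer: str, criterion: str) -> None:
        """Record who verified the output and against what criterion."""
        self.checks.append((reviewer, criterion))

    def ready_to_ship(self) -> bool:
        # AI-assisted work needs at least one recorded human check;
        # purely human work follows the normal QA process.
        return (not self.ai_assisted) or len(self.checks) > 0

report = Deliverable("Q3 market summary", ai_assisted=True)
print(report.ready_to_ship())   # False: blocked until someone verifies it
report.sign_off("reviewer_name", "figures traced to source data")
print(report.ready_to_ship())   # True
```

The design choice worth copying is not the code but the shape: accountability (who), stage (when), and criteria (against what) are captured as data, which is exactly the "new habits and clear accountability" the process requires.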

3. Use case clarity

The organisations getting the most from AI are not using it everywhere. They are using it deliberately, in defined workflows, with clear success measures.

Use case clarity means knowing where AI genuinely adds value, where it needs tight human oversight, and where it simply should not be used. Most organisations skip this step entirely and go straight to tool procurement. The result is the AI equivalent of buying gym equipment nobody uses: expensive, well-intentioned, and gathering dust.

Building AI Capability. Responsibly.

AI slop is a symptom. The underlying condition is a capability gap.

Most organisations now have access to powerful AI tools. What they lack is the practical literacy, governance, and workflow design to get value from them without creating new risks.

At The Institute of Applied AI, we work with organisations to close that gap. Not by adding more tools, but by building the internal muscle to use AI deliberately, safely, and with confidence. That starts with the same three building blocks outlined here: policy, verification, and use case design.

If you recognise your own organisation in these patterns, you are not behind. You are where most organisations are right now. The difference is what you do next.

Start here: download our one-page AI use policy template

A practical, ready-to-adapt framework designed for teams that need clarity without complexity. It covers which AI tools are permitted, what data is off limits, and who is accountable when questions arise.

It is free, it takes ten minutes to adapt, and it is the single most effective first step from AI slop to AI capability.

[Download the AI Use Policy Template →]

Enter your email to receive the template and occasional updates from The Institute of Applied AI. No spam. Unsubscribe anytime.

Sources

  1. Stanford / BetterUp (2025). Research on AI-generated "workslop" and employee productivity impact. betterup.com
  2. Four Dots (2024). "Business Impact of AI Hallucinations: Rates and Ranks." fourdots.com/business-impact-of-ai-hallucinations-rates-and-ranks
  3. TechTarget (2026). "AI Slop: The Hidden Enterprise Risk CIOs Can't Ignore." techtarget.com
  4. ESCP Business School (2026). "How to Prevent AI Slop from Taking Over Your Workplace." escp.eu
  5. TechCentral Ireland (2025). "Risky Shadow AI Use Remains Widespread." techcentral.ie

About the author: Siobhán O'Leary is an Applied AI Advisor and co-founder of The Institute of Applied AI, helping organisations build AI capability grounded in literacy, governance, and practical adoption. She publishes AI in Motion, a weekly newsletter for leaders navigating AI beyond the headlines.