<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
  <channel>
    <title>AI In Motion - The Institute of Applied AI - a note about AI, technology and where we go from here</title>
    <link>https://www.theinstituteofappliedai.com/insights</link>
    <description>AI In Motion - a note about AI, technology and where we go from here</description>
    <language>en</language>
    <pubDate>Thu, 09 Apr 2026 09:01:13 GMT</pubDate>
    <dc:date>2026-04-09T09:01:13Z</dc:date>
    <dc:language>en</dc:language>
    <item>
      <title>The Hidden Cost of AI Slop</title>
      <link>https://www.theinstituteofappliedai.com/insights/the-hidden-cost-of-ai-slop</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://www.theinstituteofappliedai.com/insights/the-hidden-cost-of-ai-slop" title="" class="hs-featured-image-link"&gt; &lt;img src="https://www.theinstituteofappliedai.com/hubfs/A%20laptop%20flooded%20with%20crumpled%20papers.png" alt="The Hidden Cost of AI Slop" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;h3&gt;Why verification, policy, and knowing when not to use AI are now business-critical capabilities.&lt;/h3&gt; 
&lt;h2&gt;The problem nobody budgets for&lt;/h2&gt; 
&lt;p&gt;Every AI pitch deck promises faster outputs, fewer errors, more capacity. None of them include the line item for what happens when those outputs are wrong.&lt;/p&gt;</description>
      <content:encoded>&lt;h3&gt;Why verification, policy, and knowing when not to use AI are now business-critical capabilities.&lt;/h3&gt; 
&lt;h2&gt;The problem nobody budgets for&lt;/h2&gt; 
&lt;p&gt;Every AI pitch deck promises faster outputs, fewer errors, more capacity. None of them include the line item for what happens when those outputs are wrong.&lt;/p&gt;  
&lt;p&gt;Stanford and BetterUp research estimates that around 15% of the work US desk workers receive from colleagues is now what researchers call "workslop": AI generated output that looks polished, reads confidently, and is either inaccurate, context-free, or both. Employees spend roughly 4.3 hours per week checking this content, costing approximately $14,200 per employee per year in lost productivity. For a team of ten knowledge workers in a business turning over €5 to €50 million, that is over €140,000 annually in verification overhead, before any productivity gain lands on the balance sheet.&lt;/p&gt; 
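The cost arithmetic above can be reproduced in a few lines. A minimal sketch in Python, assuming a working year of roughly 48 weeks and near-parity EUR/USD conversion (both are assumptions for illustration, not figures from the research):

```python
# Verification overhead implied by the Stanford/BetterUp "workslop" figures.
HOURS_PER_WEEK = 4.3          # hours spent checking AI generated content
WEEKS_PER_YEAR = 48           # assumption: working weeks per year
COST_PER_EMPLOYEE = 14_200    # USD per employee per year, from the research
TEAM_SIZE = 10

hours_per_year = HOURS_PER_WEEK * WEEKS_PER_YEAR          # ~206 hours
implied_hourly_rate = COST_PER_EMPLOYEE / hours_per_year  # ~$69/hour
team_cost = COST_PER_EMPLOYEE * TEAM_SIZE                 # $142,000

print(f"{hours_per_year:.0f} h/yr, ${implied_hourly_rate:.0f}/h, ${team_cost:,}")
```

At near-parity exchange rates, $142,000 for a ten-person team lines up with the "over €140,000" figure above.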
&lt;p&gt;Globally, business losses attributed to AI hallucinations reached an estimated $67.4 billion in 2024, with 82% of AI bugs in production traced to hallucinations rather than system failures.&lt;/p&gt; 
&lt;p&gt;This is not a technology problem. It is a capability gap. Organisations rushed to deploy AI without building the operational discipline to use it well. The result is not just financial. Correcting AI slop leaves employees frustrated, confused, and disengaged. Trust erodes. Adoption stalls. The tool gets blamed, but the root cause is almost always the same: no policy, no verification process, and no clarity about where AI should and should not be used.&lt;/p&gt; 
&lt;h3&gt;Three ways AI slop enters your business&lt;/h3&gt; 
&lt;h4&gt;1. Shadow AI&lt;/h4&gt; 
&lt;p&gt;Recent data shows that 57% of employees admit to inputting sensitive company data into free-tier AI tools via personal accounts. Organisations with 11 to 50 employees are among the most exposed, averaging 269 unsanctioned AI tools per 1,000 employees. This is not an AI strategy. This is AI happening to your business without your knowledge or consent. When there is no policy, there is no quality control.&lt;/p&gt; 
&lt;h4&gt;2. Prompt theatre&lt;/h4&gt; 
&lt;p&gt;Some teams use AI to look productive rather than to solve defined problems. Reports get generated because they can be, not because they were needed. Slide decks appear in minutes, filled with plausible-sounding analysis that nobody verified. The volume of output goes up. The quality of decisions does not.&lt;/p&gt; 
&lt;h4&gt;3. Content recycling&lt;/h4&gt; 
&lt;p&gt;When AI generated content is published, shared internally, or stored in knowledge bases without review, it can feed back into the retrieval pipelines of other AI tools. The result is a gradual erosion of quality where models draw on content that was itself generated rather than verified. What started as one unchecked output becomes systemic.&lt;/p&gt;  
&lt;h3&gt;What good looks like: three building blocks&lt;/h3&gt; 
&lt;p&gt;Addressing AI slop is not about slowing down. It is about building the capability to use AI well. Three building blocks consistently separate organisations getting real value from those accumulating hidden cost.&lt;/p&gt; 
&lt;h4&gt;1. A clear AI use policy&lt;/h4&gt; 
&lt;p&gt;Not a 40-page compliance document. A readable, one-page framework that answers three questions for every team member: What AI tools are we allowed to use? What information can and cannot go into them? Who do I ask when I am not sure?&lt;/p&gt; 
&lt;p&gt;This is the simplest governance intervention an organisation can make, and one of the most effective. It does not require a legal team. It requires a leadership decision to be clear about boundaries.&lt;/p&gt; 
&lt;h4&gt;2. Verification built into the workflow&lt;/h4&gt; 
&lt;p&gt;Checking AI output should not be an afterthought. It should be a designed step in any workflow where AI contributes to a deliverable, a decision, or a customer-facing communication. This means defining who checks AI output, at what stage, and against what criteria, treating verification the same way you treat quality assurance in any other process.&lt;/p&gt; 
&lt;p&gt;For most organisations, this does not require new tools. It requires new habits and clear accountability.&lt;/p&gt; 
&lt;h4&gt;3. Use case clarity&lt;/h4&gt; 
&lt;p&gt;The organisations getting the most from AI are not using it everywhere. They are using it deliberately, in defined workflows, with clear success measures.&lt;/p&gt; 
&lt;p&gt;Use case clarity means knowing where AI genuinely adds value, where it needs tight human oversight, and where it simply should not be used. Most organisations skip this step entirely and go straight to tool procurement. The result is the AI equivalent of buying gym equipment nobody uses: expensive, well-intentioned, and gathering dust.&lt;/p&gt;  
&lt;h3&gt;Building AI Capability. Responsibly.&lt;/h3&gt; 
&lt;p&gt;AI slop is a symptom. The underlying condition is a capability gap.&lt;/p&gt; 
&lt;p&gt;Most organisations now have access to powerful AI tools. What they lack is the practical literacy, governance, and workflow design to get value from them without creating new risks.&lt;/p&gt; 
&lt;p&gt;At The Institute of Applied AI, we work with organisations to close that gap. Not by adding more tools, but by building the internal muscle to use AI deliberately, safely, and with confidence. That starts with the same three building blocks outlined here: policy, verification, and use case design.&lt;/p&gt; 
&lt;p&gt;If you recognise your own organisation in these patterns, you are not behind. You are where most organisations are right now. The difference is what you do next.&lt;/p&gt; 
&lt;h3&gt;Start here: download our one-page AI use policy template&lt;/h3&gt; 
&lt;p&gt;A practical, ready-to-adapt framework designed for teams that need clarity without complexity. It covers what AI tools are permitted, what data is off limits, and who is accountable when questions arise.&lt;/p&gt; 
&lt;p&gt;It is free, it takes ten minutes to adapt, and it is the single most effective first step from AI slop to AI capability.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;[Download the AI Use Policy Template →]&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;&lt;em&gt;Enter your email to receive the template and occasional updates from The Institute of Applied AI. No spam. Unsubscribe anytime.&lt;/em&gt;&lt;/p&gt; 
&lt;h4&gt;Sources&lt;/h4&gt; 
&lt;ol&gt; 
 &lt;li&gt;Stanford / BetterUp (2025). Research on AI generated "workslop" and employee productivity impact. &lt;a href="https://www.betterup.com"&gt;BetterUp&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;Four Dots (2024). "Business Impact of AI Hallucinations: Rates and Ranks." &lt;a href="https://fourdots.com/business-impact-of-ai-hallucinations-rates-and-ranks"&gt;fourdots.com/business-impact-of-ai-hallucinations-rates-and-ranks&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;TechTarget (2026). "AI Slop: The Hidden Enterprise Risk CIOs Can't Ignore." &lt;a href="https://www.techtarget.com/searchcio/feature/AI-Slop-The-hidden-enterprise-risk-CIOs-cant-ignore"&gt;techtarget.com&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;ESCP Business School (2026). "How to Prevent AI Slop from Taking Over Your Workplace." &lt;a href="https://escp.eu/thechoice/tomorrow-choices/how-to-prevent-ai-slop-from-taking-over-your-workplace/"&gt;escp.eu&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;TechCentral Ireland (2025). "Risky Shadow AI Use Remains Widespread." &lt;a href="https://www.techcentral.ie/risky-shadow-ai-use-remains-widespread/"&gt;techcentral.ie&lt;/a&gt;&lt;/li&gt; 
&lt;/ol&gt; 
&lt;p&gt;&lt;strong&gt;About the author:&lt;/strong&gt; Siobhán O'Leary is an Applied AI Advisor and co-founder of The Institute of Applied AI, helping organisations build AI capability grounded in literacy, governance, and practical adoption. She publishes &lt;em&gt;AI in Motion&lt;/em&gt;, a weekly newsletter for leaders navigating AI beyond the headlines.&lt;/p&gt; 
&lt;img src="https://track-eu1.hubspot.com/__ptq.gif?a=147671322&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fwww.theinstituteofappliedai.com%2Finsights%2Fthe-hidden-cost-of-ai-slop&amp;amp;bu=https%253A%252F%252Fwww.theinstituteofappliedai.com%252Finsights&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>Build AI Capability</category>
      <pubDate>Fri, 03 Apr 2026 11:46:55 GMT</pubDate>
      <author>info@theinstituteofappliedai.com (Siobhán O'Leary)</author>
      <guid>https://www.theinstituteofappliedai.com/insights/the-hidden-cost-of-ai-slop</guid>
      <dc:date>2026-04-03T11:46:55Z</dc:date>
    </item>
    <item>
      <title>When Machines Started Thinking</title>
      <link>https://www.theinstituteofappliedai.com/insights/when-machines-started-thinking</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://www.theinstituteofappliedai.com/insights/when-machines-started-thinking" title="" class="hs-featured-image-link"&gt; &lt;img src="https://www.theinstituteofappliedai.com/hubfs/freepik_abstract-image-deep-navy-and-gold-tones-compass-or-map-on-a-table-alongside-a-smartphone-sense-of-navigation-and-orientation-contemplative-rather-than-dramatic.-no-people.-no-text-in-imag_0001.png" alt="When Machines Started Thinking" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;h3&gt;&lt;span style="color: #333333; background-color: #ffffff;"&gt;The Shift from Industrial to Cognitive Revolution&lt;/span&gt;&lt;/h3&gt; 
&lt;p&gt;The Industrial Revolution gave machines our muscles. The question we now face is whether we are handing over our minds.&lt;/p&gt;</description>
      <content:encoded>&lt;h3&gt;&lt;span style="color: #333333; background-color: #ffffff;"&gt;The Shift from Industrial to Cognitive Revolution&lt;/span&gt;&lt;/h3&gt; 
&lt;p&gt;The Industrial Revolution gave machines our muscles. The question we now face is whether we are handing over our minds.&lt;/p&gt;  
&lt;p&gt;This is not a dramatic claim. It is the logical conclusion of a shift already well underway, one that the World Economic Forum described in March 2026 as "a structural change in how cognition itself is governed." Where the Industrial Revolution automated physical effort while leaving human reasoning intact, the current revolution is different in kind, not just degree. AI systems are no longer simply executing instructions. They are shaping how problems are framed, how options are surfaced, and increasingly, how decisions are made.&lt;/p&gt; 
&lt;h4&gt;&lt;strong&gt;Two Revolutions, One Critical Difference&lt;/strong&gt;&lt;/h4&gt; 
&lt;p&gt;The factories and steam engines of the 18th and 19th centuries transformed what human hands could do. They created enormous productivity gains and reshaped entire economies. But through all of it, the human brain remained sovereign. Humans decided. Machines executed.&lt;/p&gt; 
&lt;p&gt;Yuval Noah Harari, in his 2024 book &lt;i&gt;Nexus&lt;/i&gt;, draws the distinction that matters here. Every previous technology, from the printing press to the nuclear bomb, lacked one capability: independent decision-making. As Harari puts it, AI is not a tool in the way all previous inventions were. It is an agent. It can process information, generate conclusions, and act on them without waiting to be told what to think.&lt;/p&gt; 
&lt;p&gt;That is a categorically different kind of machine. And it demands a categorically different kind of response.&lt;/p&gt; 
&lt;h4&gt;&lt;strong&gt;The Google Maps Problem&lt;/strong&gt;&lt;/h4&gt; 
&lt;p&gt;I used to have an excellent sense of direction. I never got lost. I read landscapes, remembered routes, and built mental maps instinctively. Then GPS arrived, and I began outsourcing navigation. Gradually, without any single conscious decision, I stopped exercising that capability. Now I carry a power bank everywhere, because the thought of my phone dying and leaving me without directions genuinely unsettles me.&lt;/p&gt; 
&lt;p&gt;That is not a technology story. It is a cognitive dependency story.&lt;/p&gt; 
&lt;p&gt;The capability did not disappear overnight. It eroded quietly, through a thousand small delegations that each felt entirely reasonable at the time. The GPS was faster, more reliable, more convenient. The choice to use it was always rational. The cumulative effect was not.&lt;/p&gt; 
&lt;p&gt;Now scale that to how your team writes strategy documents, interprets data, drafts communications, or records decisions. Automated meeting transcriptions generate minutes, action lists, and follow-up emails without anyone engaging in the discipline of listening, synthesising, and remembering. AI tools draft reports that are accepted, forwarded, and acted upon without the author fully understanding what they contain. Each individual shortcut is defensible. The pattern is not.&lt;/p&gt; 
&lt;h4&gt;&lt;strong&gt;The Risk Nobody Is Measuring&lt;/strong&gt;&lt;/h4&gt; 
&lt;p&gt;The WEF identifies this dynamic as "delegated cognition": the outsourcing of mental effort to automated systems at scale. The economic incentives are strong. Research shows that AI tools significantly reduce task completion times, accelerating a substantial share of knowledge work across most professional roles. The productivity case is real and well evidenced.&lt;/p&gt; 
&lt;p&gt;What is less visible is automation bias: the tendency to over-trust machine-generated outputs because they appear confident and neutral. Fluency signals authority. A well-structured AI response feels like a considered answer, even when it is an approximation, an average, or simply wrong. The human brain, wired to conserve energy, accepts it. The work of critical evaluation quietly stops happening.&lt;/p&gt; 
&lt;p&gt;At the organisational level, this compounds. When AI shapes how a problem is framed before a team begins solving it, the frame itself becomes invisible. When accountability for AI-assisted decisions becomes diffuse, the quality of judgement degrades. As one analysis of AI governance put it, AI does not just decide outcomes. It decides what counts as a decision.&lt;/p&gt; 
&lt;h4&gt;&lt;strong&gt;What Informed Agency Looks Like&lt;/strong&gt;&lt;/h4&gt; 
&lt;p&gt;The answer is not to resist AI adoption. The productivity case is too strong, and the tools are too embedded in how work now functions. The answer is to remain the author of your own thinking while using them.&lt;/p&gt; 
&lt;p&gt;At the Institute of Applied AI, we frame this around the principle of Informed Agency: the commitment that humans retain the ability to interrogate, challenge, and override AI at every meaningful decision point.&lt;/p&gt; 
&lt;p&gt;In practice, this means three things. First, thinking before delegating. Write your own first draft before asking AI to improve it. Form your own view before asking AI to validate it. The order of operations matters. Second, building verification into workflow by design, not as an afterthought. AI output that flows directly into decisions, communications, or customer-facing work without a defined human review step is unmanaged risk dressed as efficiency. Third, knowing where not to use AI at all. Not every task benefits from it. Forcing AI into the wrong workflow creates cost, not value.&lt;/p&gt; 
&lt;p&gt;These are not technology questions. They are leadership and capability questions. They require deliberate choices, clear governance, and&amp;nbsp;crucially, the ongoing practice of the human skills that AI cannot replicate: contextual judgement, accountability, and the capacity to say that something is wrong even when it reads well.&lt;/p&gt; 
&lt;h4 style="line-height: 1.5;"&gt;&lt;strong&gt;A Closing Thought&lt;/strong&gt;&lt;/h4&gt; 
&lt;p style="line-height: 1.5;"&gt;The Cognitive Revolution is not something happening to us. It is something we are choosing, one delegated task at a time. The most future-ready individuals and organisations will not be those who use AI the most. They will be those who understand what they are doing when they use it, and who retain the capability to do otherwise.&lt;/p&gt; 
&lt;p&gt;Carry the power bank by all means. Just make sure you still remember how to read the road.&lt;/p&gt; 
&lt;h4&gt;&lt;strong&gt;References&lt;/strong&gt;&lt;/h4&gt; 
&lt;ul&gt; 
 &lt;li&gt;World Economic Forum (2026). &lt;i&gt;Why governing AI means governing cognition&lt;/i&gt;. Anna Pramod and Vivin Rajasekharan Nair. weforum.org&lt;/li&gt; 
 &lt;li&gt;Harari, Y.N. (2024). &lt;i&gt;Nexus: A Brief History of Information Networks from the Stone Age to AI&lt;/i&gt;. Fern Press.&lt;/li&gt; 
 &lt;li&gt;McKinsey Global Institute (2025). &lt;i&gt;Superagency in the Workplace: Empowering People to Unlock AI's Full Potential&lt;/i&gt;. mckinsey.com&lt;/li&gt; 
 &lt;li&gt;OECD (2025). &lt;i&gt;Venture Capital Investments in Artificial Intelligence through 2025&lt;/i&gt;. oecd.org&lt;/li&gt; 
 &lt;li&gt;Bloomberg Intelligence (2023). &lt;i&gt;Generative AI to Become a $1.3 Trillion Market by 2032&lt;/i&gt;. bloomberg.com&lt;/li&gt; 
 &lt;li&gt;International Energy Agency (2024). &lt;i&gt;Energy and AI: Executive Summary&lt;/i&gt;. iea.org&lt;/li&gt; 
 &lt;li&gt;BetterUp and Stanford University (2024). &lt;i&gt;The Hidden Cost of AI-Generated Work&lt;/i&gt;. Referenced in TechTarget: &lt;i&gt;AI Slop — The Hidden Enterprise Risk CIOs Can't Ignore&lt;/i&gt;. techtarget.com&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p style="font-weight: bold;"&gt;&lt;span style="background-color: transparent;"&gt;This article is part of the Build AI Capability series, exploring what responsible AI adoption looks like in practice. &lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;&lt;i&gt;Siobhán O'Leary is an Applied AI Advisor and co-founder of The Institute of Applied AI, helping organisations build AI capability grounded in literacy, governance, and practical adoption.&lt;/i&gt;&lt;/p&gt;  
&lt;img src="https://track-eu1.hubspot.com/__ptq.gif?a=147671322&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fwww.theinstituteofappliedai.com%2Finsights%2Fwhen-machines-started-thinking&amp;amp;bu=https%253A%252F%252Fwww.theinstituteofappliedai.com%252Finsights&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>Build AI Capability</category>
      <pubDate>Fri, 03 Apr 2026 11:34:38 GMT</pubDate>
      <author>info@theinstituteofappliedai.com (Siobhán O'Leary)</author>
      <guid>https://www.theinstituteofappliedai.com/insights/when-machines-started-thinking</guid>
      <dc:date>2026-04-03T11:34:38Z</dc:date>
    </item>
    <item>
      <title>The Brewer Who Taught the World to Think in Small Samples</title>
      <link>https://www.theinstituteofappliedai.com/insights/the-brewer-who-taught-the-world-to-think-in-small-samples</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://www.theinstituteofappliedai.com/insights/the-brewer-who-taught-the-world-to-think-in-small-samples" title="" class="hs-featured-image-link"&gt; &lt;img src="https://www.theinstituteofappliedai.com/hubfs/The%20Brewer%20Who%20Taught%20the%20World%20to%20Think%20in%20Small%20Samples.png" alt="The Brewer Who Taught the World to Think in Small Samples" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;&lt;i&gt;&lt;span style="color: #666666;"&gt;How a Guinness statistician laid the foundations for modern AI evaluation&lt;/span&gt;&lt;/i&gt;&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;&lt;i&gt;&lt;span style="color: #666666;"&gt;How a Guinness statistician laid the foundations for modern AI evaluation&lt;/span&gt;&lt;/i&gt;&lt;/p&gt;  
&lt;p&gt;Most people know Guinness for the pint. Fewer know it gave the world one of the most important statistical tools ever developed. The story behind it is not just a good pub anecdote. It carries a lesson that matters more today than it did a century ago, particularly for anyone making decisions with AI.&lt;/p&gt; 
&lt;h3&gt;&lt;span style="font-weight: bold;"&gt;The Problem at St James’s Gate&lt;/span&gt;&lt;/h3&gt; 
&lt;p&gt;&lt;span style="font-weight: bold;"&gt;&lt;/span&gt;In the early 1900s, Guinness had a quality problem. Not with the stout itself, but with the raw materials that went into it. Barley, hops, and malt varied from batch to batch, and the brewery needed a reliable way to assess quality from small samples. The existing statistical methods of the time were designed for large datasets with neat, predictable distributions. Guinness had neither.&lt;/p&gt; 
&lt;p&gt;The company had started hiring Oxford and Cambridge science graduates, embedding rigorous method into the business of brewing. One of those hires was William Sealy Gosset, a chemist by training, a brewer by trade, and (as it turned out) a quiet revolutionary in how we make decisions from limited information.&lt;/p&gt; 
&lt;p&gt;Gosset’s challenge was deceptively simple: how do you draw trustworthy conclusions when you only have a small number of observations? A handful of barley samples. A few batches of malt. High stakes, limited data.&lt;/p&gt; 
&lt;h3&gt;A Pseudonym and a Breakthrough&lt;/h3&gt; 
&lt;p&gt;To solve it, Gosset derived the t‐distribution, a mathematical framework that allowed reliable inferences from small samples. He published his findings in 1908 under the pseudonym “Student” because Guinness, understandably, did not want competitors knowing how seriously it took internal analytics. The resulting method became known as Student’s t‐test.&lt;/p&gt; 
&lt;p&gt;It is not an exaggeration to say this single contribution changed the trajectory of modern science. When researchers today say a result is “statistically significant,” they are often relying on some form of Gosset’s method. It underpins clinical trials in medicine, quality control in manufacturing, A/B testing in digital marketing, and thousands of other applications where decisions must be made from imperfect, incomplete data.&lt;/p&gt; 
&lt;p&gt;What began as a brewing problem became a foundational building block for experimentation and decision making across virtually every industry.&lt;/p&gt; 
&lt;h3&gt;Why This Matters for AI&lt;/h3&gt; 
&lt;p&gt;Here is where the story meets the present moment.&lt;br&gt;Modern AI systems, especially large language models, are trained on vast datasets. The scale is staggering and the results often impressive. But many real-world deployments happen in what statisticians call “small data” regimes: a niche industrial process, a single hospital’s patient cohort, one company’s customer journey. In these contexts, scale alone does not guarantee reliability. You still need statistically robust methods, direct descendants of Gosset’s t‐test, to determine whether an observed improvement from an AI system is genuine or simply noise dressed up as insight.&lt;/p&gt; 
&lt;p&gt;Consider AI-powered A/B testing. When a platform tells you that variant B of your website outperforms variant A, the underlying maths often traces back to the same logic Gosset used to compare batches of hops. The question is identical: is this difference real, or could it have happened by chance?&lt;/p&gt; 
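That question can be made concrete in a few lines. A minimal sketch of the pooled two-sample Student's t statistic in plain Python; the measurements for the two variants are invented for illustration, and the 2.10 threshold is the standard two-tailed 5% critical value for 18 degrees of freedom from t-distribution tables:

```python
from statistics import mean, variance
from math import sqrt

def students_t(sample_a, sample_b):
    """Pooled two-sample Student's t statistic (equal-variance form)."""
    n1, n2 = len(sample_a), len(sample_b)
    s1, s2 = variance(sample_a), variance(sample_b)  # sample variances (n-1)
    pooled = ((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)
    return (mean(sample_a) - mean(sample_b)) / sqrt(pooled * (1 / n1 + 1 / n2))

# Hypothetical small-sample measurements for variants A and B
a = [5.1, 4.9, 5.3, 5.0, 4.8, 5.2, 5.1, 4.9, 5.0, 5.2]
b = [5.4, 5.6, 5.3, 5.5, 5.7, 5.4, 5.6, 5.5, 5.3, 5.6]

t = students_t(a, b)
# With n1 + n2 - 2 = 18 degrees of freedom, |t| > ~2.10 means the
# difference is significant at the 5% level (two-tailed).
print(abs(t) > 2.10)
```

Here |t| comfortably exceeds the threshold, so the difference between the two variants is very unlikely to be chance. With identical samples the statistic is zero, exactly as Gosset's logic requires.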
&lt;p&gt;The same principle applies to responsible AI evaluation. Assessing a model for bias, safety, or fairness requires carefully designed experiments, controlled comparisons between versions, and significance testing on relatively small, sensitive datasets (for example, performance across different demographic groups). Relying on large aggregate benchmarks alone can mask the very problems that matter most.&lt;/p&gt; 
&lt;h3&gt;Methodology Over Magnitude&lt;/h3&gt; 
&lt;p&gt;In an era obsessed with scale (bigger models, more parameters, more compute), Gosset’s legacy offers a useful counterpoint: methodology matters as much as magnitude. The right approach lets you extract meaningful insight from fewer, better curated data points. This is not a nostalgic argument. It is directly relevant to privacy preserving AI, edge computing, regulated industries, and any organisation that cannot simply throw more data at a problem.&lt;/p&gt; 
&lt;p&gt;For leaders making decisions about AI adoption, the lesson is practical. Before asking “how big is the model?” or “how much data do we need?”, it is worth asking: do we have the statistical rigour to know whether what we are seeing is signal or noise? That question is as old as a brewery in Dublin and as current as any AI deployment in 2026.&lt;/p&gt; 
&lt;h3&gt;From Beer to Boardroom&lt;/h3&gt; 
&lt;p&gt;Gosset never sought fame. He published under a pen name and spent his career inside the brewery. Yet his work sits beneath virtually every “smart” optimisation we encounter today, from drug discovery to recommendation engines, from website funnels to supply chain analytics.&lt;/p&gt; 
&lt;p&gt;The thread connecting 1908 to 2026 is straightforward: trustworthy decisions from imperfect, limited data. That was Gosset’s problem. It remains ours. The tools have changed. The principle has not.&lt;/p&gt; 
&lt;p&gt;Next time someone tells you AI will solve everything with enough data, you might remind them that one of the most consequential breakthroughs in the history of statistics came from a man who had too little of it, and a brewery that had the good sense to hire scientists.&lt;/p&gt; 
&lt;p&gt;&lt;span style="font-size: 16px;"&gt;&lt;span style="font-weight: bold;"&gt;About the author:&lt;/span&gt;&amp;nbsp; &lt;span style="font-style: italic; color: #666666;"&gt;&lt;a href="https://www.linkedin.com/in/siobhandoleary/"&gt;Siobhán O’Leary&lt;/a&gt; is an Applied AI Advisor and founder of The Institute of Applied AI, helping organisations adopt AI with clarity, capability&amp;nbsp;and responsibility.&lt;/span&gt;&lt;/span&gt;&lt;br&gt;&lt;span style="font-weight: bold;"&gt;&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;&lt;span style="font-weight: bold; font-size: 14px;"&gt;Sources&lt;/span&gt;&lt;br&gt;&lt;span style="font-size: 14px;"&gt;Scientific American: &lt;a href="https://www.scientificamerican.com/article/how-the-guinness-brewery-invented-the-most-important-statistical-method-in/"&gt;How the Guinness Brewery Invented the Most Important Statistical Method&lt;/a&gt;&lt;/span&gt;&lt;br&gt;&lt;span style="font-size: 14px;"&gt;Minitab Blog: &lt;a href="https://blog.minitab.com/en/blog/michelle-paret/guinness-t-tests-and-proving-a-pint-really-does-taste-better-in-ireland"&gt;Guinness, t-Tests, and Proving a Pint Really Does Taste Better in Ireland&lt;/a&gt;&lt;/span&gt;&lt;br&gt;&lt;span style="font-size: 14px;"&gt;The Conversation: &lt;/span&gt;&lt;a href="https://theconversation.com/the-genius-at-guinness-and-his-statistical-legacy-93134"&gt;&lt;span style="font-size: 14px;"&gt;The Genius at Guinness and His Statistical Legacy&lt;/span&gt;&lt;/a&gt;&lt;i&gt;&lt;br&gt;&lt;/i&gt;&lt;/p&gt;  
&lt;img src="https://track-eu1.hubspot.com/__ptq.gif?a=147671322&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fwww.theinstituteofappliedai.com%2Finsights%2Fthe-brewer-who-taught-the-world-to-think-in-small-samples&amp;amp;bu=https%253A%252F%252Fwww.theinstituteofappliedai.com%252Finsights&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>Build AI Capability</category>
      <pubDate>Tue, 17 Mar 2026 15:48:11 GMT</pubDate>
      <author>info@theinstituteofappliedai.com (Siobhán O'Leary)</author>
      <guid>https://www.theinstituteofappliedai.com/insights/the-brewer-who-taught-the-world-to-think-in-small-samples</guid>
      <dc:date>2026-03-17T15:48:11Z</dc:date>
    </item>
  </channel>
</rss>
