Build AI Capability

When Machines Started Thinking

Siobhán O'Leary

The Shift from Industrial to Cognitive Revolution

The Industrial Revolution gave machines our muscles. The question we now face is whether we are handing over our minds.

This is not a dramatic claim. It is the logical conclusion of a shift already well underway, one that the World Economic Forum described in March 2026 as "a structural change in how cognition itself is governed." Where the Industrial Revolution automated physical effort while leaving human reasoning intact, the current revolution is different in kind, not just degree. AI systems are no longer simply executing instructions. They are shaping how problems are framed, how options are surfaced, and increasingly, how decisions are made.

Two Revolutions, One Critical Difference

The factories and steam engines of the 18th and 19th centuries transformed what human hands could do. They created enormous productivity gains and reshaped entire economies. But through all of it, the human brain remained sovereign. Humans decided. Machines executed.

Yuval Noah Harari, in his 2024 book Nexus, draws the distinction that matters here. Every previous technology, from the printing press to the nuclear bomb, lacked one capability: independent decision-making. As Harari puts it, AI is not a tool in the way all previous inventions were. It is an agent. It can process information, generate conclusions, and act on them without waiting to be told what to think.

That is a categorically different kind of machine. And it demands a categorically different kind of response.

The Google Maps Problem

I used to have an excellent sense of direction. I never got lost. I read landscapes, remembered routes, and built mental maps instinctively. Then GPS arrived, and I began outsourcing navigation. Gradually, without any single conscious decision, I stopped exercising that capability. Now I carry a power bank everywhere, because the thought of my phone dying and leaving me without directions genuinely unsettles me.

That is not a technology story. It is a cognitive dependency story.

The capability did not disappear overnight. It eroded quietly, through a thousand small delegations that each felt entirely reasonable at the time. The GPS was faster, more reliable, more convenient. The choice to use it was always rational. The cumulative effect was not.

Now scale that to how your team writes strategy documents, interprets data, drafts communications, or records decisions. Automated meeting transcriptions generate minutes, action lists, and follow-up emails without anyone engaging in the discipline of listening, synthesising, and remembering. AI tools draft reports that are accepted, forwarded, and acted upon without the author fully understanding what they contain. Each individual shortcut is defensible. The pattern is not.

The Risk Nobody Is Measuring

The WEF identifies this dynamic as "delegated cognition": the outsourcing of mental effort to automated systems at scale. The economic incentives are strong. Research shows that AI tools can reduce task completion times significantly, with a substantial proportion of knowledge work accelerated across most professional roles. The productivity case is real and well evidenced.

What is less visible is automation bias: the tendency to over-trust machine-generated outputs because they appear confident and neutral. Fluency signals authority. A well-structured AI response feels like a considered answer, even when it is an approximation, an average, or simply wrong. The human brain, wired to conserve energy, accepts it. The work of critical evaluation quietly stops happening.

At the organisational level, this compounds. When AI shapes how a problem is framed before a team begins solving it, the frame itself becomes invisible. When accountability for AI-assisted decisions becomes diffuse, the quality of judgement degrades. As one analysis of AI governance put it, AI does not just decide outcomes. It decides what counts as a decision.

What Informed Agency Looks Like

The answer is not to resist AI adoption. The productivity case is too strong, and the tools are too embedded in how work now functions. The answer is to remain the author of your own thinking while using them.

At the Institute of Applied AI, we frame this around the principle of Informed Agency: the commitment that humans retain the ability to interrogate, challenge, and override AI at every meaningful decision point.

In practice, this means three things. First, thinking before delegating. Write your own first draft before asking AI to improve it. Form your own view before asking AI to validate it. The order of operations matters. Second, building verification into workflow by design, not as an afterthought. AI output that flows directly into decisions, communications, or customer-facing work without a defined human review step is unmanaged risk dressed as efficiency. Third, knowing where not to use AI at all. Not every task benefits from it. Forcing AI into the wrong workflow creates cost, not value.

These are not technology questions. They are leadership and capability questions. They require deliberate choices, clear governance, and crucially, the ongoing practice of the human skills that AI cannot replicate: contextual judgement, accountability, and the capacity to say that something is wrong even when it reads well.

A Closing Thought

The Cognitive Revolution is not something happening to us. It is something we are choosing, one delegated task at a time. The most future-ready individuals and organisations will not be those who use AI the most. They will be those who understand what they are doing when they use it, and who retain the capability to do otherwise.

Carry the power bank by all means. Just make sure you still remember how to read the road.

References

  • World Economic Forum (2026). Why governing AI means governing cognition. Anna Pramod and Vivin Rajasekharan Nair. weforum.org

  • Harari, Y.N. (2024). Nexus: A Brief History of Information Networks from the Stone Age to AI. Fern Press.

  • McKinsey Global Institute (2025). Superagency in the Workplace: Empowering People to Unlock AI's Full Potential. mckinsey.com

  • OECD (2025). Venture Capital Investments in Artificial Intelligence through 2025. oecd.org

  • Bloomberg Intelligence (2023). Generative AI to Become a $1.3 Trillion Market by 2032. bloomberg.com

  • International Energy Agency (2024). Energy and AI: Executive Summary. iea.org

  • BetterUp and Stanford University (2024). The Hidden Cost of AI-Generated Work. Referenced in TechTarget: AI Slop — The Hidden Enterprise Risk CIOs Can't Ignore. techtarget.com

This article is part of the Build AI Capability series, exploring what responsible AI adoption looks like in practice.

Siobhán O'Leary is an Applied AI Advisor and co-founder of The Institute of Applied AI, helping organisations build AI capability grounded in literacy, governance, and practical adoption.
