
Are You Asking the Right AI Questions?

Practical answers for leaders and decision-makers navigating Generative AI adoption in their organisations.

Questions Every Leader Should Be Asking

Ten uncomfortable questions most organisations are avoiding but shouldn’t be.
These are the questions that separate organisations building real AI capability from those running expensive experiments. If your leadership team hasn’t discussed these, it’s time to start.

What unique advantage do we bring to AI, beyond buying the same tools as everyone else?

If your AI strategy is just adopting the same vendor stack as your competitors, you’re not building an edge. The real advantage comes from proprietary data, domain expertise, customer relationships or a distinctive operating model. That’s where lasting value lives.

What are we not going to use AI for and why?

The most effective AI strategies start by defining clear boundaries: where AI adds genuine value, and where the risk, cost or complexity simply isn't worth it. This forces discipline and concentrates investment where it actually matters.

How will we measure real business value once we include all the hidden costs?

Hours saved makes a nice slide, but it’s not a business case. Factor in licensing, data preparation, change management, governance and new risk exposure. Then define proper baselines and control groups. If you can’t measure it properly, you can’t manage it.

Whose workflows and incentives are we about to disrupt?

AI changes who holds information, who makes decisions and who gets credit. If you don’t update incentives, performance metrics and job descriptions alongside the technology, adoption will quietly stall, or provoke active resistance from the people you need most.

What’s our honest policy on shadow AI?

Your people are already pasting customer data, strategy documents and code into external AI tools. Pretending it isn’t happening is a governance failure. The real question is: what will you explicitly allow, monitor and educate around, rather than hoping nobody notices?

What data would we never expose to an external model, even under NDA?

This forces a hard conversation about trade secrets, regulated data and sensitive internal signals. It leads naturally into decisions about private models, on-premises deployment or simply not using AI for certain use cases. If you haven’t had this conversation, your data may already be at risk.

Who’s accountable when AI gets it wrong but everyone followed the rules?

“The human is always responsible” sounds reassuring until something goes wrong and nobody knows who actually signed off. You need clear ownership for each class of AI-assisted decision, defined escalation paths and a realistic picture of what “due diligence” actually looks like.

How would we know if AI is quietly degrading quality over 12–24 months?

Small hallucinations, subtle bias and slightly off-brand content accumulate into reputational and compliance damage before anyone notices. You need ongoing human review, quality scorecards, feedback loops and explicit kill switches for every use case.

What’s our workforce strategy when AI can do 60–80% of certain roles?

Beyond “upskill everyone”, which roles will shrink, which will grow and what are your obligations to affected people? This needs a proper plan for role redesign, redeployment, updated hiring profiles and honest internal communication. Avoiding the conversation doesn’t make it go away.

What’s our narrative when an AI incident goes public?

A bad output, a data leak or a biased decision will eventually surface. If you’re scrambling to explain after the fact, you’ve already lost. You need a prepared story: what you use AI for, how you govern it, how you respond and what protections were already in place.

Strategic Questions to Ask About AI

Ten questions that anchor AI to your business strategy.
These are the questions that move your organisation from AI curiosity to AI capability. They’re designed for leadership teams who want to make informed, strategic decisions rather than reactive ones.

How does AI support our core strategy and competitive advantage?

Your AI journey starts here. Which strategic goals (growth, margin, risk reduction, customer experience) does AI directly support? This anchors your AI activity in your actual business strategy, not in technology experiments. Focus on a few high-value areas where you can genuinely differentiate.

How will AI change our industry economics in 3–5 years?

You’re not just automating tasks. Generative AI is potentially changing who can compete in your sector, at what cost and how quickly new offerings appear. Understanding the structural shift matters more than chasing short-term efficiency gains.

What specific workflows are our top AI bets?

Move from vague ambition to a prioritised shortlist of 5–10 concrete workflows where AI can unlock material value. Each should have a clear owner, defined metrics and an expected time-to-value. If you can’t name them, you don’t have a strategy.

Are we balancing value creation with risk and responsible AI by design?

For each use case, what are the key risks (data protection, bias, hallucination, IP, regulatory compliance), and what controls are in place? Designing risk management in from the start is vastly cheaper than bolting it on after something goes wrong.

How should we organise for AI across the business?

Central team or centres of excellence: which operating model lets you scale AI safely and consistently? Without clarity on who owns standards, platforms and delivery, you end up with fragmented pilots, duplicated spend and inconsistent risk practices.

Do we have the capabilities and talent to deliver our AI roadmap?

Strategy fails without people who can translate business problems into AI solutions, integrate them into existing systems and manage ongoing improvement. What skills are missing (technical, commercial, change management, legal), and what's the plan to close the gap?

How will AI reshape our workforce and culture, and how are we preparing people?

AI adoption is a people and culture shift, not just a technology project. How will roles change? How will you support affected employees? How will you build an AI-ready culture where people see opportunity rather than threat? Get this wrong and even the best technology will stall.

What data assets and partnerships do we need to win?

Foundation models are becoming a commodity. Your edge comes from unique data, smart integration and the ecosystems you plug into. Which proprietary data assets, platforms and external partnerships are critical to your AI advantage?

How will we measure success, and know when to stop or scale?

Define success metrics for each AI initiative and set clear thresholds to pivot, kill or scale. This frames AI as an investment with accountability, not a sunk-cost experiment. Disciplined scaling beats endless piloting every time.

How will we keep our AI roadmap current as technology and regulation evolve?

AI moves fast. Your board expects a living roadmap, not a strategy deck that’s out of date within twelve months. Build in continuous scanning, regular roadmap reviews and the agility to adapt as the landscape shifts.

Questions That Hold Organisations Back

Questions that sound strategic but actually dodge the hard work of value, change and risk.

If these questions dominate your leadership conversations about AI, you may be focusing on the wrong things. Here’s why each one is a dead end and what to ask instead.

How do we add AI to everything?

This treats AI as a feature to bolt on rather than a capability to solve specific problems. It leads to vanity pilots and “AI-washing” instead of focusing on a few high-value, well-designed use cases tied to strategy. Ask instead: Where will AI create the most measurable value for us?

How much will our productivity improve, exactly?

Obsessing over a percentage before defining the use case is a distraction. You need to decide which processes to target, who will manage them and what training is required. Ultimately, the impact depends on the quality of the implementation, not just the tool.

Could AI help us lower our overheads quickly?

Framing AI primarily as a redundancy lever both underestimates the change effort and damages trust across your organisation. It also misses the bigger prize: revenue growth, innovation and quality improvement usually deliver more value than blunt cost-cutting. Ask instead: How can AI help our people deliver more value?

What AI tool should we buy first?

Tool-first thinking skips over business objectives, data readiness and process health. Organisations end up with expensive platforms layered on top of broken workflows, then blame the technology when nothing improves. Ask instead: What business problem are we solving, and what does our data look like?

Are we going to be replaced by AI?

Understandable but not actionable at an organisational level. The more useful lens is: which tasks within which roles will shift, and how do we redesign jobs, skills and structures accordingly? The answer is almost always augmentation, not wholesale replacement.

Is this AI model 100% accurate and safe?

Generative AI will always have some rate of hallucination and bias. The sharper question is: what level of error is tolerable for each specific use case, and what human oversight and controls are required to manage it?

Can AI fix our data and process problems?

AI amplifies whatever you already are. It doesn’t magically fix poor data governance, unclear ownership or broken processes. If you automate chaos, you just get faster chaos. Fix the foundations first, then apply AI to genuinely improve them.

What are our competitors doing with AI, and how do we copy it?

Benchmarking is useful, but copy-paste thinking ignores your unique customers, assets and constraints. It pulls attention away from the harder work of defining where you specifically win with AI. Your competitive advantage won’t come from doing the same thing as everyone else.

Can we just delegate AI to IT or the data team?

Pushing AI off to a technical silo frames it as a technology project rather than a business and operating model shift. Without executive ownership of strategy, risk, incentives and culture, even technically excellent work will stall at the pilot stage.

How do we avoid AI entirely until it’s fully regulated and mature?

Total avoidance feels safe but can be more dangerous: it means missing the learning curve while competitors build capability and institutional knowledge. The better approach is controlled experimentation with clear guardrails and governance, not a blanket freeze.

Five Questions About Your Data Before The AI Pilot

Before the model, before the vendor demo, before the business case.

Most organisations start their AI journey with a tool. A chatbot. A copilot. A vendor demo that looks impressive on a Tuesday afternoon. Very few start by asking whether the data underneath any of it is fit for purpose.
That is not a criticism. It is a pattern. AI is exciting. Data housekeeping is not. But the organisations that get real, lasting value from AI are the ones that ask the unglamorous questions first.
Here are five. They cost nothing to ask and they will tell you more about your AI readiness than any maturity model or technology roadmap.

Do you know where your data lives?

Not in the abstract. Specifically. Customer data in three CRMs. Financial data in spreadsheets emailed between departments. Operational data locked inside a system that only one person understands. Most organisations have data spread across more locations than anyone has fully mapped, and AI does not perform well on data it cannot see or reach.

The honest test: Could you sketch, in fifteen minutes, a map of where your most important business data sits today, who owns it, and how it moves between systems? If not, AI will inherit that confusion.
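If it helps to make that fifteen-minute test concrete, the map can be captured as a simple structured inventory rather than a diagram. The sketch below is purely illustrative; every system, owner and dataset name is made up.

```python
# Illustrative data map: each entry records where a key dataset lives,
# who owns it, and where it flows. All names here are hypothetical.
data_map = [
    {"dataset": "customer records", "system": "CRM",
     "owner": "Head of Sales Ops", "flows_to": ["billing", "marketing"]},
    {"dataset": "invoices", "system": "finance spreadsheets",
     "owner": "unclear", "flows_to": []},
]

# Entries with no named owner or no documented flows are exactly the
# confusion an AI initiative will inherit.
gaps = [d["dataset"] for d in data_map
        if d["owner"] == "unclear" or not d["flows_to"]]
print(gaps)
```

Even at this level of detail, the gaps list usually writes the first page of your data-readiness plan for you.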

How clean is it, really?

Duplicate records. Inconsistent naming conventions. Fields left blank for years. Legacy data migrated from a system that was decommissioned in 2017. These are not edge cases. They are the norm in most organisations, and they matter because AI amplifies whatever it is given. Feed it clean, well-structured data and it can accelerate decisions. Feed it noise and it will produce confident, well-formatted noise.

The honest test: Pick one dataset you would feed into an AI tool tomorrow. How many fields are incomplete, outdated, or duplicated? If you do not know, that is the answer.
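Answering that test takes minutes, not a data platform. The sketch below shows one minimal way to profile a dataset for blank and duplicate fields; the field names and sample records are hypothetical, not drawn from any real system.

```python
# Minimal sketch of the "honest test": count blank fields and exact
# duplicate records in one dataset before it goes anywhere near an AI tool.
# Field names and records below are illustrative only.
from collections import Counter

records = [
    {"customer_id": "C001", "email": "a@example.com", "segment": "SMB"},
    {"customer_id": "C002", "email": "", "segment": "SMB"},
    {"customer_id": "C001", "email": "a@example.com", "segment": "SMB"},  # exact duplicate
    {"customer_id": "C003", "email": "c@example.com", "segment": ""},
]

# Tally blank values per column.
blanks = Counter(
    field for r in records for field, value in r.items() if not value
)

# Count records that are exact copies of an earlier record.
duplicates = len(records) - len({tuple(sorted(r.items())) for r in records})

print(f"Blank fields by column: {dict(blanks)}")   # e.g. {'email': 1, 'segment': 1}
print(f"Duplicate records: {duplicates}")          # e.g. 1
```

If running something this simple against a real dataset feels daunting, that is itself the answer to the honest test.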

Who is accountable for data quality?

In many organisations, data quality is everyone’s problem and therefore nobody’s responsibility. IT manages the infrastructure. Business teams generate the data. Nobody owns the accuracy, completeness, or fitness of the data for the decisions being made from it. When AI enters the picture, this gap becomes visible fast, because AI outputs are only as trustworthy as the inputs they are built on.

The honest test: If a customer-facing AI tool gave a wrong answer traced back to bad data, who in your organisation would be accountable? If the answer is unclear, you have a governance gap, not a technology gap.

Is your data documented, or does it live in someone’s head?

Institutional knowledge is valuable. Undocumented institutional knowledge is fragile. If the meaning of a column in a spreadsheet, the logic behind a categorisation, or the context for why certain records were excluded exists only in the memory of one or two team members, then your AI pilot has a single point of failure that no technology can fix.

The honest test: If the person who best understands your core data left the business tomorrow, how much context would walk out the door with them?

Can you explain what “good” looks like for your data?

This is the question that separates organisations experimenting with AI from organisations ready to deploy it. “Good” means you can define what complete, accurate, and timely looks like for the specific data feeding a specific use case. Without that definition, there is no way to evaluate whether an AI system is improving decisions or simply producing plausible outputs from a shaky foundation.

The honest test: For the AI use case you are most excited about, can you write down in one paragraph what “good enough” data looks like? Not perfect. Good enough. If you cannot, the pilot is premature.