Eight Mistakes Organisations Make When Adopting AI

I took part in a webinar on AI and publishing earlier today, hosted by the Crius Group, alongside my friends and colleagues Cameron Drew and Simon Mellins. One of the topics that came up was what mistakes we saw organisations making with AI. The conversation was about publishing, but the more we talked, the clearer it became that these failure modes aren’t industry-specific. They show up everywhere. Here are eight of the most common. If you recognise more than two or three of them in your organisation, you may not have an AI problem—you may have a strategy and operating model problem.

  1. Defaulting to whatever AI ships with their existing software—Microsoft Copilot, Google Gemini, or whatever is bundled into the enterprise stack. It might be the right tool for the job, but that should be an active decision, not a passive one. The bundled option optimises for platform integration, not necessarily for your specific use case.
  2. Using AI out of the box without any preparation. You wouldn’t put a new employee to work on day one without an induction, reference documents, style guides, and some shared understanding of what good looks like. Yet organisations routinely hand staff an LLM with no preparation and expect perfect results from both the employee and the model. There’s a simple litmus test I use here: open a user’s LLM settings and check whether anyone has provided custom instructions. If that field is blank, they are running on defaults.
  3. Layering AI on top of substandard workflows. If the underlying process is broken, adding AI doesn’t fix it—it amplifies it. This is like building on uneven foundations: each layer you add magnifies the variance. Organisations that haven’t done the hard work of mapping and improving their workflows before introducing AI tools tend to get faster bad outputs rather than better ones.
  4. Expecting AI to fill in gaps in their own thinking. I watched someone use AI for data analysis recently and treat it as a magic calculator: feed in cursory instructions, trust the model to infer their unspoken assumptions, and accept the output without scrutiny. LLMs are generally good at executing clearly articulated intent, but not good at reading minds. The quality of the output is bounded by the quality of the input, and that means doing the thinking first.
  5. Not distinguishing high-stakes from low-stakes use cases. Too many organisations treat all AI usage as equivalent—either everything needs sign-off from legal, which kills adoption, or nothing gets scrutinised, which creates risk. The more useful frame is a tiered approach: what’s the cost of an error here? Drafting internal meeting notes and reviewing a time-sensitive contract have fundamentally different risk profiles and should be governed differently.
  6. Treating AI as a project rather than a capability. Organisations stand up a pilot, run it for a quarter, write a report, and move on. But AI adoption isn’t a project with a completion date: it’s a capability that needs ongoing investment in skills, infrastructure, and institutional learning. The pilot mentality often leads to a kind of purgatory where nothing ever scales because nobody planned for what comes after the proof of concept.
  7. No internal knowledge-sharing mechanism. One team figures out a brilliant workflow; another team three desks away reinvents the wheel badly. Most organisations have no way to surface what’s working, share effective configurations, or build collective competence. The learning stays siloed with whoever happened to experiment first—and the organisation never compounds its advantage.
  8. Having no way to measure impact or quality. If you can’t say how you’d know whether AI is making things better, you can’t distinguish genuine improvement from the mere appearance of productivity. This is where organisations most commonly fall short: they adopt tools enthusiastically but never establish a baseline, define success criteria, or build in any mechanism for review. Without measurement, you’re flying without instruments—and you’ll never build the internal evidence base you need to scale what works and stop what doesn’t.

What connects all eight of these is a common underlying error: treating AI adoption as a technology problem rather than an organisational one. The tools are powerful, but they don’t do any of the hard work for you—the work of choosing the right tool, preparing it properly, fixing what’s already broken, thinking clearly about what you want, governing appropriately for the stakes involved, building institutional knowledge, and establishing how you’ll know whether it’s working. Organisations that skip this work don’t fail because AI isn’t good enough. They fail because they haven’t built the conditions for it to succeed.

Written on March 31, 2026