Strategic Debt and the Hidden Cost of AI Adoption
Ian Mulvany has a good piece in the latest issue of InPublishing examining challenges and opportunities for publishers. A couple of his points really stuck with me. Under pressure from investors to signal progress, companies are forcing AI into products and services without a clear sense of value. The result is a proliferation of new features that are individually justifiable but risk being collectively incoherent. And while generative AI can speed up the creation of content, code or even products, it doesn’t create additional time—in other words, opportunity cost still applies. I’ve started thinking of this pattern as strategic debt.
If every new AI initiative is not a free gain but an opportunity cost, and an ongoing commitment by the organisation to support its outputs, then those commitments start to look like a form of debt. Not technical debt: short-term trade-offs made to ship faster, in the knowledge that they will need to be repaid. Nor cognitive debt, as framed by John Willshire: getting an answer without doing the thinking, and being unable to explain how you arrived at it. But something distinct from both: strategic debt. This is more than opportunity cost, which applies decision by decision. Strategic debt compounds with the number of decisions, and as it accumulates, it constrains future flexibility and consumes organisational capacity.
These levels of debt can be thought of as a stack. At the base is technical debt: shortcuts in code, data and architecture. Above it sits strategic debt: decisions that create ongoing commitments across teams and systems. At the top is cognitive debt: the loss of understanding that makes it harder to reason about the organisation as a whole.
The three levels interact, and causality flows in both directions. Cognitive debt can lead to poorly considered strategy or weak implementation—sometimes without anyone recognising the weakness. Strategic debt can create further commitments that in turn generate technical debt.
But in some cases, the use of AI means you can incur relatively little technical debt while still taking on significant strategic and cognitive debt. The developers I work with are generally very alert to technical debt. But this pattern—low technical debt, high strategic and cognitive debt—is relatively new, and the leaders commissioning AI work may be even less likely to spot it than the developers building it. I’m not convinced it will be picked up by existing review mechanisms.
I remain optimistic about what AI can deliver. But Mulvany is right to caution against the assumption that it will simply make everything more efficient. The implication is that, unmanaged, it may do the opposite—not by breaking systems, but by making them harder to understand and therefore harder to control.
What does this mean in practice?
As the marginal cost of adding AI falls, the temptation is to do more. But if strategic debt degrades decision-making the way unfactored code degrades products, the smarter move may be to do less. When evaluating AI use cases, weigh the debt implications alongside the anticipated value. I've often used the RICE formula with clients to prioritise ideas: (Reach × Impact × Confidence) ÷ Effort. Adding Debt as a variable, so the divisor becomes (Effort + Debt), seems a useful extension that I'm going to try. How to quantify Debt on a scale comparable to Effort is a question I'm still working through, but even asking the question changes the conversation. At minimum, it means asking: how reversible is this, what ongoing commitments does it create, and can we still explain our reasoning without it?
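To make the extension concrete, here is a minimal sketch of the modified formula. The scales are assumptions, not part of the original framework: Reach as people affected per quarter, Impact on the usual 0.25–3 scale, Confidence as 0–1, and Effort and Debt both in person-months so they can share a divisor. The two initiatives scored at the end are hypothetical.

```python
def rice_with_debt(reach, impact, confidence, effort, debt=0.0):
    """Score an initiative as (Reach x Impact x Confidence) / (Effort + Debt).

    Higher is better. With debt=0 this reduces to the standard RICE score,
    so existing scorecards remain comparable.
    """
    if effort + debt <= 0:
        raise ValueError("effort + debt must be positive")
    return (reach * impact * confidence) / (effort + debt)

# Two hypothetical AI initiatives with equal effort. B reaches more people
# and wins on standard RICE, but its heavier ongoing commitments drag it
# below A once Debt enters the divisor.
a = rice_with_debt(reach=500, impact=2, confidence=0.8, effort=4, debt=1)
b = rice_with_debt(reach=800, impact=2, confidence=0.8, effort=4, debt=8)
```

The point of the sketch is less the arithmetic than the forcing function: putting Debt in the divisor means no initiative can be scored without someone answering the reversibility and commitment questions above.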
The organisations that get this right will not just be more explainable. They will be more contestable, more adaptable, and more strategically coherent.