Every leadership session eventually circles back to it. We hear it from clients more and more:
“We need an AI strategy.”
“What are our AI use cases?”
“How are we showing progress on AI?”
The pressure is understandable. The market is loud, competitors are announcing things, and boards are asking questions.
But the reality is that most AI initiatives start without a clearly validated problem. They are technology solutions looking for use cases.
Sprinkling AI like 🎊 confetti 🎊 over isolated use cases rarely creates value. Solving real, material problems does.
If you cannot clearly describe the problem, the evidence behind it, and the economic impact of solving it, you are not ready to discuss AI.
Always Start with the Problem
Strong product teams begin with evidence.
- What customer behaviour indicates friction?
- What data shows this is a high-frequency or high-value problem?
- What is the quantified cost of leaving it unsolved?
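The "quantified cost" question deserves actual arithmetic, not adjectives. As a hedged illustration (every figure below is invented; substitute your own measured data), a back-of-the-envelope cost-of-inaction model can be this simple:

```python
# Illustrative only: all figures are hypothetical assumptions, not benchmarks.
# Replace them with numbers from your own support, finance, and churn data.

tickets_per_month = 1_200         # support tickets caused by the friction
minutes_per_ticket = 18           # average handling time per ticket
loaded_cost_per_hour = 45.0       # fully loaded agent cost
churned_customers_per_month = 30  # customers lost to this friction
annual_value_per_customer = 800.0

# Annual support cost: tickets x hours each x hourly cost x 12 months
support_cost = tickets_per_month * (minutes_per_ticket / 60) * loaded_cost_per_hour * 12

# Annual churn cost: lost customers per month x annual value x 12 months
churn_cost = churned_customers_per_month * annual_value_per_customer * 12

annual_cost_of_inaction = support_cost + churn_cost
print(f"Annual cost of leaving the problem unsolved: {annual_cost_of_inaction:,.0f}")
```

If a number like this cannot be assembled at all, that is itself a signal: the problem has not been validated well enough to justify any build, AI or otherwise.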
Too often, AI ideation sessions skip this step.
In enterprises, I’ve seen full roadmaps built around AI features where none of the underlying problems were validated. A list of “AI-enabled” initiatives looks impressive in a strategy deck. Underneath it, no one can clearly articulate the customer pain or the commercial upside.
In scale-ups, I’ve seen leadership teams attempt to “AI the whole product” when the real issue was inefficient workflows and manual back-office processes that could have been improved with straightforward automation.
In large organisations, I’ve watched millions spent building internal AI platforms without speaking to customers to understand whether the proposed capabilities were even desirable.
That is AI confetti. It looks exciting when scattered across a roadmap but it rarely creates sustained value.
Clayton Christensen’s Jobs to Be Done framework remains a useful lens. Customers hire products to make progress. If you cannot clearly articulate the job and where progress is breaking down, you are guessing. And guessing is expensive when the solution involves probabilistic systems, new infrastructure, and governance complexity.
If anything, AI raises the bar. The margin for sloppy thinking is smaller because the cost of implementation can end up higher.
Think of AI as a Capability
When teams start with “Where can we apply AI?”, they default to surface-level enhancements.
Ethan Mollick (https://www.oneusefulthing.org/p/working-with-ai) has written about how generative AI performs impressively in controlled settings but becomes unpredictable in complex environments. The gap between prototype and production is where many organisations lose discipline.
You need someone (often a Product Manager) to keep asking: what measurable outcome are we improving, and by how much?
If the initiative does not clearly tie to revenue, retention, margin, or risk reduction, it is unlikely to justify the complexity.
Then Ask: Is AI the Right Capability for This Problem?
AI works well for:
- Pattern recognition across large datasets
- Prediction under uncertainty
- Language generation and classification at scale
It works poorly when:
- The process itself is broken
- Data is inconsistent or sparse
- Deterministic rules would achieve similar outcomes
- Error tolerance is low and explainability is critical
Ben Evans (https://www.ben-evans.com/benedictevans/2023/ai-and-the-next-platform) has described AI as a shift in computing capability, not a replacement for structured thinking and system design. It expands options; it does not eliminate trade-offs.
If a simpler intervention solves the validated problem, choose it.
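One cheap discipline here: before building a model, measure how far a deterministic rule gets on a small hand-labelled sample. A hedged sketch (the data and the task, classifying refund requests, are entirely invented for illustration):

```python
# Hypothetical sketch: benchmark a deterministic rule before reaching for a model.
# The labelled sample below is invented; use a representative sample of your own data.

# (text, is_refund_request) - a hand-labelled sample of support messages
sample = [
    ("I want my money back", True),
    ("Please refund my last order", True),
    ("How do I change my password?", False),
    ("Refund not received yet", True),
    ("What are your opening hours?", False),
    ("Cancel and refund please", True),
    ("Where is my parcel?", False),
    ("Love the product!", False),
]

def rule(text: str) -> bool:
    """Deterministic baseline: keyword match, no model, no inference cost."""
    keywords = ("refund", "money back")
    return any(k in text.lower() for k in keywords)

correct = sum(rule(text) == label for text, label in sample)
accuracy = correct / len(sample)
print(f"Rule baseline accuracy: {accuracy:.0%}")
```

If a rule like this already clears the quality bar on a representative sample, the marginal gain from an AI system rarely justifies its operating and governance cost.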
Raise the Bar on Validation
Eric Ries’ principle of validated learning (https://hbr.org/2013/05/lean-startup-methodology) is even more relevant in an AI context. Form a hypothesis, define leading indicators, run controlled tests, and measure impact.
- Validate that the problem is worth solving
- Validate that users will trust and adopt an AI-driven solution
- Validate that the economics justify infrastructure and oversight
- Validate that you can operate and monitor the system safely
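The hypothesis-test-measure loop can be made concrete. A minimal sketch, with invented numbers, of measuring one leading indicator (task completion rate) in a controlled test of an AI-assisted variant against a control, using a standard two-proportion z-test:

```python
# Hypothetical sketch of validated learning for an AI feature.
# All counts are invented; the two-proportion z-test itself is standard.
from math import sqrt

# Leading indicator: task completion rate, control vs AI-assisted variant
control_users, control_completed = 2000, 1240
variant_users, variant_completed = 2000, 1338

p_control = control_completed / control_users
p_variant = variant_completed / variant_users

# Pooled proportion and standard error for the difference
pooled = (control_completed + variant_completed) / (control_users + variant_users)
se = sqrt(pooled * (1 - pooled) * (1 / control_users + 1 / variant_users))
z = (p_variant - p_control) / se

print(f"Completion rate: {p_control:.1%} -> {p_variant:.1%}, z = {z:.2f}")
# A |z| above ~1.96 suggests the lift is unlikely to be noise at the 5% level.
```

The point is not the statistics; it is that "showing progress on AI" becomes a measured lift on a pre-registered indicator rather than a slide of launched features.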
This is not about being conservative; it is about being disciplined.
AI introduces probabilistic outputs into systems that may previously have been deterministic. That increases complexity, governance requirements, and risk exposure. The evidence threshold should rise accordingly.
AI is powerful, but it is also costly, complex, and often misapplied.
Organisations of every size are vulnerable to chasing AI as a signal of progress. The ones that create advantage will do something less fashionable: identify real problems, gather evidence, quantify impact, and only then decide whether AI is the right tool.
Identify the problem first, prove it matters and then choose the solution.