As a product operating model consultant, one way to go fast is to standardise how we approach our engagements. Less building the plane while flying it. It would be easier, and we'd move quicker, but every time we consider this at Propel, we just can't bring ourselves to do it. We find that anything close to a "cookie-cutter" approach misses too much context. There is nuance everywhere we look, and while we are expert pattern-matchers, there are always enough quirks in every organisation that even if we aren't building the plane from scratch, we're making some pretty serious modifications.
While I'm talking about operating model and strategy consulting here, I'm starting to feel the same about the AI-assisted SDLC work we're doing for our clients.
If we standardise everything for everyone, how do we make sure the sense-making muscle is still working?
An AI-ified SDLC is exceptional at the execution layer:
Tools like GitHub Copilot, Linear AI, and Cursor aren't hype; they are genuinely compressing the cost of building. This is a real unlock, but it creates a dangerous illusion: that going faster means you're doing better.
The same tools that accelerate execution still struggle with the layer that determines whether any of that execution was worth doing.
The context layer looks like this:
I once heard (thanks Imperfects Podcast and Hugh van Cuylenburg) that the answer to everything is exercise. Good advice I often come back to.
For product, I'd say the answer to almost everything is customer insight, as in, actually talking to them. You can't answer context-layer questions with a prompt; they require sitting with a customer in a messy conversation, interpreting incomplete signals, and holding competing priorities together in your mind.
AI can help you move through the solution space faster. But I don't think it can help you find the right problem...yet. While AI can provide the scale, in the Propel model, humans still provide the judgment.
Good standardisation compresses context into reusable patterns. Templates, playbooks, and repeatable processes are valuable when:
Standardisation becomes dangerous when:
Applying execution-layer thinking to a discovery-stage problem is one of the most common (and most expensive) mistakes we see product organisations make.
AI does amplify the risk of focusing too much on outputs rather than outcomes (customer and business value). If your team didn't have strong discovery habits before, giving them AI execution tools doesn't solve that; it just lets them build the wrong thing faster.
Genuinely understanding your customer is an ongoing discipline. In practice, it looks like:
This all gets more valuable as execution gets cheaper.
We should absolutely be using AI and automation; that's not in question. The point is that it creates more time for the work that actually matters: understanding users, framing problems, making strategic bets, and navigating the organisation.
The how is being automated. That frees us to double down on the why.
That's where competitive advantage lives, and I know I'm not alone when I say that AI is not going to commoditise it any time soon.