Insights | Propel Ventures

Sense-Making v Speed in an AI'ified SDLC World

Written by Amy Johnson | Chief Product Officer Propel | Mar 19, 2026 5:59:10 AM

Speed vs. Sense-Making

As a product operating model consultant, one way to go fast is to standardise how we approach our engagements. Less building the plane while flying it. It would be easier, and we'd move quicker, but every time we consider this at Propel, we just can't bring ourselves to do it. We find that anything close to a "cookie-cutter" approach just misses too much context. There is nuance everywhere we look and while we are expert pattern-matchers, there are always enough quirks in every organisation that even if we aren't building the plane from scratch, we're making some pretty serious modifications.

While I'm talking about operating model and strategy consulting here, I'm starting to feel the same about the AI-assisted SDLC work we're doing for our clients.

If we standardise everything for everyone, how do we make sure the sense-making muscle is still working?

The Execution Layer Is Getting Automated. Fast

An AI-ified SDLC is exceptional at the execution layer:

  • Writing tickets and acceptance criteria from rough inputs
  • Generating code and scaffolding
  • Test automation and regression coverage
  • Documentation of existing systems and decisions
  • Solution design for well-understood problem classes

Tools like GitHub Copilot, Linear AI, and Cursor aren't hype; they're genuinely compressing the cost of building. This is a real unlock, but it creates a dangerous illusion: that going faster means you're going better.

What AI Can't Do: The Context Layer

The same tools that accelerate execution still struggle with the layer that determines whether any of that execution was worth doing.

The context layer looks like this:

  • Why does this problem actually matter to this customer, right now?
  • Which customer segment is worth prioritising — and which is a distraction?
  • Is this solution genuinely valuable, or does it just satisfy the brief?
  • What are the organisational constraints and politics that will make or break adoption?

I once heard (thanks Imperfects Podcast and Hugh van Cuylenburg) that the answer to everything is exercise. Good advice I often come back to.

For product, I'd say the answer to almost everything is customer insight — as in, actually talking to them. You can't answer those questions with a prompt. They require sitting with a customer in a messy conversation, interpreting incomplete signals, and holding competing priorities together in your mind.

AI can help you move through the solution space faster. But I don't think it can help you find the right problem... yet. AI can provide the scale, but in the Propel model, humans still provide the judgment.


Standardisation Isn't the Enemy of Context

Good standardisation compresses context into reusable patterns. Templates, playbooks, and repeatable processes are valuable when:

  • The problem is well understood and stable
  • Customer needs are clear and unlikely to shift dramatically
  • The cost of inconsistency is high — regulated industries, enterprise contracts, multi-product coherence

Standardisation becomes dangerous when:

  • You're still exploring the problem space and don't yet know what good looks like
  • Customer needs are ambiguous or evolving faster than your delivery cycle
  • Decisions are hard to reverse — architectural choices, pricing models, go-to-market positioning

Applying execution-layer thinking to a discovery-stage problem is one of the most common (and most expensive) mistakes we see product organisations make.

Speed Without Sense-Making

AI amplifies the risk of focusing on outputs rather than outcomes (customer and business value). If your team didn't have strong discovery habits before, giving them AI execution tools doesn't solve that; it just lets them build the wrong thing faster.

Genuinely understanding your customer is an ongoing discipline. In practice, it looks like:

  • Regular, structured customer conversations — not just support tickets and NPS scores
  • Interpreting messy, incomplete signals — what customers say vs. what they do vs. what they actually need
  • Judgement calls under uncertainty — making a call without waiting for perfect data
  • Organisational navigation — knowing whose buy-in you need, and why, before the wrong person kills a good initiative

This all gets more valuable as execution gets cheaper.

The Implication for Product Leaders

We should absolutely be using AI and automation; that's not the question. The point is that automation creates more time for the work that actually matters: understanding users, framing problems, making strategic bets, and navigating the organisation.

The how is being automated. That frees us to double down on the why.

That's where competitive advantage lives, and I know I'm not alone when I say that AI is not going to commoditise it any time soon.