Daniel Hurst | Head of AI at Propel Ventures · Nov 26, 2025 · 3 min read

The AI trust gap: Why 66% use it but only 46% trust it

Your team uses AI daily. They draft emails with ChatGPT, summarise reports, generate analysis. But ask them if they trust the output—really trust it—and watch the hesitation.

This tension isn't unique to your organisation. A 2025 KPMG/University of Melbourne study of 48,000 people across 47 countries revealed the gap: 66% use AI regularly, but only 46% trust it.

That 20-point gap is costing you adoption, velocity, and value.

Trust doesn't come from reassurance

Most organisations try to close the trust gap with policies, governance frameworks, and compliance checkboxes. These matter, but they don't build trust.

Trust comes from understanding.

When people understand how AI works, what can go wrong, and how to verify outputs, they trust it more, because they know when to rely on it and when not to. They don't need permission to use it; they have the judgement to use it well.

We've seen this pattern across 100+ regulated organisations in finance, government, energy, and healthcare. The teams that get 3-5x more value from AI aren't the ones with the biggest budgets. They're the ones where literacy is distributed, not centralised.

Legal teams spot biased outputs before they cause harm. Finance verifies AI-generated forecasts instead of accepting them blindly. Product teams know when not to trust the model.

The result? They use AI more, not less. Awareness builds confidence. Confidence builds adoption.

The bottleneck isn't technology

Walk into any Australian boardroom and you'll hear the same story: strong enthusiasm for AI, ambitious pilots, but uncertainty about what happens next.

One team experiments with AI for customer insights while another debates blocking ChatGPT entirely. Executives call for an "AI strategy" without clarity on who owns it or how success will be measured.

The pattern is consistent: the gap isn't technical. It's literacy.

Not coding bootcamps. Not another tool rollout. But a shared understanding across the organisation of what AI can do, what it can't, who's accountable, and how to use it safely at scale.

Different roles need different literacy

AI doesn't succeed when everyone learns the same thing. It succeeds when every function develops the literacy their role demands.

Board members don't need to write prompts. But they absolutely need strategic foresight and critical intelligence to govern AI risk effectively.

Product managers need all five literacy domains—responsible use, applied fluency, critical intelligence, technical foundations, and strategic foresight—to balance innovation with ethical design.

Operations leads need responsible use, applied fluency, and critical intelligence to deploy AI tools safely and verify outputs before acting on them.

When literacy aligns with responsibility, AI stops being a project. It becomes part of how work happens.

The framework: Five domains that scale

In our new whitepaper, we outline the Five AI Literacy Domains Framework, developed through Propel's work helping organisations build AI capability across industries.

The framework addresses:

  • Responsible Use: Ethics, governance, and accountability in daily practice

  • Applied Fluency: Everyday confidence using AI tools to improve thinking and execution

  • Critical Intelligence: The discipline to verify, question, and contextualise AI outputs

  • Technical Foundations: Understanding AI mechanics without needing to be an engineer

  • Strategic Foresight: Connecting AI literacy to long-term competitive advantage

The whitepaper includes field lessons from 100+ regulated organisations, practical implementation guidance for every organisational level (individual, team, department, executive), and role-specific literacy maps showing which domains matter most for boards, product teams, operations, legal, and more.

Download the full whitepaper: "Why Do Some Organisations Achieve 10x More Value from AI Than Others?"

The organisations pulling ahead aren't waiting for perfect tools or complete clarity. They're building literacy now. Distributed, practical, and tied to real decisions.

The question isn't whether your people will work with AI. It's how ready they'll be when they do.

Daniel Hurst | Head of AI at Propel Ventures
Shaping the future of AI adoption - driving change that’s ambitious, ethical, and grounded in results. At Propel, we help leaders move from experimentation to enterprise impact with confidence and control.
