In discussions with Levi Watters, Partner at KPMG Australia, and Lachlan Hardisty, Director of the AI Lab in KPMG Futures, a consistent theme emerged: the focus is on making AI agents work at enterprise scale, not running isolated pilots or demonstrations.
Their approach centres on treating agents as part of the operating model — with clear ownership, defined standards, and measurable outcomes.
1. Define the AI operating model first
Before building agents, Levi and Lachlan start with organisational clarity rather than use cases.
They focus on a small set of foundational questions:
- Whether agents should be built internally or integrated from third parties
- Whether agents should live on a shared platform or be deployed as standalone tools
- How AI strategy, digital strategy, and workforce strategy fit together — including which work should be handled by agents versus humans
Without this alignment, they consistently see fragmented pilots, unclear ownership, and no clear path to scale.
2. The challenge is adoption, not pilots
From their perspective, the issue is no longer whether agents work.
Most large enterprises have already run multiple POCs, and real ROI is being achieved. The bottleneck has shifted to how organisations move from proof-of-concept into enterprise rollout and how ROI is tracked over time.
Agents can generate outputs easily. What determines success is whether accuracy standards are clearly defined and whether agents are integrated into real workflows. Without that, pilots may look successful in isolation but fail once exposed to day-to-day operations.
3. Build foundations early
The organisations now seeing sustained value tend to follow a consistent set of patterns.
They:
- Use agents to assess data readiness and identify gaps
- Use AI to uplift and standardise documentation rather than waiting for perfect inputs
- Expose systems through micro-APIs so agents can take action
- Clearly document processes and define what “good” output looks like
In practice, agents often force teams back to first principles — a step many organisations had previously skipped.
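The micro-API idea above can be made concrete with a small sketch. This is an illustrative pattern only, not KPMG's actual tooling: each system capability is wrapped as a named, described action that an agent can discover and invoke. The action names and the stubbed ERP lookup are hypothetical.

```python
# Illustrative sketch: exposing a system capability as a small, named
# "micro-action" an agent can invoke. All names here are hypothetical.
from dataclasses import dataclass
from typing import Any, Callable, Dict

@dataclass
class MicroAction:
    name: str
    description: str          # what the agent reads to decide when to call it
    handler: Callable[..., Any]

REGISTRY: Dict[str, MicroAction] = {}

def register(name: str, description: str):
    """Decorator that adds a function to the action registry."""
    def wrap(fn):
        REGISTRY[name] = MicroAction(name, description, fn)
        return fn
    return wrap

@register("lookup_invoice", "Fetch an invoice record by ID from the ERP")
def lookup_invoice(invoice_id: str) -> dict:
    # In production this would call the real ERP API; stubbed for illustration.
    return {"invoice_id": invoice_id, "status": "approved"}

def invoke(action_name: str, **kwargs) -> Any:
    """Entry point an agent runtime would call with validated arguments."""
    action = REGISTRY.get(action_name)
    if action is None:
        raise KeyError(f"Unknown action: {action_name}")
    return action.handler(**kwargs)
```

The point of the pattern is that agents act only through narrow, documented surfaces rather than arbitrary system access, which also gives a natural place to attach logging and permissions.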
4. Treat agents like workforce units
Levi and Lachlan explicitly frame agents as part of the workforce, but they manage them differently from human roles.
That means defining:
- Clear responsibilities and success criteria
- Ongoing accuracy management
- Reliability, escalation paths, and monitoring
This leads to a combined digital and workforce strategy, where operating models are redesigned around collaboration between humans and agents rather than simply deploying new software.
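One way to picture "agents as workforce units" is a role card with explicit responsibilities, success criteria, and an escalation owner. The schema below is a hedged illustration of that framing, not a KPMG artifact; the field names and thresholds are assumptions.

```python
# Hedged sketch: an agent described like a workforce role, with
# responsibilities, measurable success criteria, and an escalation path.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class AgentRole:
    name: str
    responsibilities: List[str]
    success_criteria: Dict[str, float]  # metric name -> minimum target
    escalation_path: str                # who reviews failures

    def meets_targets(self, observed: Dict[str, float]) -> bool:
        """True only if every defined metric meets or beats its target."""
        return all(observed.get(metric, 0.0) >= target
                   for metric, target in self.success_criteria.items())

# Example role definition (illustrative values)
invoice_agent = AgentRole(
    name="invoice-triage-agent",
    responsibilities=["Classify inbound invoices", "Flag anomalies"],
    success_criteria={"accuracy": 0.95, "coverage": 0.90},
    escalation_path="finance-ops-lead",
)
```

Writing the role down this way makes "ongoing accuracy management" a checkable condition rather than an aspiration.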
5. Avoid sprawl with a horizontal platform
To address concerns around vendor lock-in and tool sprawl, the approach prioritises a horizontal platform model.
The intent is to:
- Stay vendor-agnostic as LLM leaders change rapidly
- Orchestrate multi-agent systems rather than isolated tools
- Maintain traceability, governance, and consistency in how agents are built and deployed
This reduces fragmentation and shortens the time it takes to move from build to production.
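A minimal sketch of the vendor-agnostic idea: agents code against a stable interface, and a router decides which provider sits behind it. The provider classes here are stubs and the routing logic is an assumption for illustration, not a description of KPMG's platform.

```python
# Illustrative sketch of a provider-agnostic model layer: agents depend
# on the interface, so the vendor behind it can change without rework.
from abc import ABC, abstractmethod
from typing import Dict, Optional

class ModelProvider(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class StubProviderA(ModelProvider):
    def complete(self, prompt: str) -> str:
        return f"[provider-a] {prompt}"

class StubProviderB(ModelProvider):
    def complete(self, prompt: str) -> str:
        return f"[provider-b] {prompt}"

class Router:
    """Sends requests to the currently preferred provider; swapping the
    default is a config change, not an agent rewrite."""
    def __init__(self, providers: Dict[str, ModelProvider], default: str):
        self.providers = providers
        self.default = default

    def complete(self, prompt: str, provider: Optional[str] = None) -> str:
        return self.providers[provider or self.default].complete(prompt)
```

In practice the same seam is where traceability hooks live: every call passes through one place, so logging and governance come almost for free.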
6. Make adoption the primary KPI
Levi is clear that, in most cases, the technology is already sufficient. What matters is whether it is actually used.
Their focus is on:
- Embedding agents directly into existing workflows
- Redesigning processes so they assume agents exist
- Measuring real usage and impact, not just technical success
Internally and with clients, adoption is reinforced through visible leadership use and hands-on training where leaders build simple agents themselves.
7. Focus on back-office value
Rather than prioritising surface-level automation, Levi and Lachlan see the strongest and most defensible ROI in back-office and knowledge-heavy functions, including:
- Finance
- Risk and compliance
- Procurement
- Operations and broader knowledge work
These areas tend to have clearer baselines, structured data, and processes that benefit most from standardisation and agent-driven automation.
8. Govern through monitoring, not policy alone
Beyond formal governance frameworks, the emphasis is on what happens in production.
That includes:
- Continuous monitoring of agent behaviour
- Transparency for senior leaders
- Technical logging and evaluation
- Commercial safeguards and clear reporting
Trust is built through visibility into how agents behave in the real world, not just through policy documents.
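The monitoring emphasis can be sketched as a rolling accuracy check over structured log records. This is a hedged illustration of the pattern, with assumed field names and an assumed 95% threshold, not a specific standard or KPMG's tooling.

```python
# Illustrative sketch: log each agent action as a structured record and
# compute a rolling accuracy figure for leadership reporting.
import json
import time
from collections import deque

class AgentMonitor:
    def __init__(self, window: int = 100):
        # Keep only the most recent `window` actions for the rolling metric.
        self.records = deque(maxlen=window)

    def log(self, agent: str, action: str, correct: bool) -> str:
        record = {"ts": time.time(), "agent": agent,
                  "action": action, "correct": correct}
        self.records.append(record)
        # In production this JSON line would ship to the log pipeline.
        return json.dumps(record)

    def accuracy(self) -> float:
        if not self.records:
            return 0.0
        return sum(r["correct"] for r in self.records) / len(self.records)

    def needs_escalation(self, threshold: float = 0.95) -> bool:
        # Flag for human review when rolling accuracy drops below threshold.
        return self.accuracy() < threshold
```

The governance point is the loop itself: behaviour is observed continuously in production, and the escalation trigger is explicit rather than buried in a policy document.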
9. Prepare for accelerating gaps
Looking ahead, Levi and Lachlan expect pressure on ROI to intensify, with boards demanding clear returns rather than experimentation. They also expect non-technical teams to become heavy users of agents.
As a result, the performance gap between early adopters and laggards is likely to widen. They also anticipate more agent-to-agent interactions in areas such as procurement, invoicing, and parts of sales and contracting.
Taken together, these practices show how KPMG is turning AI agents from interesting prototypes into a repeatable, enterprise-grade operating model.