Australia's financial regulator has identified serious governance gaps in how banks and superannuation trustees manage AI agents. The Australian Prudential Regulation Authority conducted a targeted review of large regulated entities in late 2025 and found significant weaknesses in firms' AI governance and assurance practices.
Financial firms are rapidly deploying AI agents in both internal operations and customer-facing services. However, most institutions have not established proper control frameworks to manage these systems. The regulator's findings reveal a disconnect between AI adoption speed and governance maturity.
The findings signal that the regulator expects firms to strengthen oversight of AI agent deployment. This includes implementing assurance practices, monitoring systems for errors or bias, and establishing clear accountability chains. Banks and superannuation trustees must demonstrate they understand the risks their AI systems pose to customers and financial stability.
The review reflects broader global concerns about AI governance in regulated industries. Financial regulators worldwide are tightening requirements as AI systems become more autonomous and integrated into critical business processes. Australia's approach suggests regulators will mandate specific governance standards rather than allowing firms to self-regulate.
Regulated entities now face pressure to audit their AI practices and strengthen controls before regulators impose formal requirements or penalties.
