Australia's financial regulator has flagged serious gaps in how banks and superannuation trustees govern AI agents. The Australian Prudential Regulation Authority (APRA) conducted a targeted review of large regulated entities in late 2025 and found that AI governance and assurance practices remain immature and poorly managed.

Financial firms are rapidly expanding AI use across internal operations and customer-facing services. Yet most have not established adequate oversight frameworks to manage the risks these systems introduce. The regulator's warning signals that current governance structures lag behind the pace of AI deployment in the financial sector.

Banks and superannuation trustees now operate AI agents that make decisions affecting customers and business operations. Without proper controls, these systems create exposure to operational failures, compliance breaches, and customer harm. The APRA review identified control gaps across multiple institutions, indicating the problem is widespread rather than isolated.

The regulator's focus on AI agent governance reflects growing concern about autonomous systems operating with insufficient human oversight. Financial institutions must now close assurance gaps and implement robust frameworks governing how AI agents function. The warning is a direct call for banks and trustees to strengthen AI controls before regulators impose stricter requirements.