3 Sources
[1]
Australia Regulator Threatens Enforcement for Poor AI Controls
APRA called on companies to ensure credible fall-back processes and robust security testing across AI-generated code, and to prepare for timely action when AI tools are not operating as expected.

Australia's top prudential regulator said it will take action against companies that fail to adequately control cybersecurity threats, as concerns within the industry mount over Anthropic PBC's latest AI model, Mythos. The Australian Prudential Regulation Authority is finalizing a plan to supervise artificial intelligence risks, following a review of banks, insurers and retirement funds conducted late last year that identified several shortcomings. These include information security practices struggling to keep pace with AI threats and over-reliance on third-party AI vendors, according to the regulator.

"Where entities fail to adequately identify, manage or control AI risks in a manner proportionate to their size, scale and complexity, we will take stronger supervisory action and, where appropriate, pursue enforcement," APRA said in a letter published on Thursday. The comments from Australia reflect the urgency with which regulators around the world are acting to spur companies to strengthen their AI defenses as the technology rapidly advances.

APRA is engaging across the sector on the potential for heightened "cyber threats from high capability AI frontier models such as Anthropic Mythos," according to the letter from Therese McCarthy Hockey, an executive board member. It has heard clear recognition from regulated entities of the need for a step change in cyber practices in an "evolving threat environment," the letter stated.
APRA called on companies to ensure there are credible fall-back processes where the AI technology supports critical operations and called for "robust security testing across AI-generated code." In other areas of weakness, the watchdog warned about supplier concentration, where firms are heavily reliant on a single provider for multiple AI use cases. There is an over-reliance on "vendor presentations and summaries without sufficient examination of key AI risks such as unpredictable model behaviour and the impact on critical operations," according to the letter. Among its expectations, APRA said companies should prepare for "timely action" when AI tools are "not operating as expected."
[2]
Australia calls for stronger AI risk controls at financial firms
April 30 (Reuters) - Australia's prudential watchdog warned on Thursday that many financial firms still lack the technical knowledge needed to effectively challenge AI-related risks, while calling for an overhaul of AI-related risk procedures. In a letter to the financial industry, the Australian Prudential Regulation Authority (APRA) highlighted findings from last year's supervisory review, indicating that information security practices are struggling to keep up with the pace of change. The regulator specifically flagged frontier artificial intelligence models, such as Anthropic's Claude Mythos, as tools that could enhance the discovery of vulnerabilities by bad actors, thereby increasing the "probability, speed and scale of cyber attacks." APRA is currently finalising its forward plan with regard to the supervision of AI risks, it said in a statement. "APRA will continue to monitor the use of AI to assess potential prudential risks and consider whether further APRA policy action may be needed," the regulator said. Reporting by Kumar Tanishk in Bengaluru; Editing by Tasim Zahid and Muralikumar Anantharaman
[3]
Australian regulator warns of ungoverned AI risks to banks
In a letter to industry, Apra warns that governance, risk management, assurance and operational resilience practices are not keeping pace with the scale, speed and complexity of AI adoption. The letter outlines the findings of a targeted supervisory review Apra undertook late last year, examining how AI was being deployed and governed.

The review noted that the expanded use of advanced AI is introducing a range of new financial and operational vulnerabilities for firms, but that information security practices are struggling to keep up with the pace of change. It also warns that frontier AI models such as Anthropic's Claude Mythos, which could enhance the discovery of vulnerabilities by bad actors, are expected to further increase the probability, speed and scale of cyber attacks.

Apra noted that bank boards have strong interest in AI's potential benefits, but many lack the technical literacy required to provide effective challenge to management on AI-related risks and oversight. It also detected heightened concentration risk, with some firms heavily dependent on a single provider for multiple AI use cases, and gaps in contingency planning. Apra further observed that AI functionality is often embedded within broader software platforms or developer tooling, reducing transparency over where and how models are trained, updated or constrained, and limiting firms' ability to fully assess and manage risks. The regulator says that existing change management and assurance approaches are often fragmented and may not provide sufficient assurance for AI.

"We cannot be blind to the risks of such powerful technology - whether in our own hands or the hands of those with malign intent," states Apra member Therese McCarthy Hockey.
"What we've observed from our supervisory engagement is that while AI adoption is continuing apace, the systems and processes required to safely govern its use aren't keeping up. Likewise, the speed at which entities can identify and patch vulnerabilities needs to operate much faster, commensurate with the AI-accelerated threat." The regulator is not proposing to introduce additional requirements at this stage says McCarthy Hockey, but does expect to see a significant improvement in how entities are closing the gaps between the power of the technology they are using and their ability to monitor and control it.
Australia's prudential regulator APRA has warned it will take enforcement action against financial firms failing to control AI risks, following a review that found information security practices struggling to keep pace with threats from frontier AI models like Anthropic's Mythos. Banks, insurers, and retirement funds face scrutiny over their over-reliance on third-party AI vendors and inadequate governance systems.
The Australian Prudential Regulation Authority has escalated its stance on AI risks, threatening enforcement action against financial firms that fail to implement adequate controls. In a letter published on Thursday, APRA warned that it will pursue stronger supervisory action and, where appropriate, enforcement against entities that do not adequately identify, manage, or control AI risks in a manner proportionate to their size, scale, and complexity [1]. The move reflects mounting urgency among global regulators as AI technology advances rapidly and introduces new vulnerabilities across the financial system.

APRA's warning follows a targeted supervisory review conducted late last year examining how banks, insurers, and retirement funds are deploying and governing artificial intelligence. The review identified several critical shortcomings, including information security practices struggling to keep pace with AI threats and over-reliance on third-party AI vendors [1]. The regulator found that governance, risk management, and operational resilience practices are not keeping up with the scale, speed, and complexity of AI adoption [3]. Many financial firms still lack the technical expertise to manage AI effectively, with bank boards showing strong interest in AI's potential benefits but lacking the technical literacy required to provide effective challenge to management on AI-related risks [2].
APRA is actively engaging across the sector on heightened cybersecurity threats from frontier AI models such as Anthropic Mythos, according to the letter from Therese McCarthy Hockey, an executive board member [1]. The regulator specifically flagged these advanced models as tools that could enhance the discovery of vulnerabilities by bad actors, thereby increasing the probability, speed, and scale of cyber attacks [2]. "We cannot be blind to the risks of such powerful technology - whether in our own hands or the hands of those with malign intent," stated McCarthy Hockey [3]. The regulator has heard clear recognition from regulated entities of the need for a step change in cyber practices in an evolving threat environment.
Among the areas of weakness identified, APRA warned about concentration risk, where firms are heavily reliant on a single provider for multiple AI use cases [1]. The review detected gaps in contingency planning and an over-reliance on vendor presentations and summaries without sufficient examination of key AI risks such as unpredictable model behavior and the impact on critical operations [1]. The regulator noted that AI functionality is often embedded within broader software platforms or developer tooling, reducing transparency over where and how models are trained, updated, or constrained, and limiting firms' ability to fully assess and manage risks [3].

APRA has called on companies to ensure credible fall-back processes where AI technology supports critical operations and demanded robust security testing across AI-generated code [1]. Among its expectations, the regulator said companies should prepare for timely action when AI tools are not operating as expected [1]. APRA is currently finalizing its forward plan for supervising AI risks and will continue to monitor the use of AI to assess potential prudential risks and consider whether further policy action may be needed [2]. While the regulator is not proposing additional requirements at this stage, McCarthy Hockey emphasized that APRA expects to see significant improvement in how entities close the gaps between the power of the technology they are using and their ability to monitor and control it [3].