Sources
[1]
MPs call for AI stress testing in financial services
Committee says watchdogs lack urgency as accountability for automated decisions remains unresolved.

UK financial regulators must conduct stress testing to ensure businesses are ready for AI-driven market shocks, MPs have warned. The Bank of England, Financial Conduct Authority, and HM Treasury risk exposing consumers and the financial system to "potentially serious harm" by taking a wait-and-see approach, according to a House of Commons Treasury Committee report published today.

During its hearings, the committee found a troubling lack of accountability and understanding of the risks involved in spreading AI across the financial services sector. David Geale, the FCA's Executive Director for Payments and Digital Finance, said individuals within financial services firms were "on the hook" for harm caused to consumers through AI. Yet trade association Innovate Finance testified that management in financial institutions struggled to assess AI risk. The "lack of explainability" of AI models directly conflicted with the senior managers regime's requirement that those managers demonstrate they understood and controlled risks, the committee argued.

The committee said there should be clear lines of accountability when AI systems produce harmful or unfair outcomes. "For instance, if an AI system unfairly denies credit to a customer in urgent need - such as for medical treatment - there must be clarity on who is responsible: the developers, the institution deploying the model, or the data providers."

The committee also slammed the government for failing to implement the Critical Third Parties regime, which it introduced in January last year to give the FCA and the Bank of England the power to investigate non-financial firms that provide critical services to the UK financial services sector, including AI and cloud providers. "Over a year since the regime was established, it is not clear to us why HM Treasury has been so slow to use the new powers at its disposal. The Bank of England's Financial Policy Committee must monitor the regime's progress and, if necessary, use its power of recommendation to HM Treasury to ensure swift implementation," the report said.

Financial services is one of the UK's most important economic sectors. In 2023, it contributed £294 billion to the economy, or around 13 percent of the gross value added of all economic sectors. However, successive governments have adopted a light-touch approach to AI regulation for fear of discouraging investment.

Treasury Select Committee chair Dame Meg Hillier said: "Firms are understandably eager to try and gain an edge by embracing new technology, and that's particularly true in our financial services sector, which must compete on the global stage.

"Based on the evidence I've seen, I do not feel confident that our financial system is prepared if there was a major AI-related incident and that is worrying. I want to see our public financial institutions take a more proactive approach to protecting us against that risk."
[2]
UK Lawmakers Call for AI Stress Tests on Banks as Risks Mount
The committee called on the FCA to publish guidance on the guardrails around AI in the financial sector by the end of the year, including identifying the individuals responsible, and for cloud providers to be designated critical industries to improve regulatory oversight.

Britain's financial regulators should start stress-testing the risks posed by artificial intelligence because their current wait-and-see approach leaves the public and economy in danger of "serious harm," a group of lawmakers warned.

The Bank of England, the Financial Conduct Authority and the Treasury are failing to keep up with the threat from AI, which is now used by more than three quarters of UK financial services firms, Parliament's Treasury Select Committee concluded in a new report. One obvious place to start is assessing the potential fallout from AI-driven market shocks, it said.

Financial institutions are rapidly adopting AI for all kinds of services, from helping devise investment strategies to automating administrative functions and processing insurance claims. The committee said there may be "considerable benefits to consumers" but warned that reliance on AI and cloud providers is a new stability risk.

"Based on the evidence I've seen, I do not feel confident that our financial system is prepared if there was a major AI-related incident and that is worrying," TSC Chair Meg Hillier said. "I want to see our public financial institutions take a more proactive approach to protecting us against that risk."

The Members of Parliament called on the FCA, the consumer regulator, to publish guidance on the guardrails around AI in the financial sector by the end of the year, including identifying the individuals responsible. Cloud providers like Amazon Web Services and Google Cloud should also be designated critical industries to improve regulatory oversight, they added.

Both cloud providers told MPs during the inquiry that they expect to be brought into the regulators' "critical third parties regime," with Google revealing it is already prepared, but the Treasury has yet to act. Underlining the threat, AWS was hit by a massive outage in October, during the inquiry, that knocked several companies, including Lloyds Banking Group, offline.

Industry specialists warned that AI trading amplifies herding and therefore increases financial stability risks, as well as heightening cybersecurity vulnerabilities. Jonathan Hall, an external member of the BOE's Financial Policy Committee, told lawmakers that an AI-specific market stress test would be valuable.

The Treasury responded with the announcement of two new "AI Champions" to spearhead the safe roll-out of AI in financial services. Harriet Rees from Starling Bank and Rohit Dhawan from Lloyds Banking Group will help firms seize opportunities "while protecting consumers and financial stability." They will report to Economic Secretary to the Treasury Lucy Rigby.

A Treasury spokesman said the government wants to "strike the right balance between managing the risks posed by AI and unlocking its huge potential." Both the BOE and the FCA said they welcomed the TSC report and will consider the recommendations "carefully."
[3]
Britain needs 'AI stress tests' for financial services, lawmakers say
LONDON, Jan 20 (Reuters) - Britain's financial watchdogs are not doing enough to stop artificial intelligence from harming consumers or destabilising markets, a cross-party group of lawmakers said on Tuesday, urging regulators to move away from what it called a "wait and see" approach.

In a report on AI in financial services, the Treasury Committee said the Financial Conduct Authority and the Bank of England should start running AI-specific stress tests to help firms prepare for market shocks triggered by automated systems. The committee also called on the FCA to publish detailed guidance by the end of 2026 on how consumer protection rules apply to AI, and on the extent to which senior managers should be expected to understand the systems they oversee.

"Based on the evidence I've seen, I do not feel confident that our financial system is prepared if there was a major AI-related incident and that is worrying," committee chair Meg Hillier said in a statement.

TECHNOLOGY CARRIES 'SIGNIFICANT RISKS'

A race among banks to adopt agentic AI, which unlike generative AI can make decisions and take autonomous action, runs new risks for retail customers, the FCA told Reuters late last year.

About three-quarters of UK financial firms now use AI. Companies are deploying the technology across core functions, from processing insurance claims to performing credit assessments.

While the report acknowledged the benefits of AI, it warned the technology also carried "significant risks" including opaque credit decisions, the potential exclusion of vulnerable consumers through algorithmic tailoring, fraud, and the spread of unregulated financial advice through AI chatbots.

Experts contributing to the report also highlighted threats to financial stability, pointing to the reliance on a small group of U.S. tech giants for AI and cloud services. Some also noted that AI-driven trading systems may amplify herding behaviour in markets, risking a financial crisis in a worst-case scenario.

An FCA spokesperson said the regulator welcomed the focus on AI and would review the report. The regulator has previously indicated it does not favour AI-specific rules due to the pace of technological change. The BoE did not respond to a request for comment.

Hillier told Reuters that increasingly sophisticated forms of generative AI were influencing financial decisions. "If something has gone wrong in the system, that could have a very big impact on the consumer," she said.

Separately, Britain's finance ministry appointed Starling Bank CIO Harriet Rees and Lloyds Banking Group's (LLOY.L) Rohit Dhawan as "AI Champions" to help steer AI adoption in financial services.

Reporting by Phoebe Seers; Editing by Tommy Reggiori Wilkes
[4]
UK exposed to 'serious harm' by failure to tackle AI risks, MPs warn
Government, Bank of England and FCA criticised for taking 'wait-and-see' approach to AI use in financial sector.

Consumers and the UK financial system are being exposed to "serious harm" by the failure of government and the Bank of England to get a grip on the risks posed by artificial intelligence, an influential parliamentary committee has warned.

In a new report, MPs on the Treasury committee criticise ministers and City regulators, including the Financial Conduct Authority (FCA), for taking a "wait-and-see" approach to AI use across the financial sector. That is despite looming concerns over how the burgeoning technology could disadvantage already vulnerable consumers, or even trigger a financial crisis, if AI-led firms end up making similar financial decisions in response to economic shocks.

More than 75% of City firms now use AI, with insurers and international banks among the biggest adopters. It is being used to automate administrative tasks or even help with core operations, including processing insurance claims and assessing customers' credit-worthiness. But the UK has failed to develop any specific laws or regulations to govern firms' use of AI, with the FCA and Bank of England claiming general rules are sufficient to ensure positive outcomes for consumers. That means businesses have to determine how existing guidelines apply to AI, leaving MPs worried this could put consumers and financial stability at risk.

"It is the responsibility of the Bank of England, the FCA and the government to ensure the safety mechanisms within the system keeps pace," said Meg Hillier, chair of the Treasury committee. "Based on the evidence I've seen, I do not feel confident that our financial system is prepared if there was a major AI-related incident and that is worrying."

The report flagged a lack of transparency around how AI could influence financial decisions, potentially affecting vulnerable consumers' access to loans or insurance. It said it was also unclear whether data providers, tech developers or financial firms would be held responsible when things went wrong. MPs said AI also increased the likelihood of fraud, and the dissemination of unregulated and misleading financial advice.

In terms of financial stability, MPs found that rising AI use increased firms' cybersecurity risks, and left them overly reliant on a small number of US tech companies, such as Google, for essential services. Its uptake could also amplify "herd behaviour", with businesses making similar financial decisions during economic shocks and "risking a financial crisis".

The Treasury committee is now urging regulators to take action, including the launch of new stress tests that would assess the City's readiness for AI-driven market shocks. MPs also want the FCA to publish "practical guidance" by the end of the year, clarifying how consumer protection rules apply to AI use, and who would be held accountable if consumers suffer any harm. "By taking a wait-and-see approach to AI in financial services, the three authorities are exposing consumers and the financial system to potentially serious harm", the report said.

The FCA said it had already "undertaken extensive work to ensure firms are able to use AI in a safe and responsible way", but would review the report's findings "carefully". A spokesperson for the Treasury said: "We've been clear that we will strike the right balance between managing the risks posed by AI and unlocking its huge potential." They added that this involved working with regulators to "strengthen our approach as the technology evolves", and appointing new "AI champions" covering financial services "to ensure we seize the opportunities it presents in a safe and responsible way".

A spokesperson for the Bank of England said it had "already taken active steps to assess AI-related risks and reinforce the resilience of the financial system, including publishing a detailed risk assessment and highlighting the potential implications of a sharp fall in AI-affected asset prices. We will consider the committee's recommendations carefully and will respond in full in due course."
[5]
UK Parliamentary Panel Flags AI Oversight Gaps That Could Expose Financial System to Harm
Observers say regulatory ambiguity risks holding back responsible AI deployment as systems grow harder to oversee.

A UK parliamentary committee has warned that the rapid adoption of artificial intelligence across financial services is outpacing regulators' ability to manage risks to consumers and the financial system, raising concerns about accountability, oversight, and reliance on major technology providers.

In findings ordered to be printed by the House of Commons earlier this month, the Treasury Committee said UK regulators, including the Financial Conduct Authority, the Bank of England, and HM Treasury, are leaning too heavily on existing rules as AI use spreads across banks, insurers, and payment firms. "By taking a wait-and-see approach to AI in financial services, the three authorities are exposing consumers and the financial system to potentially serious harm," the committee wrote.

AI is already embedded in core financial functions, the committee said, while oversight has not kept pace with the scale or opacity of those systems. The findings come as the UK government pushes to expand AI adoption across the economy, with Prime Minister Keir Starmer pledging roughly a year ago to "turbocharge" Britain's future through the technology.

While noting that "AI and wider technological developments could bring considerable benefits to consumers," the committee said regulators have failed to provide firms with clear expectations for how existing rules apply in practice. The committee urged the Financial Conduct Authority to publish comprehensive guidance by the end of 2026 on how consumer protection rules apply to AI use and how responsibility should be assigned to senior executives under existing accountability rules when AI systems cause harm. Formal minutes are expected to be released later this week.

"To its credit, the UK got out ahead on fintech - the FCA's sandbox in 2015 was the first of its kind, and 57 countries have copied it since. London remains a powerhouse in fintech despite Brexit," Dermot McGrath, co-founder at Shanghai-based strategy and growth studio ZenGen Labs, told Decrypt.

Yet while that approach "worked because regulators could see what firms were doing and step in when needed," artificial intelligence "breaks that model completely," McGrath said. The technology is already widely used across UK finance. Still, many firms lack a clear understanding of the very systems they rely on, McGrath explained. This leaves regulators and companies to infer how long-standing fairness rules apply to opaque, model-driven decisions.

McGrath argues the larger concern is that unclear rules may hold back firms trying to deploy AI, to the point where "regulatory ambiguity stifles the firms doing it carefully." AI accountability becomes more complex when models are built by tech firms, adapted by third parties, and used by banks, leaving managers responsible for decisions they may struggle to explain, McGrath explained.
[6]
MPs slam regulators for failing to act on financial AI risks
The influential Treasury Select Committee of cross-party politicians accuses the Bank of England, the Financial Conduct Authority (FCA) and the Treasury of exposing the public and the financial system to potentially serious harm due to their current wait-and-see positions on the escalating use of artificial intelligence by financial firms.

According to evidence received by the Committee, more than 75% of UK financial services firms are now using AI, with the largest take-up among insurers and international banks. AI is being used by businesses in a variety of ways, including to automate administrative functions and to deliver core services such as processing insurance claims and credit assessments.

MPs acknowledge that AI and wider technological developments could bring considerable benefits to consumers but emphasise that action is needed to ensure that this is done safely. One recommendation is for the Bank of England and the FCA to conduct AI-specific stress-testing to boost businesses' readiness for any future AI-driven market shock. The Treasury Committee is also recommending that the FCA should publish practical guidance on AI for firms by the end of this year. This should include how consumer protection rules apply to their use of AI, as well as a clearer explanation of who in those organisations should be accountable for harm caused through AI.

The Committee also takes aim at critical third party providers, noting that a new regime set up to give the FCA and the Bank of England powers of investigation and enforcement over non-financial firms has yet to take action against any of the major cloud and AI providers.

Chair of the Treasury Select Committee, Dame Meg Hillier, says: "The use of AI in the City has quickly become widespread and it is the responsibility of the Bank of England, the FCA and the Government to ensure the safety mechanisms within the system keeps pace.

"Based on the evidence I've seen, I do not feel confident that our financial system is prepared if there was a major AI-related incident and that is worrying. I want to see our public financial institutions take a more proactive approach to protecting us against that risk."
A UK Treasury Committee report warns that the Bank of England, Financial Conduct Authority, and HM Treasury are exposing consumers and the financial system to serious harm by taking a wait-and-see approach to AI adoption. With over 75% of UK financial firms now using AI, MPs are calling for urgent stress testing to prepare for AI-driven market shocks and clear accountability frameworks.
The UK's financial watchdogs are failing to keep pace with the rapid spread of artificial intelligence across the financial sector, according to a damning House of Commons Treasury Committee report published in January 2026 [1]. The cross-party group of lawmakers warned that the Bank of England, Financial Conduct Authority (FCA), and HM Treasury risk exposing consumers and the financial system to "potentially serious harm" by maintaining their wait-and-see approach to AI in financial services [2]. With more than 75% of UK financial services firms now deploying AI across core functions, from processing insurance claims to performing credit assessments, the committee found a troubling lack of accountability for AI-driven decisions and insufficient understanding of AI risk among financial institutions [3].
During parliamentary hearings, a critical contradiction emerged regarding responsibility when AI systems cause consumer harm. David Geale, the FCA's Executive Director for Payments and Digital Finance, testified that individuals within financial services firms were "on the hook" for harm caused through AI [1]. However, trade association Innovate Finance revealed that management in financial institutions struggled to assess AI risk due to the "lack of explainability" of AI models, directly conflicting with regulatory requirements for senior managers to demonstrate they understood and controlled risks [1]. Treasury Committee chair Meg Hillier emphasized the urgency: "Based on the evidence I've seen, I do not feel confident that our financial system is prepared if there was a major AI-related incident and that is worrying" [4]. The report demanded clear lines of accountability, citing scenarios where AI systems unfairly deny credit to customers in urgent need, such as for medical treatment, leaving ambiguity about whether developers, deploying institutions, or data providers bear responsibility [1].
MPs are demanding that regulators implement AI stress testing to assess the financial sector's readiness for AI-driven market shocks [2]. Industry specialists warned that AI trading amplifies herding behaviour, increasing risks to the financial system, while also heightening cybersecurity vulnerabilities [2]. Jonathan Hall, an external member of the Bank of England's Financial Policy Committee, told lawmakers that AI-specific market stress tests would be valuable [2]. The committee also highlighted significant systemic risk from financial firms' reliance on a small group of US tech giants for AI and cloud services, including Amazon Web Services and Google Cloud [4]. An AWS outage in October, during the inquiry, knocked several companies, including Lloyds Banking Group, offline, underlining the threat [2].
The committee sharply criticized the government for failing to implement the Critical Third Parties regime, introduced in January 2025 to give the FCA and Bank of England the power to investigate non-financial firms, including AI and cloud providers, that supply critical services to the UK financial sector [1]. "Over a year since the regime was established, it is not clear to us why HM Treasury has been so slow to use the new powers at its disposal," the report stated [1]. Both Amazon Web Services and Google Cloud told MPs they expect to be brought into the regime, with Google revealing it is already prepared, yet the Treasury has yet to act [2]. MPs called for cloud providers to be designated critical industries to improve regulatory oversight [2].
The committee called on the FCA to publish guidance on AI safeguards in the financial sector by the end of 2026, including identifying the individuals responsible when opaque credit decisions or other AI failures harm consumers [2]. The report warned that AI carries "significant risks", including the potential exclusion of vulnerable consumers through algorithmic tailoring, fraud, and the spread of unregulated financial advice through AI chatbots [3]. Financial services contributed £294 billion to the UK economy in 2023, around 13% of gross value added across all economic sectors [1]. However, successive governments have adopted a light-touch approach to AI regulation for fear of discouraging investment [1].

In response to the mounting pressure, the Treasury announced two new AI Champions to spearhead the safe roll-out of AI in financial services: Harriet Rees from Starling Bank and Rohit Dhawan from Lloyds Banking Group [2]. They will help firms seize opportunities "while protecting consumers and financial stability" and report to Economic Secretary to the Treasury Lucy Rigby [2]. A Treasury spokesman said the government wants to "strike the right balance between managing the risks posed by AI and unlocking its huge potential" [2]. Both the Bank of England and the FCA said they welcomed the report and would consider its recommendations "carefully" [2]. Industry observers note that regulatory ambiguity risks holding back responsible AI deployment as systems grow harder to oversee [5]. Dermot McGrath, co-founder at ZenGen Labs, told Decrypt that while the UK's fintech sandbox approach "worked because regulators could see what firms were doing," artificial intelligence "breaks that model completely" due to the opacity of model-driven decisions [5].