Curated by THEOUTPOST
On Thu, 13 Mar, 12:02 AM UTC
3 Sources
[1]
Wall Street Flags New AI Risks From Hallucinations, Criminal Use
Goldman Sachs Group Inc., Citigroup Inc., JPMorgan Chase & Co. and other Wall Street firms are warning investors about new risks from the increasing use of artificial intelligence, including software hallucinations, employee-morale issues, use by cybercriminals and the impact of changing laws globally. The dangers newly flagged in the banks' annual reports include flawed or unreliable AI models, increased competition and new regulations restricting use of AI. JPMorgan, for example, said AI could cause "workforce displacement" that might affect staff morale and retention, and increase competition for hiring employees with the necessary technological skills, according to the firm's 2024 10-K.
[2]
Banking Giants Warn AI Can Bolster Cybercrime and Lower Morale | PYMNTS.com
Wall Street banking giants have reportedly begun warning investors about risks stemming from AI use. As Bloomberg News reported Wednesday (March 12), those risks include so-called artificial intelligence (AI) "hallucinations," use of the technology by cybercriminals and its effect on employee morale.

For example, the report said, JPMorgan said in a recent regulatory filing that AI could bring about "workforce displacement" that could affect worker morale and retention, while increasing competition for employees with the appropriate tech background.

Bloomberg notes that while banks have in recent years been pointing to AI risks in their annual reports, new concerns are emerging as the financial world embraces the technology. It's a balancing act: keeping on top of the latest AI advancements to retain customers, while also dealing with the threat of cybercrime.

"Having those right governing mechanisms in place to ensure that AI is being deployed in a way that's safe, fair and secure -- that simply cannot be overlooked," Ben Shorten, finance, risk and compliance lead for banking and capital markets in North America at Accenture, said in an interview. "This is not a plug-and-play technology."

The Bloomberg report adds that banks are at risk of using technologies that may be built on outdated, biased or inaccurate financial data sets. Citigroup said that as it introduces generative AI at the company, it faces the risk of analysts working with "ineffective, inadequate or faulty" results. The underlying data could also be incomplete, biased or inaccurate, which "could negatively impact its reputation, customers, clients, businesses or results of operations and financial condition," the bank said in its 2024 annual report.

PYMNTS wrote recently about the use of AI in cybercrime, arguing that it added to a broader landscape of cyberattacks in 2024 that included ransomware, zero-day exploits and supply chain attacks. "It is essentially an adversarial game; criminals are out to make money and the [business] community needs to curtail that activity. What's different now is that both sides are armed with some really impressive technology," Michael Shearer, chief solutions officer at Hawk, said in an interview with PYMNTS.

And last month, PYMNTS examined efforts by Amazon Web Services (AWS) to combat AI hallucinations using automated reasoning -- a method rooted in centuries-old principles of logic. The technique is a major leap in making AI outputs more reliable, which is particularly valuable for heavily regulated industries such as finance and health care, AWS Director of Product Management Mike Miller said in an interview.
[3]
Wall Street warns of new AI risks from hallucinations, criminal use
Goldman Sachs, Citigroup, JPMorgan Chase and other Wall Street firms are warning investors about new risks from the increasing use of artificial intelligence, including software hallucinations, employee-morale issues, use by cybercriminals and the impact of changing laws globally. The dangers newly flagged in the banks' annual reports include flawed or unreliable AI models, increased competition and new regulations restricting use of AI. JPMorgan, for example, said AI could cause "workforce displacement" that might affect staff morale and retention, and increase competition for hiring employees with the necessary technological skills, according to the firm's 2024 10-K.

Banks have been acknowledging AI-related risks in their annual reports for the past couple of years, but new concerns are cropping up as the financial sector increasingly embraces AI via its own software or third-party offerings. If banks don't keep up to date with the latest AI developments, they risk losing customers and business, they said in their annual reports. But increased AI use also opens them up to risks from cyberattacks and misuse.

"Having those right governing mechanisms in place to ensure that AI is being deployed in a way that's safe, fair and secure -- that simply cannot be overlooked," Ben Shorten, Accenture's lead for finance, risk and compliance for banking and capital markets in North America, said in an interview. "This is not a plug-and-play technology."

Banks are at risk of piloting technologies that may be built using outdated, biased or inaccurate financial data sets. JPMorgan's annual report said there are dangers around developing and maintaining models that have the highest level of "data quality." Citigroup said that as it rolls out generative AI in select parts of the bank, there are risks of "ineffective, inadequate or faulty" results being produced for its analysts. The data could also be incomplete, biased or inaccurate, which "could negatively impact its reputation, customers, clients, businesses or results of operations and financial condition," according to its 2024 annual report.

Integrating AI

Goldman Sachs said that while it has increased its investment in digital assets, blockchain and AI, growing competition poses risks to integrating AI technologies quickly enough to boost productivity, reduce costs and give clients better transactions, products and services, according to the firm's latest annual report. That could affect customer attraction and retention, Goldman said.

Financial firms also face the challenge of maintaining data privacy and regulatory compliance in an environment that is "less certain and rapidly evolving," Shorten said. In 2024, the EU Artificial Intelligence Act went into effect, establishing new rules on the use of AI systems in the region, where many U.S. banks have operations. "This act establishes rules for placing on the market, putting into service and using a lot of artificial intelligence systems in the EU," Shorten said. "The outlook for the U.S. and the U.S. market is less clear."

Banks are using a combination of their own AI tools and ones acquired from outside providers. Citigroup is rolling out a suite of tools that can synthesize key information from public filings. AI @ Morgan Stanley Debrief is taking on rote tasks with a ChatGPT-like interface. And Goldman's private-wealth division is using AI to evaluate portfolios and analyze dozens of underlying positions, said Chief Information Officer Marco Argenti.
"It's so important to take a responsible approach and really be applying controls so that you protect yourself from potential inaccuracies and hallucinations," he said last week at the Bloomberg Invest conference in New York. JPMorgan CEO Jamie Dimon said AI may be the biggest issue his bank is grappling with. In his annual shareholder letter, he likened AI's potential impact to that of the steam engine and said the technology could "augment virtually every job." Representatives for the banks declined to comment beyond the AI disclosures in their annual reports. As banks increasingly turn to AI, cybercriminals are doing the same -- and are becoming increasingly sophisticated in its use, according to Shorten. Accenture's most recent global survey of 600 cybersecurity executives in the banking industry found their teams are struggling to keep up with their organizations' AI adoption efforts. Among respondents, 80% believe generative AI is empowering criminals faster than banks can respond. Morgan Stanley said in its latest annual report that generative AI, remote work and integrating third-party technology could pose a risk to data privacy. The risks introduced by using AI while working from home will require firms to set up rules to avoid problems, Shorten said. "These steps are only going to increase in criticality," he said, "as attackers are being enabled by this technology faster than the banks are able to respond."
Major financial institutions like Goldman Sachs, Citigroup, and JPMorgan Chase are alerting investors to emerging risks associated with the widespread adoption of artificial intelligence, including AI hallucinations, cybercriminal exploitation, and potential impacts on workforce dynamics.
Major Wall Street firms, including Goldman Sachs, Citigroup, and JPMorgan Chase, are sounding the alarm on new risks associated with the increasing use of artificial intelligence (AI) in the financial sector. These concerns, highlighted in their annual reports, encompass a range of issues from AI hallucinations to cybercrime and workforce challenges [1][2].
One of the primary concerns raised by banks is the risk of AI hallucinations – instances where AI systems produce inaccurate or unreliable outputs. Citigroup, for example, warned about the potential for "ineffective, inadequate or faulty" results from generative AI systems being rolled out in select parts of the bank [3]. The quality and accuracy of data used to train AI models is a critical issue, with JPMorgan emphasizing the importance of maintaining the highest level of "data quality" in its AI models [3].
The adoption of AI technologies has also opened up new avenues for cybercriminals. Banks are increasingly concerned about sophisticated AI-powered cyberattacks. According to a survey by Accenture, 80% of cybersecurity executives in the banking industry believe that generative AI is empowering criminals faster than banks can respond [3]. This highlights the urgent need for robust cybersecurity measures to keep pace with AI advancements.
JPMorgan's annual report pointed out that AI could lead to "workforce displacement," potentially affecting staff morale and retention [1]. This shift in the employment landscape could also intensify competition for employees with specialized technological skills [2]. The integration of AI into various job functions is forcing banks to reconsider their hiring strategies and employee development programs.
The rapidly evolving regulatory environment surrounding AI presents another significant challenge for financial institutions. The implementation of the EU Artificial Intelligence Act in 2024 has established new rules for AI systems in the region, affecting many U.S. banks with European operations [3]. However, the regulatory outlook in the U.S. remains less clear, adding to the complexity of compliance efforts.
While AI offers numerous benefits, it also introduces competitive pressures. Goldman Sachs noted that failing to integrate AI technologies in a timely manner could affect its ability to attract and retain customers [3]. Banks are now in a race to leverage AI for improved productivity, cost reduction, and enhanced customer services.
Financial institutions are walking a tightrope between embracing AI innovations and managing associated risks. As Marco Argenti, Chief Information Officer at Goldman Sachs, stated, "It's so important to take a responsible approach and really be applying controls so that you protect yourself from potential inaccuracies and hallucinations" [3].
To address these challenges, banks are implementing governance mechanisms and controls to ensure safe, fair, and secure deployment of AI. As Ben Shorten from Accenture emphasized, "This is not a plug-and-play technology" [2]. The financial sector's approach to AI will need to evolve continuously, balancing innovation with robust risk management strategies.
[1] https://www.bloomberg.com/news/articles/2025-03-12/wall-street-flags-new-ai-risks-from-hallucinations-criminal-use
[2] https://www.pymnts.com/artificial-intelligence-2/2025/banking-giants-warn-ai-can-bolster-cybercrime-and-lower-morale/
[3] https://www.seattletimes.com/business/wall-street-warns-of-new-ai-risks-from-hallucinations-criminal-use/