Curated by THEOUTPOST
On Wed, 27 Nov, 12:05 AM UTC
4 Sources
[1]
NextGen: AI: AI's impact on regulation, operational resilience and customer experience
This content has been selected, created and edited by the Finextra editorial team based upon its relevance and interest to our community.

The session was moderated by Finextra's Gary Wright and featured speakers Jonathan Ede, director of data technology at CACI; Aman Luther, AI lead at AFME; and Stathis Onasoglou, CFA, EMEA FSI principal at Google Cloud.

Ede began by summarising the biggest current challenge in AI adoption. "The best use and adoption of AI is to ensure that it permeates throughout the entire organisation. The real value from AI is to lift it from where we currently see it - in these point-based solutions and siloed implementations - and spread it across the entire organisation. Right now we're significantly limited by integrations. AI systems are constrained because they are deployed in single applications, single systems. To really get good value, these AI systems need to be joined together. That also means we need better integration of data, and better availability of data."

Luther added that banks have historically struggled with any kind of technology change, given the amount of technical debt and legacy infrastructure in financial services, and AI is no different. At AFME, however, they are seeing a cultural shift. "A lot of banks are very aware of that risk, and they've started to hire the right skill set and change their model slightly. For example, we work very much on the risk and governance side, helping banks assess any new proposal and determining whether they should go ahead with it or not. Banks are coming to us and saying: 'Hey guys, can you help us with this? How can we look at this?' Because AI is different to their normal tech appraisal process, we need a whole new process for it."

Ede continued that AI is different from other technical updates, and goes beyond just looking at ROI.
"What we're seeing here is that the strategy around AI needs to be elevated above the CTO, above the CIO. It sits very much with the CEO, because this is not about technology anymore. It's not about data, not about insight. It's about, fundamentally, how organisations act, how they go about their business. And I think ultimately, it's really quite difficult for anybody to grasp."

Onasoglou expanded on Ede's commentary. "The term ROI was mentioned, and return on investment is not always financial. Usually some kind of value can be articulated, either top-line revenue or cost cutting, cost mitigation, or in more abstract terms, innovation or customer experience. But ultimately, all these things might lead to financial benefits."

The conversation then turned toward governance and regulation. Luther started by saying that, while people might argue against it, regulation does slow down innovation. He cited the UK, US and EU as examples of regions going about AI in different ways. Frameworks like the EU AI Act will put additional burdens on organisations, especially smaller companies. And while it is necessary to draw boundaries, the question becomes a societal one: where do we draw the line?

Ede concurred: "Does regulation ever promote innovation? I'm not necessarily sure it does. I think the regulation that has been proposed so far is pragmatic, and I don't think it stifles innovation. AI is so fast moving that we need guardrails, but those regulations also need to be fast moving. You do find that regulations are holding back the growth of AI technologies within the UK. We need to look at that, because we aren't going to see those same constraints in other nations, such as China. So we need to make sure those regulations are adapting, to ensure they aren't stifling future innovation."
Onasoglou then referred to research Google recently conducted among 340 senior decision makers across retail and commercial banking. When talking about AI blockers and impediments, they found that the "top two factors were lack of clean, analysis-ready data - organisations are data rich but insights poor - and a lack of regulatory insights. Not necessarily restrictive regulation, but a lack of clarity in the landscape."

Lastly, the panel turned toward balancing AI's impact on the customer with an organisation's operational resilience. Onasoglou explained that, as a 'recovering consultant', he likes to look at things in an action priority matrix, and found that operational efficiency and customer experience are not competing priorities.

Luther explained that, coming from the wholesale banking side, they are seeing fewer customer-facing use cases. When it comes to operational efficiency, "we've seen a huge push and a huge gain. Some of our members are using AI for things like detecting and predicting failed transactions. We've seen firms deploying AI to predict which transaction is likely to fail and which one isn't, in order to deal with it before it fails in the first place. And that creates a huge bottom-line benefit, both for their firm and for firms down the road."

Ede explained that, working almost exclusively with retail banks, he has seen many gains in the customer experience area. "Always start with the customer experience; that leads all the way to personalising experiences. And what we find, from the front-end systems, is that the customer experience is being dramatically changed and improved with the use of AI. But what we're not seeing just yet is that going one step back again. So taking those improvements to customer experience, and making sure they feed all the way back into how the organisation operates internally, and how it organises itself to ensure it is more joined up. Organisations are not yet treating themselves how they treat their customers."

Wright concluded the session: "There are a lot of strategies that need to change - modernisation strategies or transformation strategies that respond to regulation, to competition, and to customer needs - and AI is a considerable part of that."
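The failed-transaction use case Luther describes is, at its core, a binary classification problem. The panel gave no implementation details, so the following is only an illustrative sketch: a tiny logistic-regression model trained by gradient descent on synthetic transaction features. The feature names, the synthetic failure pattern, and all function names here are invented for the example.

```python
import math
import random

def train_logistic(rows, labels, lr=0.1, epochs=300):
    """Fit a minimal logistic-regression classifier with stochastic gradient descent."""
    w = [0.0] * len(rows[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))      # predicted failure probability
            err = p - y                          # gradient of the log loss
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic history: [normalised amount, prior failures (scaled), missing-field flag].
random.seed(0)
history, outcomes = [], []
for _ in range(400):
    amount = random.random()
    prior = random.randint(0, 3)
    missing = random.randint(0, 1)
    # In this toy data, failures correlate with prior failures and missing data.
    fail = 1 if (prior >= 2 or missing) and random.random() < 0.8 else 0
    history.append([amount, prior / 3.0, float(missing)])
    outcomes.append(fail)

w, b = train_logistic(history, outcomes)
risky = predict(w, b, [0.9, 1.0, 1.0])   # repeat offender, incomplete data
clean = predict(w, b, [0.2, 0.0, 0.0])   # no red flags
```

Transactions scoring above a chosen threshold could then be routed for repair before submission, which is the "deal with it before it fails" benefit the panel described.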
[2]
NextGen: AI: How the intelligence revolution is driving what AI can do for banking
Dr Lyske highlighted the transformative potential of AI in banking, pointing to how his company uses AI technology to stress test various scenarios and come up with detailed solutions. There are many positives to the influx of AI integration in every field, but there are also ethical problems to be considered, he noted, pointing to the use of AI to replicate human behaviour, potentially bypassing human input.

"Our next revolution is here. I know it's been spoken about today already, but it's the intelligence revolution: data converted to information, converted to knowledge and processed with wisdom to predict an outcome, to help us reduce risk and increase certainty. After all, prediction of the future is what intelligence is all about. Now, through this lens, how does it change the perception of what AI can do for us?"

On risk management, Dr Lyske stated that AI has the ability to price and manage risk, which could shape compliance and product creation; for example through tokenising routine behaviours, and monitoring crypto wallets and social media for accurate KYC and AML practices.

Dr Lyske then discussed the possibilities of Web3.0, which redefines how people interact with each other, their data, and institutions. He stated that with the rise of social media, data is owned by the platform and permissions are extended to the individual. Banking is built on trust, but "individuals are tired of this notion of trust and permission, and have consequently embraced technologies that allow them to do what they want to do in a trustless and permissionless way. This is the cornerstone of Web3.0 as a social, political and economic endeavour." He continued that cryptocurrencies have satisfied this need.
Dr Lyske stated that through the development of AI, there is a future where AI can ratify global identities and facilitate seamless transactions across borders. He explained how banks can act as 'blockchain oracles', connecting blockchains to external systems based on inputs from the real world. Banks can grant blockchain permissions, ratify customer identities and legitimise transactions; banks hold the key to verification and monitoring through the use of AI.

The first panel session of the event, 'Where next with AI?', was moderated by Daniel Szmukler, director of the Euro Banking Association (EBA), with panellists Dr Jochen Papenbrock, head of financial technology EMEA at NVIDIA, and Jeff Tijssen, global head of fintech at Bain & Company.

Szmukler stated that the EBA believes AI is going to be a transformational technology in the coming years because of its "human touch". "AI immediately touches you, because it has very real applications that everyone, intergenerationally speaking, can make use of. That is something that is quite unique about this technology; it's very tangible in the right use cases and applications, which are manifold."

When asked about AI unpredictability and challenges with AI adoption, Tijssen emphatically stated: "The challenge with financial services, from a regulatory perspective particularly, is that 97% or even 99% accuracy isn't enough, and therefore you need to work towards 100% accuracy." He outlined how a big challenge lies in explainability, which leads to complex discussions on how to ensure it is offered to customers, users, and stakeholders consistently.

Dr Papenbrock agreed with Tijssen's points, and stated that another sizable obstacle to adoption is a lack of focus on technology: "AI needs a lot of technology to be successful. At Nvidia, we think about this in terms of the AI factory.
So a bank or financial organisation is an institution that produces financial intelligence, made by humans, served by humans, and AI needs to be in the loop wherever it can be. This is the 'AI Centre of Excellence'. It's based on an AI factory that serves all these needs, because running huge models, LLMs and so forth, is very expensive if you don't do it in a proper way." He detailed that the 'AI factory' is where AI processes can be customised, activated, and orchestrated.

Tijssen then noted that the lack of a regulatory framework is also a major concern, and stated that banks need to gain new skill sets to effectively reach the potential of AI. He highlighted that a mindset shift is essential to drive adoption: organisations need to be committed to driving growth in every aspect of their business and to looking for new opportunities, products, and initiatives.

Szmukler said that banks are focusing on technology elements over cultural needs, which can have a negative impact on their reputation and customer relationships. He explained: "I think it's fair to say that banks by culture are quite conservative, because the single biggest asset you put forward as a bank is the trust that your customers have in you. If you cannot really audit how AI has come to certain conclusions, if there's this 'black box' phenomenon, if it's potentially unethical or unfair in its outcomes, this would cause tremendous reputational damage to a bank, and it would deprive the bank of building on its single biggest asset, which is trust. It's important that we look at all the vectors at play - technology and data - and they all come back to culture."

When discussing solutions for AI adoption, Dr Papenbrock said that there needs to be an interoperable platform to process data at scale, and that accelerated computing programs can improve data curation and processing.
He added that it is important to have a space to test and fine-tune AI models to ensure they are safe and accessible. Concluding the panel, the speakers emphasised the need for explainable AI to prevent bias, and the significance of education and training for AI adoption. Tijssen noted that while education and theoretical knowledge of AI are essential, there need to be people at the executive level pushing for AI adoption and improvement for it to be implemented. Dr Papenbrock stated that the Deep Learning Institute offers free courses, and that Nvidia is looking to democratise AI access through educational programmes.
[3]
NextGen: AI: Busting five AI myths
The goal of the session was to bust five AI myths with the help of audience participation. The panel facilitated this with interactive polls and commentary.

Myth 1: Human-in-the-loop will not be a vital role in financial services

The first myth addressed was that the human-in-the-loop (HITL) will not be a vital role in the future. 84% of the audience disagreed, and so did the panellists. "We're going to need human-in-the-loop going forward," Bastiman commented. "We're seeing regions worldwide starting to introduce regulations in financial services. Human-in-the-loop is part of that, and having human decision making is huge. I really liked what Jochen said in his session earlier: rather than being 'human-in-the-loop', it should be 'AI in the human process'."

McDowell added an example from his time as a trader: "When algorithmic trading and machine learning came in, people thought it would be the end of the human trader - and it wasn't. You still need humans accountable to the regulator and to the clients. It's about augmenting the job, not losing the job." Tracy added that low-risk, no-impact automation will take humans out of the loop to some extent, but understanding context and nuance is crucial. When Bell-Hosking asked whether one of the audience members who voted 'agree' would like to comment, a delegate from Swift said that we will need the human less and less as we start trusting AI more, and become more aware of how its processes work.

Myth 2: Soft skills will become unimportant in financial services

The soft skills in question include critical thinking, attention to detail, interpersonal skills, negotiation, empathy, and collaboration.
Bastiman disagreed with the statement and started with an example: "Particularly in the financial crime space, the transaction patterns of someone working multiple zero-hour contracts - where money-in is quickly followed by money-out, to having to literally put cash in a jar to put money on their gas card because they're not even able to get direct debits - overlap with an awful lot of the rules that we see for money laundering activity. In some respects, critical thinking is more important than mathematics. Being able to look at information that you're presented with and make critical assessments of it is one of the most important skills of this century."

McDowell expanded on the importance of soft skills inside a company: "Everyone wants to work for a good manager. No one wants to get their end-of-year review from a chatbot that's read your emails all year. The more senior you get, the more EQ you want rather than IQ, because that's what brings people together."

An audience member from Santander agreed, stating: "AI decisions need to go through a human lens. Those skills will become more important as we train AI, so it can lead us to decisions that are more human-led." Another audience member from S&P Global added: "And on top of that, what's important is to develop inquisitiveness - asking critical, good questions. AI is getting better and better, so it's very important to know exactly what to ask and how to ask it, in order to get answers that are going to be valid for us in the future. That is part of the analytical mindset that I think is going to be important." Tracy saw both sides, stating there was a kernel of truth in the statement: "Soft skills will remain important, but what those skills are might change."

Myth 3: AI isn't human and therefore doesn't have bias

The third myth to be busted was that AI does not have bias, which was unanimously disagreed with by the panel.
McDowell stated that, depending on the training data, AI can have significant bias. Bastiman explained there are mathematical proofs showing that, if AI models are trained on unbalanced data sets, the model itself will be biased one way or another. Tracy commented: "Again, I think there is a kernel of truth in it. The bias I worry about is the human bias. Maybe I'm too pessimistic about human nature, but how diligent are people going to be when performing those checks? How gullible are we? I think the answer is: quite gullible, and lazy. Sometimes AI holds up quite a depressing mirror to us as a society, and to the biases it tends to inherit from us."

Myth 4: Sensitive data is safer with advancing technology than with humans

The next myth on the menu had both audience and panellists split. They argued that, while in theory technology has more potential for safekeeping, people tend to find ways to subvert it. "Think strong passwords," Bastiman stated. "We all know people tend to write down their complex passwords and leave them on a post-it, or use 123456 as a phone PIN. At some point, with everything we try to do to protect data, we end up with people trying to subvert it. We already know that, with the advent of quantum computing, encrypted data is being harvested off the web for when they can crack it. We'll come up against that challenge soon."

Myth 5: Regulation can't keep up with AI and enforce the role of the human

The last myth to be busted was that regulation cannot keep up with AI and enforce the role of the human. 87% of the audience agreed with the statement, to which Tracy simply commented: "That's quite a depressing take, and I hope it's not true." McDowell commented: "Regulation changes in financial services. Lots of regulators are looking at it and the complexity it comes with; and different regions are going to be doing different things.
The UK is principle-based - so whether it's Mrs Jones answering the phone and giving you a mortgage quote, or whether it's a generative AI producing it, you'll have to abide by the same principles." Bastiman concluded: "There are times when it isn't keeping up, but it can keep up. Regulators just need to make sure that regulation avoids being too technically specific, because we're seeing that the sizes of models are changing. So if you put down regulation talking about parameters or the size of the machine, it will get out of date very quickly."
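Bastiman's point about unbalanced training data is easy to demonstrate. The sketch below is a deliberately naive toy, not anything discussed on the panel: a "model" fitted to a skewed set of hypothetical compliance decisions scores high accuracy while never catching a single case of the minority 'flag' class, which is exactly the kind of bias a headline accuracy number hides.

```python
from collections import Counter

def majority_baseline(labels):
    """The laziest possible 'model': always predict the most common class."""
    return Counter(labels).most_common(1)[0][0]

# Heavily unbalanced training labels: 95 'approve' decisions, 5 'flag'.
train_labels = ["approve"] * 95 + ["flag"] * 5
model = majority_baseline(train_labels)

# A test set with the same skew looks deceptively healthy on accuracy...
test_labels = ["approve"] * 95 + ["flag"] * 5
preds = [model] * len(test_labels)
accuracy = sum(p == t for p, t in zip(preds, test_labels)) / len(test_labels)

# ...but the minority class is never detected at all.
flag_recall = sum(
    p == t for p, t in zip(preds, test_labels) if t == "flag"
) / test_labels.count("flag")
```

Here `accuracy` comes out at 0.95 while `flag_recall` is 0.0: trained on unbalanced data, the model is biased entirely toward the majority class, which is why balanced data sets and metrics beyond accuracy matter.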
[4]
NextGen: AI: Finding the right strategies to overcome AI limitations
The panel, titled 'What are the solutions to our limitations?', was moderated by Finextra's Gary Wright. Speakers were James Benford, executive director and chief data officer, Bank of England; Kshitija Joshi, PhD, vice president (data & AI solutions), chief data office, Nomura International; Kerstin Mathias, policy and innovation director, City of London; and Ed Towers, head of advanced analytics & data science units, Financial Conduct Authority.

The discussion started with what each panellist is currently working on when it comes to AI. Towers spoke about a survey recently released by the FCA, which found that 75% of financial organisations are already using some form of AI, and 17% are already using some form of generative AI. The majority of use cases identified were in lower-materiality areas, with a lot of adoption observed in financial crime prevention and the back office.

Benford explained that the Bank of England has had an advanced analytics division for around 10 years, which has worked on about 100 projects using what would be considered traditional AI. Specific examples included machine learning for policy setting and investigating the impact of unemployment on inflation. He continued that an AI task force was set up last year and has started rolling out generative AI more broadly, particularly to transform legacy code.

Joshi explained that she was the first person hired when Nomura International set up its centralised data science team three years ago. The idea was to ensure a centralised team existed that could oversee data governance and management principles. "All AI rests on the assumption that the underlying data is of good quality. In reality, that's not really true, and we all understand that in financial services."
So the question was: how do you test for toxicity, bias, and hallucinations - and do so at scale? From Joshi's point of view, there are two distinct periods in AI adoption: before the rollout of ChatGPT and after it. "Before, if you went to stakeholders - not even about AI, but about deploying data analytics - it was a big no-no. After ChatGPT, everyone wanted to use it, and do something - anything - with AI."

The conversation then turned toward education, as Joshi explained how her team ensured the appropriate training and education was in place at Nomura International. Mathias emphasised that training was also a crucial priority at the City of London. "Our main objective is that London remains a leading financial centre," she explained. "And AI helps with that. We look at it in three buckets: one is internal policies, two is investment, and three is skills. There has been a 150-fold increase in job ads that look for generative AI and conversational AI skills in the past 24 months. There is no way those skills can be plugged just by waiting for people to come through the pipeline. So upskilling and re-skilling the existing workforce is crucial - and your data systems, and your legacy systems, are a part of that."

When it comes to addressing risk in AI models, Benford emphasised the need to build solid model and risk frameworks. "We've gone down the path of looking at all internal policies, and how to allocate resources and focus on the low-hanging fruit," he explained. "Traceability to the source document is an important guardrail. It's not just the model, it's context. It's all the data you're using to build your model on. The context is changing as the organisation's knowledge base evolves. You cannot predict how a model will respond in six months' time. Stress testing is crucial here."

Lastly, Towers stressed the importance of collaboration with regulators, and how the FCA helps address this.
"We published an AI update last year in response to a request from the government, which lays out how our current policies, like Consumer Duty, apply to AI. But it's really the time for engagement and collaboration between industry and regulators."
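Benford's "traceability to the source document" guardrail can be illustrated with a toy retrieval step. Everything below is hypothetical (the function, corpus, and document IDs are invented, and nothing here reflects the Bank of England's actual systems): the point is simply that an answer carries the IDs of the documents it was built from, and the system refuses to answer when nothing matches rather than guessing.

```python
from dataclasses import dataclass

@dataclass
class Passage:
    doc_id: str
    text: str

def answer_with_sources(query, corpus):
    """Naive keyword retrieval that keeps a pointer back to each source document,
    so every statement in the answer can be traced to where it came from."""
    terms = set(query.lower().split())
    hits = [p for p in corpus if terms & set(p.text.lower().split())]
    if not hits:
        return {"answer": None, "sources": []}   # refuse rather than guess
    return {
        "answer": " ".join(p.text for p in hits),
        "sources": [p.doc_id for p in hits],
    }

# A hypothetical two-document knowledge base.
corpus = [
    Passage("policy-2024-03", "Mortgage quotes must follow Consumer Duty rules."),
    Passage("memo-11", "Legacy code migration is handled by the AI task force."),
]
result = answer_with_sources("mortgage rules", corpus)
```

In a real deployment the keyword match would be a proper retrieval system, but the guardrail is the same: the `sources` list makes each answer auditable back to the evolving knowledge base Benford describes.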
A comprehensive look at how artificial intelligence is reshaping the banking industry, focusing on regulatory challenges, operational improvements, and the balance between innovation and customer trust.
The banking industry is experiencing a significant transformation with the integration of artificial intelligence (AI) across various operations. Industry experts highlight the potential of AI to revolutionize banking practices, from risk management to customer experience. However, this adoption comes with its own set of challenges, particularly in areas of regulation, data management, and organizational culture. [1][2]
Jonathan Ede, director of data technology at CACI, emphasizes that the true value of AI lies in its permeation throughout entire organizations. Currently, AI systems are often constrained to siloed implementations, limiting their potential. To maximize AI's impact, better integration of data and systems is crucial. [1]
Aman Luther from AFME notes a cultural shift in banks' approach to AI. Financial institutions are increasingly aware of the need for new skill sets and are adapting their models to accommodate AI integration. This includes developing new processes for assessing AI proposals, recognizing that AI requires a different approach compared to traditional tech appraisals. [1]
The panel discussions revealed mixed views on the impact of regulation on AI innovation. While some argue that regulation slows down innovation, others believe that the current regulatory proposals are pragmatic and necessary. The challenge lies in creating regulations that provide guardrails without stifling future innovations, especially considering the rapid pace of AI development. [1][2]
A recent study by Google among banking decision-makers identified two primary blockers for AI adoption: lack of clean, analysis-ready data and lack of regulatory insights. This highlights the need for improved data management practices and clearer regulatory guidelines in the AI space. [1]
AI is driving significant improvements in both operational efficiency and customer experience. In wholesale banking, AI is being used to predict and prevent failed transactions, creating substantial bottom-line benefits. In retail banking, AI is enhancing customer experiences through personalization and improved front-end systems. [1]
Despite AI's growing capabilities, the importance of human involvement and soft skills remains crucial. The concept of "human-in-the-loop" is seen as vital, particularly in areas requiring context understanding, critical thinking, and regulatory compliance. Soft skills like empathy, collaboration, and critical thinking are expected to become even more important as AI integration progresses. [3]
There's a unanimous agreement that AI can inherit biases from its training data, emphasizing the need for careful data curation and model testing. The question of data security with advancing AI technology remains debated, with experts highlighting the importance of robust safeguarding measures. [3]
Regulators are actively working to keep pace with AI advancements. The Financial Conduct Authority (FCA) found that 75% of financial organizations are already using some form of AI, with 17% utilizing generative AI. Regulators are emphasizing the need for collaboration between industry and regulatory bodies to develop appropriate frameworks for AI governance. [4]
As AI continues to reshape the banking landscape, the focus remains on balancing innovation with trust and regulatory compliance. The industry is moving towards more comprehensive AI strategies, improved data management, and enhanced collaboration between technology teams and business units. The future of AI in banking will likely see a continued emphasis on ethical AI use, regulatory alignment, and the development of AI-ready workforces. [1][2][3][4]
Reference
[1] NextGen: AI: AI's impact on regulation, operational resilience and customer experience
[2] NextGen: AI: How the intelligence revolution is driving what AI can do for banking
[3] NextGen: AI: Busting five AI myths
[4] NextGen: AI: Finding the right strategies to overcome AI limitations