3 Sources
[1]
Agents in the Loop: Rethinking Risk, Compliance and Governance with AI, by David Weinstein
This article is the third in my four-part series exploring how intelligent agents are reshaping the foundations of fintech operations. In Part 3, we turn our attention to risk, compliance and governance. As AI agents take on more responsibility - acting, coordinating and learning across workflows - they also introduce new layers of complexity. This piece outlines how fintechs can govern agent behaviour without slowing innovation. From embedded audit trails to oversight agents like the "Judge Agent," we explore what AI-native compliance looks like and how to design for it from day one.

Speed Without Supervision Is a Risk Multiplier

The most advanced AI agents don't just answer questions - they write emails, submit reports, trigger workflows, update databases and coordinate across departments. They're not just handling information; they're making decisions and taking actions. But these decisions are often made in milliseconds, based on probabilistic reasoning, shifting context and limited visibility. If an agent misclassifies a regulatory obligation, misroutes a flagged transaction, or skips escalation due to overconfidence, who takes responsibility?

Traditional governance models weren't built for this. They rely on deterministic systems and human checkpoints. But autonomous agents operate independently and at speed, making post-hoc review ineffective. What's needed is a new kind of oversight that keeps pace with the systems it governs.

Enter the Judge Agent

One promising approach is the introduction of oversight agents that actively participate in decision-making. A Judge Agent evaluates behaviour, monitors risk thresholds, enforces escalation protocols and ensures decisions stay within acceptable bounds. Rather than replacing compliance teams, the Judge Agent enhances their capacity by providing consistent, real-time oversight at scale.
As a programmable layer of operational judgment, it helps ensure decisions stay aligned with policy, risk thresholds and regulatory expectations. Oversight is no longer a matter of spot-checking results. It becomes a continuous, embedded process.

Governance by Design

Modern governance can't rely on documentation alone. Policies in PDFs and guidelines buried in onboarding decks may tick regulatory boxes, but they don't scale with the speed or complexity of autonomous systems. As AI agents begin to operate across workflows, make decisions in real time, and adapt through feedback, governance must evolve from static rules to dynamic enforcement.

That means designing systems where every action is traceable -- capturing the inputs used, the confidence levels assigned, and the reasoning behind each decision. It means surfacing decision logic in ways that reveal not just what happened, but why -- including which data was used, which constraints applied, and which alternatives were considered. And it means establishing guardrails that actively shape behaviour, detect anomalies, and mitigate risk in ambiguous or high-stakes situations.

With this approach, compliance shifts from reactive oversight to proactive infrastructure. Governance isn't an add-on -- it becomes part of the system's core architecture, embedded in how decisions are made, risks are managed, and accountability is enforced across the organisation.

A Shift in Regulatory Expectations

Regulators are no longer satisfied with documented intent. They are beginning to require real-time explainability and outcome traceability. That means fintechs must be able to show not only what happened, but why - and their systems must be able to answer detailed questions about each decision. This level of scrutiny is fast becoming a baseline expectation. Retrofitting governance is costly and complex. Building it in from the start is faster - and far more sustainable.
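The Judge Agent pattern described above can be sketched in a few lines. The sketch below is illustrative only - the class names, thresholds, and fields (`ProposedAction`, `max_amount`, `min_confidence`) are hypothetical, not from any named product - but it shows the core idea: every proposed action is reviewed against risk thresholds before it executes, and every review produces a traceable record capturing inputs, confidence and reasoning.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ProposedAction:
    """An action an operating agent wants to take (illustrative fields)."""
    agent_id: str
    action: str        # e.g. "route_transaction"
    amount: float      # monetary exposure of the action
    confidence: float  # the acting agent's self-reported confidence, 0-1
    reasoning: str     # human-readable justification


@dataclass
class JudgeAgent:
    """Oversight agent: reviews proposed actions before they execute."""
    max_amount: float = 10_000.0   # hypothetical risk threshold
    min_confidence: float = 0.85   # below this, escalate to a human
    audit_log: list = field(default_factory=list)

    def review(self, proposal: ProposedAction) -> str:
        if proposal.amount > self.max_amount:
            verdict = "escalate:amount_over_threshold"
        elif proposal.confidence < self.min_confidence:
            verdict = "escalate:low_confidence"
        else:
            verdict = "approve"
        # Every review is logged: inputs, confidence, reasoning, verdict.
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "agent_id": proposal.agent_id,
            "action": proposal.action,
            "amount": proposal.amount,
            "confidence": proposal.confidence,
            "reasoning": proposal.reasoning,
            "verdict": verdict,
        })
        return verdict


judge = JudgeAgent()
ok = judge.review(ProposedAction(
    "agent-7", "route_transaction", 450.0, 0.97, "matched prior pattern"))
risky = judge.review(ProposedAction(
    "agent-7", "route_transaction", 50_000.0, 0.99, "large transfer"))
```

Here `ok` is `"approve"` while `risky` escalates on amount, and both decisions land in `audit_log` - oversight as a continuous, embedded process rather than a spot check.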
From Human Oversight to Agent Collaboration

We're moving beyond "human-in-the-loop." In modern systems, agents increasingly monitor, supervise and even audit one another. This doesn't remove people from the process. It elevates their role. Humans set strategy and guardrails. Agents carry out execution, coordinate feedback and ensure compliance is respected in real time.

Governance becomes a distributed capability across a network of agents. Some act. Others review. Some flag anomalies. Others log decisions and prepare reports for human review. The result is not just automation. It is an ecosystem of intelligent oversight.

What Fintech Teams Can Do Today

For teams building or scaling agent infrastructure, there are concrete steps to take now to prepare for AI-native governance.

Conclusion: Rethinking Governance - From Oversight to Strategic Advantage

AI agents bring power, speed and scale. But they also raise the stakes. The solution isn't to slow them down. It's to build oversight that moves just as fast - and thinks just as clearly. Fintechs that succeed in the coming wave won't just automate. They'll govern with intent, design for transparency and adapt faster because their systems are built to explain themselves. In this new operating model, governance is not a drag on speed. It's what makes speed sustainable - and trust scalable.
[2]
Growth of AI Agents Put Corporate Controls to the Test | PYMNTS.com
The technology sector's famous phrase, "Move fast and break things," just got a shot in the arm with the advent of autonomous artificial intelligence agents.

"This isn't a technical upgrade; it's a governance revolution," Trustly Chief Legal and Compliance Officer Kathryn McCall told PYMNTS during a conversation for the June edition of the "What's Next in Payments" series, "What's Next in Payments: Secret Agent."

AI agents are now not only recommending products but executing financial transactions, and the payments industry is racing to integrate these software agents capable of acting autonomously on a user's behalf. While the benefits could be immense, the risks are even starker. McCall is one of a growing number of voices urging restraint, forethought and rigorous systems thinking.

"You're messing with people's money here," she said. "This is a lot different from using an AI agent to plan your vacation in Paris."

AI agents have evolved from intelligent helpers to critical decision makers inside consumer banking flows. They aren't just suggesting purchases -- they're booking flights, sending payments and signing documents. But as McCall pointed out, this leap has implications for privacy, security and legality.

The nature of these agents introduces complexities beyond those of traditional software-as-a-service (SaaS) platforms. Where legacy systems typically had deterministic behaviors with known attack surfaces, agentic AI is non-deterministic by design. That means unpredictable outputs, evolving capabilities and new security considerations such as prompt injection, adversarial attacks and data leakage risks, just to name a few.

"[AI agents] can take actions on behalf of users or systems," McCall said.
"They can execute tasks, write code, make API calls." That's the problem. Each new capability increases the "blast radius" if something goes wrong.

At Trustly, McCall isn't just sounding the alarm. She's drafting the blueprint for how to build internal controls before regulators inevitably catch up, proposing what she calls "bounded autonomy" - a principle rooted in layered governance, precise scoping and the preservation of human agency at crucial decision points.

"Can your agent initiate invoice creation but not approve disbursement without human review?" McCall asked. "What's the scope? What are they allowed to do and what are they not allowed to do?"

The key to this is infrastructure. McCall said she recommends isolated containers for agent operations, sandbox environments, time-based privileges and hard kill switches to "pause agent action at critical thresholds." That includes high-value transactions, cross-border transfers and new vendor engagements -- all of which carry financial and legal risk.

"We've got to retain human accountability," McCall said. "You've got to treat these AI agents as non-human actors with unique identities in your system. You need audit logs, human-readable reasoning and forensic replay."

When asked whether the current regulatory environment is equipped to deal with these advances, McCall's answer was unequivocal. "No, there really isn't anything that's emerging [yet]," she said. "Most things out there are bespoke and patchwork."

But that doesn't mean companies are off the hook. If anything, McCall said she believes they have a greater responsibility to anticipate the compliance issues ahead. "Because it's a new thing, people think there are no regulations around it," she said. "There are regulations -- you just have to think about how they apply."
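McCall's "bounded autonomy" controls - precise scoping, time-based privileges and a hard kill switch - map naturally onto a permission layer wrapped around an agent. The sketch below is a minimal illustration under assumed names (`BoundedAgent`, the action sets, the TTL); it is not any vendor's actual implementation, but it shows the shape of the policy: some actions run autonomously, gated actions need human approval, everything else is denied by default.

```python
import time


class BoundedAgent:
    """Permission wrapper enforcing a bounded-autonomy policy (illustrative)."""

    # Actions the agent may take on its own.
    AUTONOMOUS = {"create_invoice", "draft_report"}
    # Actions that always require human sign-off, per McCall's examples.
    HUMAN_GATED = {"approve_disbursement", "cross_border_transfer",
                   "new_vendor_engagement"}

    def __init__(self, agent_id: str, privilege_ttl_seconds: float):
        self.agent_id = agent_id
        # Time-based privileges: permissions expire rather than persist.
        self.expires_at = time.monotonic() + privilege_ttl_seconds
        self.killed = False  # hard kill switch

    def kill(self) -> None:
        """Pause all agent action at a critical threshold."""
        self.killed = True

    def can_perform(self, action: str, human_approved: bool = False) -> bool:
        if self.killed or time.monotonic() > self.expires_at:
            return False               # kill switch thrown or privileges expired
        if action in self.AUTONOMOUS:
            return True
        if action in self.HUMAN_GATED:
            return human_approved      # preserved human agency at key decisions
        return False                   # default-deny: anything unscoped is out


agent = BoundedAgent("agent-42", privilege_ttl_seconds=3600)
agent.can_perform("create_invoice")                             # True
agent.can_perform("approve_disbursement")                       # False
agent.can_perform("approve_disbursement", human_approved=True)  # True
agent.kill()
agent.can_perform("create_invoice")                             # False
```

The default-deny branch is the point of "What's the scope?": an action absent from both sets simply cannot run, no matter how capable the underlying model is.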
From PCI DSS to the Gramm-Leach-Bliley Act to cross-border data flows governed by GDPR and anti-money laundering statutes, McCall said the landscape is already filled with obligations. It's just a matter of seeing them through a new lens. "You trigger a lot of compliance and data privacy rules when you do that," she said, referring to AI agents that access bank accounts or send international payments.

McCall said automation should never be mistaken for abdication of responsibility. "Would I let a junior lawyer operate unsupervised in certain areas?" she asked. "No. You don't let your AI do that either -- unless it's been audited, sandboxed, logged and tightly governed." She also flagged the importance of ensuring that AI systems do not quietly evolve into systems of opaque decision making, warning that "explainability must be designed in, not patched on."

As companies pursue agentic AI, McCall's insights offer a kind of unofficial field manual for the C-suite. For legal and compliance leaders especially, the time to take ownership of this transformation is now. "The chief legal officer's role becomes even more important to translate complex AI behavior and inform the organization about legal exposure, mitigation strategies and the ethical boundaries we must respect," she said.

In an age of intelligent machines, McCall is making the case that intent, transparency and recoverability must remain core business values -- because in the end, even when an agent acts, it's still the company that must answer for it.
[3]
Payments Execs Say AI Agents Give Payments an Autonomous Overhaul
The latest wave of artificial intelligence (AI) isn't about better chatbots or faster content generation. It's about delegation.

The emergence of agentic AI is the story of an innovation moving beyond mere automation. These AI agents are being built into autonomous software workers capable of making decisions, executing tasks and collaborating in ways that were once the exclusive domain of humans. And in financial services, payments and B2B commerce, the adoption of agentic AI is already underway.

Recent conversations with payments industry leaders for the "What's Next In Payments: Secret Agents" series reveal four defining themes in how the marketplace is thinking about the impact of agentic AI applications across payments.

The AI agents are here. They don't sleep. They don't get tired. And, increasingly, they don't need to be told what to do next when it comes to solving real-world business problems. The only question left is whether companies are ready to hire them.

Across sectors, AI is evolving from passive support (e.g., content generation, search) into autonomous agents that execute complex workflows, make decisions and interact with systems in real time. These aren't just advanced macros or rule-based systems. Today's AI agents observe, learn, act and reason, often without direct supervision.

"We now actually think of agents as one of the boxes in our org chart," i2c CEO and founder Amir Wain told PYMNTS, noting that it's a philosophical shift. "It's not about being fancy. It's about being effective," Wain said.

These AI agents are showing the most potential across contact centers, developer workflows, payment orchestration, underwriting systems and fraud detection units.
"We're quite bullish on agentic checkout and agentic commerce," Nabil Manji, SVP head of FinTech growth and financial partnerships at Worldpay, told PYMNTS. "We're actively using AI to improve our customer onboarding and underwriting journey -- something the whole financial services sector has been trying to crack for years."

With great autonomy comes great responsibility. As AI agents gain power, the accompanying risks grow, too. Unlike traditional AI systems with deterministic outputs, agentic AI is probabilistic, non-linear and unpredictable by nature. This can increase the governance burden.

"This isn't a technical upgrade. It's a governance revolution," Kathryn McCall, chief legal and compliance officer at Trustly, told PYMNTS. "You're messing with people's money here."

"You've got to treat these AI agents as non-human actors with unique identities in your system. You need audit logs, human-readable reasoning and forensic replay," McCall added. "Can your agent initiate invoice creation but not approve disbursement without human review? What's the scope? What are they allowed to do and what are they not allowed to do?"

Forward-looking companies are embedding trust into their tech DNA, building in guardrails like explainability, human oversight and ethical fail-safes to avoid costly missteps and ensure accountability.

"Payments is a zero-error industry," Boost Payment Solutions Chief Operating Officer Illya Shell said. "Taking calculated risks is okay. But they've got to be very thoughtful and meticulous."

After all, whether it's autonomous B2B payments or AI-powered budget assistants, success hinges on ensuring end users (and internal teams) believe in the system.

But the shift isn't plug-and-play. Agentic AI is exposing serious infrastructure limitations, especially in payments and finance. AI agents demand real-time, scalable, secure infrastructure.
Legacy systems can't handle thousands of concurrent autonomous agents acting on APIs, analyzing data and triggering actions across systems. "People underestimate what it takes to support this at scale," Edwin Poot, chief technology officer at Thredd, said. "You'll deploy agents per transaction -- this will require changes to the infrastructure."

Companies are rebuilding platforms for agent orchestration, simulation environments, serverless compute, federated data access and tokenization. Stax Chief Technology Officer Mark Sundt told PYMNTS that if agentic AI is the engine, orchestration is the transmission. Without a central conductor, even the most capable agents act in isolation.

Agentic AI is being battle-tested first in the payments and commerce ecosystem, especially for fraud detection, cross-border transactions, B2B automation, customer engagement and personalization. "Cross-border payments aren't optimized," Boost's Shell said. "But agentic AI can help us streamline the front end to ensure we know exactly who's paying whom, and that the payments are geared in the right way to get out the door quickly."

Agentic AI is also forcing a complete rethink of how online commerce happens. These agents don't click on banner ads or fall for marketing copy. They optimize for price, shipping and past behavior. "When AI agents shop on your behalf, they don't see advertisements the way humans do," said i2c's Amir Wain. "They just look for real value."

This space offers rich data, real-time needs, and high-stakes outcomes -- making it the ideal launchpad for agentic AI's full capabilities.
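Sundt's engine-and-transmission analogy can be made concrete with a minimal orchestrator sketch. Everything here is hypothetical (the `Orchestrator` class, the capability names, the handler signatures); the point is only to show why a central conductor matters: tasks are routed to whichever agent registered the matching capability, and anything unroutable is surfaced rather than silently dropped.

```python
from typing import Callable


class Orchestrator:
    """Central conductor routing tasks to registered agents (illustrative)."""

    def __init__(self) -> None:
        # capability name -> agent handler
        self.agents: dict[str, Callable[[dict], dict]] = {}

    def register(self, capability: str, handler: Callable[[dict], dict]) -> None:
        self.agents[capability] = handler

    def dispatch(self, task: dict) -> dict:
        handler = self.agents.get(task["capability"])
        if handler is None:
            # Without a conductor an unroutable task would be lost in isolation;
            # here it is surfaced for human review instead.
            return {"status": "unhandled", "task": task}
        return {"status": "done", "result": handler(task)}


orch = Orchestrator()
# Two toy agents registered under hypothetical capability names.
orch.register("fraud_check",
              lambda t: {"risk": "low" if t["amount"] < 1000 else "review"})
orch.register("fx_quote", lambda t: {"rate": 1.08})

r1 = orch.dispatch({"capability": "fraud_check", "amount": 250})
r2 = orch.dispatch({"capability": "settlement", "amount": 250})  # no such agent
```

In this sketch `r1` completes through the fraud-check agent while `r2` comes back `unhandled` - the "transmission" deciding which engine turns, and flagging the gap when none can.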
AI agents are transforming fintech operations, particularly in risk management, compliance, and governance. This shift introduces new complexities and challenges, requiring innovative approaches to oversight and control.
The fintech industry is witnessing a significant transformation with the integration of AI agents into core operations. These intelligent systems are not just answering questions but are actively making decisions, writing emails, submitting reports, and coordinating across departments [1]. This shift is particularly evident in risk management, compliance, and governance, where AI agents are taking on more responsibility and introducing new layers of complexity.
The speed and autonomy of AI agents present unique challenges to traditional governance models. These agents operate independently and at high speeds, making post-hoc review ineffective [1]. The fintech industry is now grappling with questions of responsibility and accountability when AI agents make decisions in milliseconds based on probabilistic reasoning and limited visibility.

To address these challenges, the industry is developing new oversight mechanisms:
- Judge Agents: These are oversight agents that actively participate in decision-making, evaluating behavior, monitoring risk thresholds, and enforcing escalation protocols [1].
- Governance by Design: Companies are shifting from static rules to dynamic enforcement, designing systems where every action is traceable and decision logic is transparent [1].
- Bounded Autonomy: This principle involves layered governance, precise scoping, and preserving human agency at crucial decision points [2].

Regulators are now demanding real-time explainability and outcome traceability from fintech companies. This shift requires systems to provide detailed insights into decision-making processes, including data used, constraints applied, and alternatives considered [1].
The integration of AI agents is reshaping organizational structures. Some companies are now considering AI agents as part of their org chart [3]. This shift is elevating the role of humans to setting strategy and guardrails while agents handle execution and real-time compliance [1].
Fintech teams looking to implement AI-native governance can take several steps: scoping precisely what each agent is and is not allowed to do, running agents in isolated containers and sandbox environments, granting time-based privileges with hard kill switches, assigning agents unique non-human identities, and maintaining audit logs with human-readable reasoning and forensic replay [2].
As AI agents continue to evolve, they are expected to play increasingly significant roles in areas such as fraud detection, cross-border transactions, B2B automation, and customer engagement [3]. The success of these implementations will depend on the industry's ability to balance innovation with robust governance and maintain trust among users and regulatory bodies.