7 Sources
[1]
Unsecured AI agents expose businesses to new cyberthreats
Whether starting from scratch or working with pre-built tools, organizations must build security, interoperability and visibility into their AI agents.

The modern workforce is undergoing a rapid transformation. Organizations are deploying artificial intelligence (AI) agents across an increasing number of business functions - from development to sales and customer service, research, content creation and finance. These autonomous AI systems can make decisions and create plans to achieve complex tasks with minimal supervision by people. And companies are quickly moving these AI agents from prototype into production.

As a result of this accelerated deployment, the volume of non-human and agentic identities is now expected to exceed 45 billion by the end of this year. That's more than 12 times the approximate number of humans in the global workforce today. Despite this explosive growth, only 10% of respondents to an Okta survey of 260 executives report having a well-developed strategy for managing their non-human and agentic identities. This poses a significant security concern, considering 80% of breaches involve some form of compromised or stolen identity. And generative AI escalates this threat by enabling threat actors to conduct even more sophisticated phishing and social engineering attacks.

As businesses race to deploy agents, it's critical they establish identity controls and prioritize security from the start. This will help organizations avoid the significant risks of over-permissioned and potentially unsecured AI agents. To protect against the speed and complexity of AI, businesses need a new approach: an identity security fabric. This new category secures every identity - human, non-human and agentic - across every identity use case, application and resource. This approach is key to protecting businesses in a future driven by AI.

Threat actors have been quick to leverage AI for malicious activity, using it to make existing threats more dangerous and to manufacture new, more personalized ones. Generative AI is already powering malware, deepfakes, voice cloning and phishing attacks. The advent of AI agents introduces a new layer of complexity to the enterprise security landscape. Trained on valuable and potentially sensitive company data, these agents can become new attack vectors if they're not built, deployed, managed and secured properly. Organizations are incentivized to grant agents access to more data and resources to make them more effective, but with expanded access comes increased business risk. Threat actors could manipulate AI agent behaviour through a prompt injection attack, for example, where they use probing questions to attempt to trick the agent into sharing privileged information. The more access an AI agent has, the easier it is for threat actors to infiltrate a company. This can potentially lead to data leaks, unauthorized actions or a full system compromise.

Because AI agents need to access user-specific data and workflows, each one requires a unique identity. Without sufficient controls, these identities stand to have too much access and autonomy. As "human" as these agents may sometimes seem, managing their identity is fundamentally different from managing that of a human user. Non-human and agentic identities have several distinctions. Today, when a new employee onboards at a company, there's a clear starting point for when that user needs access to company applications and data. They can use passwords, biometrics or multi-factor authentication (MFA) to log in to an account and validate who they are. But AI agents can't be authenticated like human employees. Instead, they rely on things like application programming interface (API) tokens or cryptographic certificates to validate themselves.

The lifecycle of an AI agent is also uniquely non-human. Agents have dynamic lifespans, requiring extremely specific permissions for limited periods of time and often needing access to sensitive information. Organizations must be prepared to rapidly provision and de-provision access for agents. Agents can also be more difficult to trace and log than their human counterparts, which complicates post-breach audits and remediation efforts. These factors collectively make it critical for security teams to govern AI agents and their permissions carefully.

Most organizations are still early in their agentic AI journeys. This presents an opportunity to establish proper identity and security protocols from the outset. For organizations deploying third-party agents, there's no better time than during adoption to lay the groundwork for secure identity. When building agents from the ground up, identity should be prioritized during development. Whether an organization is starting from scratch or working with pre-built tools, there are several key identity considerations for autonomous AI agents:

* The autonomous nature of AI agents means they can chain together permissions to access resources they shouldn't. Security teams need granular access policies to ensure agents aren't sharing any sensitive information.
* AI agents should only have access and authorization to resources for certain periods of time.
* Organizations must ensure AI agents align to standards for interoperability. Agents are more powerful when they can connect with other agents and AI systems, but teams can't sacrifice security along the way. Standards like Model Context Protocol (MCP) provide a framework for agents to securely connect to external tools and data sources.
* Without clear insights into the actions and access patterns of these agents, anomalous behaviours can go unnoticed, potentially leading to security vulnerabilities. To mitigate these risks, organizations need comprehensive monitoring and auditing capabilities to track agent activity and maintain control.

Organizations are still only scratching the surface of the agentic AI future. And it's important to remember that building and deploying an AI agent is only the first step in the security journey. As the number of use cases continues to increase, so will the responsibilities of organizations' security teams. It takes an ongoing commitment to visibility, governance and control to ensure AI agents are working securely and as intended. With a strong foundation of secure identity, organizations can begin safely scaling their agentic deployments and empower more users to reap the benefits and unlock the business potential of AI tools.
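To make the authentication distinction above concrete, here is a minimal sketch of an agent authenticating with the OAuth 2.0 client-credentials grant (token-based rather than password/MFA-based), with a short lifetime so de-provisioning happens by simply letting the token lapse. The endpoint URL, client ID, and scope names are hypothetical placeholders, not a real Okta configuration.

```python
# Hedged sketch: an agent authenticating with an API token instead of a
# human login. Issuer URL and scope names below are assumptions.
import time
import requests

TOKEN_URL = "https://idp.example.com/oauth2/token"  # hypothetical issuer

def fetch_agent_token(client_id: str, client_secret: str, scope: str) -> dict:
    """Request a short-lived, narrowly scoped access token for an agent."""
    resp = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
            "scope": scope,  # least privilege: request only what the task needs
        },
        timeout=10,
    )
    resp.raise_for_status()
    token = resp.json()
    # Track expiry so the agent can be "de-provisioned" by refusing renewal
    # once the token lapses, rather than by deleting a standing account.
    token["expires_at"] = time.time() + token.get("expires_in", 300)
    return token
```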
[2]
"AI security is identity security" - how Okta is weaving agents into the security fabric
It's pretty clear now that the next direction for AI is agents, with recent Okta research claiming the technology is now in use by a staggering 91% of organizations in some capacity. Despite this widespread use, though, only 10% of those surveyed reported having a 'well developed strategy or roadmap for managing non-human identities' - highlighting the worrying security deficit left as companies rush to make the most of new technology.

But Okta has a mission to address this, and at its Oktane 2025 event, the company, together with Auth0, is introducing a new set of security principles to 'seamlessly integrate' AI agents into the identity security fabric for end-to-end security - so organizations can take advantage of the productivity gains without fear of exposure. TechRadar Pro spoke to Auth0 President Shiv Ramji and heard from Okta CEO and Co-founder Todd McKinnon to find out more.

"Everyone is talking about AI, AI, AI - Agents are all the rage," Ramji points out, but he noted very few companies have sufficient guardrails against potential breaches. The purpose of these key new features is to unify security for the new age of AI agents, all within the Okta platform - and they come in three forms.

The first is 'Okta for AI Agents', which allows for the seamless integration of AI agents into the identity security fabric. This helps users identify potential risks regarding their agents and provides visibility into their activity - all in one centralized platform with controls to manage access and automated governance. There are four facets to this: detection, provisioning, authorization, and governance. With Identity Security Posture Management (ISPM), organizations can discover any potential service account risks, giving them a chance to be proactive against the threat. Doing pretty much what they say on the tin, provisioning and authorization allow users to classify risks for non-human identities (NHIs) and enforce security policies with the principle of least privilege - giving AI agents access only when they need it. Governance protocols look to control the risk of 'agent sprawl' - where agents proliferate without solid frameworks or oversight. Tracking these NHIs is made a whole lot easier with Okta Identity Governance, which provides 'comprehensive audit trails and activity logging for all agent actions and decisions.' This is important, particularly given that agents have become a major blind spot in many cybersecurity defenses, with an inherent lack of security intuition and, of course, no cybersecurity training.

The second, and perhaps most impactful, feature is 'open, industry-leading standards for AI agents' with Cross App Access (XAA). This extends OAuth to secure app-to-app interactions across the organization, and is supported by industry leaders like Google Cloud and Salesforce. "It's focused on security and access," explains McKinnon. "It lets IT and security teams set the access policies upfront for these AI agents, which makes it open and transparent and visible to everyone involved." This is particularly important to elevate the industry standard and establish protocols to keep security teams ahead of threat actors across the world.

This open standard is 'the key to shaping the future of identity in the age of AI'. "Open industry-leading standards like Cross App Access help everything in your fabric from the identities down to the resources, making sure they all speak the same language - and the Auth0 platform makes it incredibly easy to build fabric-ready agents and agentic systems," McKinnon says. XAA is set to become available with 'out of the box support in Auth0, enabling B2B SaaS developers to build applications and AI tools that can natively participate in the protocol.'

Verifiable Digital Credentials (VDC), planned to become available in 2027, are aimed at establishing trust in AI agents and combating AI fraud. This enables developers to build AI agents with security front and centre, enabling organizations to 'issue and verify tamper-proof, reusable identity data - like government IDs, employment records, or certifications.'

"The thing about AI agents is that they're always on," Ramji explains. "They can take any prompt and can go access any information. So there are a lot of security concerns with that." He gives an example of an agent tasked with booking travel. You give the agent the dates, location, budget, and preferences. It might not seem too complicated, but to do this, the agent needs access to a swath of personal information - from calendar access to credit cards and hotel rewards programs - and permissions to action the bookings. "When I provision that agent to do that, first the agent needs to know that it's doing this on my behalf. So I need to be authenticated, and of course the agent has to be authenticated too." This needs to be specific and fine-tuned. Your agent might need your credit card to book the hotel, but it doesn't (and shouldn't) know how much money you have or your spending history.

"Whether you're building an agent or you're building agentic services, which is something that an agent talks to, you can make sure that they're fabric ready out of the box with the right levels of security and the right level of visibility," McKinnon announces.

These new features were well received at Oktane. James Simcox, CTO of fintech company Equals Money, told us he was most excited about Okta for AI Agents so that his employees can safely experiment with new tools. "We use AI agents internally ourselves right now, and there's some we have that are approved," he explains. "But I also know that our staff are doing lots of things they're not supposed to be doing because they found this cool AI tool on Reddit - they really want to use it. And we don't know they're using it, right? So being able to report on that is really important for us."

The overarching message from Okta is: security, visibility, and governance - and the hope for these new features is not just to protect customers, but to elevate the security posture of the whole industry and beyond.
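The article doesn't publish XAA's wire format, but the idea of moving app-to-app access decisions from individual apps to the identity layer can be sketched with the standard OAuth 2.0 Token Exchange (RFC 8693) shape, used here purely as an approximation. URLs, audience, and scope names are hypothetical.

```python
# Hedged sketch in the spirit of Cross App Access (XAA): an agent trades its
# own token for one scoped to a target application, so the enterprise IdP can
# enforce policy centrally. Not Okta's actual XAA implementation.
import requests

IDP_TOKEN_URL = "https://idp.example.com/oauth2/token"  # hypothetical identity layer

def exchange_for_cross_app_token(agent_token: str, target_app: str) -> str:
    """Exchange the agent's token for one valid at another application."""
    resp = requests.post(
        IDP_TOKEN_URL,
        data={
            "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
            "subject_token": agent_token,
            "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
            "audience": target_app,  # e.g. "https://crm.example.com"
            "scope": "crm.read",     # least privilege for this single hop
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]
```

Because the decision happens at the identity layer, IT can pre-approve the integration once, which is how the protocol reduces repeated consent prompts.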
[3]
Companies are sleepwalking into agentic AI sprawl
Agentic AI is multiplying inside enterprises faster than most leaders realize. These intelligent agents can automate processes, make decisions, and act on behalf of employees. They're showing up in customer support, IT operations, HR, and finance. The problem? One rogue agent with access to your ERP, CRM, or databases could wreak more havoc than a malicious insider. And unlike a human threat, an agent can replicate, escalate, and spread vulnerabilities in seconds. The business benefits are real, but many organizations are rushing ahead without the foundations to contain risk. In chasing speed, they may be trading innovation for unprecedented security threats, runaway costs, and enterprise-wide crises.

The illusion of AI readiness

Leaders often believe they're ready for AI adoption because they've chosen the "right" model or vendor. But readiness isn't about software; it's about infrastructure. While many organizations are still stuck in "experimentation mode," the most advanced players are moving aggressively. They are building agent-first systems, enabling machine-to-machine communication, and restructuring their APIs and internal tooling to serve intelligent, autonomous agents -- not humans. There are four phases to our AI Maturity and Readiness model: Exploration & Ideation, Efficiency & Optimization, Governance & Control, and finally Innovation & Transformation. To support agents responsibly, and reach the final phase of maturity, organizations need:

* Governance: clear policies and oversight
* Discoverable APIs: machine-readable blueprints, not PDFs
* Event-driven architecture: so agents react in real time
* Proactive controls: rate limits, analytics, and monitoring from day one

Without these, AI can't deliver value -- only vulnerability. And one rogue agent can quickly put a company out of control unless the right set-up is in place.

The rogue agent problem

It's not the number of agents that matters. It's their scope. Imagine a developer creating an agent with broad access across CRM, ERP, and databases. That single agent could be repurposed into multiple use cases -- like a Slack bot -- turning convenience into a critical vulnerability. This is the new insider threat: faster proliferation, more connections, and less visibility.

An identity crisis at machine speed

Another overlooked challenge is identity. Human and application identities are well understood, but agent identities are new and unsettled. Today, enterprises simply can't securely manage millions of agent identities in real time. Standards are still catching up, leaving organizations exposed. And when credentials leak at machine speed, the damage can be immediate and catastrophic. Best practices are emerging: avoid hardcoded credentials, scope access tightly, and ensure revocations cascade across systems. But most companies aren't there yet.

Agent sprawl and exploding bills

Even without breaches, costs can spiral. Agents are easy to create but hard to track. Teams spin them up independently, leading to overlaps, redundancies, and runaway API calls. In some cases, agents loop endlessly, overloading systems and sending cloud bills skyrocketing. This isn't a minor side effect; it's a governance failure. Guardrails like quota enforcement, usage analytics, and rate limiting aren't optional extras. They're the only way to keep systems and budgets intact.

APIs: A weak link in the agentic AI chain

Every AI agent depends on APIs. Yet most APIs weren't built for autonomous machines; they were built for developers. Without governance, authentication breaks down, rate limits vanish, and failures multiply. The solution is centralized API management. Gateways that enforce consistent authentication, authorization, and logging provide the predictability both humans and agents require. Without this, agents are flying blind.

Autonomy vs. control

Agentic AI's promise is autonomy: self-directed systems that can take action without human oversight. The model that works is borrowed from platform engineering. Over the last decade, many companies have adopted platform teams to provide standardized, compliant tools that empower developers without sacrificing control. Agentic AI requires the same approach: centralized, compliant platforms that provide visibility and security while allowing teams to innovate.

Building the guardrails: Agent management and protocols

The path to a secure and effective agentic future requires dedicated solutions. Centralized AI agent management is paramount. This includes AI gateways, which control agent API calls, enforce security rules, and manage rate limiting to prevent system overload. It also involves agent catalogs, searchable directories that list every agent, its function, owner, and permissions, preventing redundant development and providing a clear map for security and compliance teams. Monitoring and observability dashboards are crucial for tracking agent activity and flagging unusual behavior. To address the inherent chaos of unstructured inter-agent communication, the Agent-to-Agent (A2A) protocol, an open standard introduced by Google, is vital. A2A brings structure, trust, and interoperability by defining how agents discover each other, securely exchange information, and adhere to policy rules across diverse environments. Platforms like Gravitee's Agent Mesh natively support A2A, offering centralized registries, traffic shaping, and out-of-the-box security for agent fleets.

The human dimension

Technology isn't the only barrier. There's a cultural one, too. Many employees are already experiencing "transformation fatigue" from years of digital change initiatives. If agentic AI is rolled out without trust, transparency, and training, adoption will falter and resistance will grow. Leaders must strike a balance: make AI useful at the frontline while ensuring compliance at the center. That alignment between executive mandate and employee ownership will determine whether deployments succeed or collapse.

Wake up before the breach

Agentic AI isn't on the horizon -- it's already multiplying inside your company. Without governance, observability, and identity controls, organizations risk trading short-term productivity for long-term crises. The companies that succeed won't be the fastest to deploy agents. They'll be the ones that deploy them responsibly, with architectures built for scale, safety, and trust. The choice is clear: wake up now, or keep sleepwalking until the wake-up call comes in the form of a breach, a blown budget, or a board-level crisis.

Gravitee is hosting an A2A Summit for leaders navigating agentic AI on November 6, 2025, in NYC, in partnership with The Linux Foundation. The event will explore the future of agent-to-agent (A2A) orchestration and autonomous enterprise systems, bringing together technology leaders from Gartner, Google, McDonald's, Microsoft and others to provide actionable insights to help organizations tackle agent sprawl and unlock the full potential of AI-driven decision-making. Learn more here.

Rory Blundell is CEO at Gravitee.
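The quota-enforcement and rate-limiting guardrails this piece calls for can be illustrated with a minimal token-bucket limiter, the kind of check an API gateway might apply per agent identity to stop the endless-loop scenario described above. This is an in-memory, single-process sketch for illustration, not a production gateway; the agent ID below is hypothetical.

```python
# Minimal sketch of a per-agent token-bucket rate limiter.
import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec      # sustained calls allowed per second
        self.capacity = burst         # short bursts tolerated up to this size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                  # reject: agent exceeded its quota

# One bucket per agent identity, so a looping agent throttles only itself.
buckets = {"crm-summarizer-agent": TokenBucket(rate_per_sec=5, burst=20)}

def gateway_check(agent_id: str) -> bool:
    bucket = buckets.get(agent_id)
    return bucket.allow() if bucket else False  # unknown agents are denied
```

Denying unknown agents by default also doubles as a crude agent-catalog check: anything not registered in `buckets` never reaches a backend.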
[4]
AI security is ID security - why Okta CEO Todd McKinnon says you can't succeed in one without the other
I think it's a safe bet to say that Todd McKinnon, CEO of identity management specialist Okta, is not alone in this fear of missing out. It's almost certainly not a healthy state of mind to have, but given the AI hype cycle of the past 18 months or so, it's probably unavoidable in a lot of cases. And it goes on, as he explained at last week's Oktane user conference:

I feel like I should just be doing more. I should be doing more with Okta. I should be leading. We should be adopting more AI and we should be putting in our products more and more and more. I feel like if I haven't built a company worth $1 trillion or if I haven't built a $500 billion data center, it's like, what am I doing with my life?

Having struck a note of introspection, McKinnon has thought through his situation and come up with a personal diagnosis:

I thought about what's going on here? What's the tension? And it's really this - Okta has spent the last few years on this journey to be the most secure company in the world. That's really driven our priorities and what we focused on and what we've invested in. And we have to figure out how we do both, how we innovate with AI and how we continue on this journey to be the most secure company in the world. It's a tension that we all face every day.

Balance is key, he suggests, as it is in security matters:

It's easy to fall in the trap of one extreme, either complete lockdown mode and totally focused on security, or the other way where you're being fast and loose and innovating at all costs. We all know that we have to do both. We have to balance. We have to strike the right balance. We have to innovate and be secure. Every company struggles with this - don't feel bad about it, don't feel guilty. We have to figure out how to do both.

Okta has had what he calls "this key revelation" which he pitches as "the unlock to both" - the Okta Secure Identity Commitment, formally launched two years ago and now with "2 million hours in two years" when measured in "blood, sweat and tears" put into it:

The Okta Secure Identity Commitment has four pillars, from building industry best products and making sure those products are secured by default, to hardening our corporate infrastructure, making sure it's the most secure in the world, to championing customers' best practices, because if it doesn't all work for you and it's not easy to adopt and deploy and get value from, it doesn't work. And finally, to elevate the whole industry in the fight against these attacks.

And the nature of those attacks continues to evolve, with the rise of agentic AI as the latest front in the ongoing security battle. McKinnon points to a recent breach of an AI agent to make his point:

This AI agent was used to automate marketing. So companies use this agent and it sat on their website and it helped prospects learn about the company and create sales leads and automate the marketing process. The company that builds this agent was hacked and the hackers got the access tokens that this agent used to connect to the SaaS applications of hundreds and hundreds of companies - hundreds and hundreds of SaaS applications at hundreds and hundreds of companies.

Okta itself is a customer of the unnamed company in question, he notes, adding that the firm was not impacted by the recent breach. But he adds:

It is an example of what can happen in our industry without the right security for AI agents. Think about this breach. This is an AI agent. If every agentic system has breaches like this, AI is not going to reach its full potential. It's not going to happen. People are going to be scared of it, companies are going to be afraid to adopt it. We have to fix this problem. We have to elevate the industry.

There has to be some re-thinking going on, he suggests:

This AI agent and many other AI agents, they are a powerful new identity type. They can act independently on their own or on behalf of a user or a team or a company. They can access tools, applications, data. They can plan and complete tasks on their own. They're kind of like a piece of software, kind of like a system account, kind of like a person - somewhere in between. And the pace here of innovation is absolutely stunning. We all see it. So it's not surprising that many [companies] are making AI agents the #1 priority in their entire technology investment. And now these AI agents and the potential here and the potential benefits, they are getting very, very powerful. It's happening very quickly. If you think about just five years ago, the complexity of a task that an agent could complete would be something that would take you about nine seconds. Think about adding the last sentence to your e-mail. Now, just five years on, it's dramatically different. AI agents can complete tasks on their own that would take you two hours. Think about a medium-complexity support issue, where you have to look at the support database and interact with the customer and then solve that issue.

It all comes back to data, he posits:

Key here, as this improvement happens, is that agents need access to more and more data. More and more data, more and more access, which means it's very important that these AI agents have an identity in the sense we talk about identities. And that means that without identity security, AI security collapses.

There's enormous risk here, warns McKinnon:

This isn't some abstract concept that a CEO of an identity company is trying to scare you about. This is happening now. The risk is real now, and we're seeing instances of this every day. One of the world's best-known restaurant chains implemented an AI agent. This agent sits on their website and helps job applicants who want jobs at the restaurants learn about the positions. The person gives information to the agent, the agent lets the person apply for a job, automating this important process. It's an important process - this company needs great people to work in their restaurants. They implemented the agent in such a way that attackers could trick it into disclosing the password for the back-end administrator account that the agent connected to. And guess what the password was...

But the risk factor isn't doing anything to slow down the pace of AI adoption or lessen the FOMO among senior decision-makers, reckons McKinnon, and that's putting pressure on tech staff:

The CEO of this company, the Board of Directors of this company, is pushing the team to adopt AI. 'What are we doing with AI?' Has anyone heard this? 'What are you doing with AI? Adopt AI!' And so these hard-working, smart people built this great AI agent, and they probably took it to a meeting and said, 'Look what we have'. So this team took what they had and the boss said, 'Put it in production. Now'. And the team looks at each other like, 'I'm not sure it's ready'. 'What do you mean it's not ready? We're going AI!' So they put it in production. And so it's not a surprise that this happens. And now the threat actors have access to 64 million records about chats and conversations and personal information.

So what's to be done? Well, from Okta's point of view, the answer lies in the direction of its Identity Security Fabric offering. McKinnon explains:

It transcends previous identity categories. The goal here is very simple. The goal here is zero identity-based attacks. It's a unified approach that's deployed across every identity type - employees, customers, non-human identities, partners, contractors - and every identity use case - governance, privileged access management, access management, threat protection, posture management. These are not individual products. They're really features of a bigger category, and it's integrated across every resource - apps, infrastructure, databases, APIs, everything. No gap, no wedge to sneak into.

At present there's a particular focus on agentic security, it seems:

For us, securing AI agents is just like securing any other type of identity, and it's what we were built to do.

McKinnon picks out three features for attention:

The first is how you can bring your Identity Security Fabric to life by bringing AI agents into the Okta platform. Second is how to strengthen the Identity Security Fabric with open industry-leading standards for AI agents. And third, how you can easily build Fabric-ready agents with the Auth0 platform.

The platform nature of the offering is also important, as McKinnon pitches it:

There's a couple of really pretty misunderstood things about identity. The first one is that the amount of complexity at customer sites around the number of identity tools, it's quite staggering. Even a small company has 20, 30, 40, 50 different identity vendors. There's tremendous value in consolidating that and normalizing that. We think we have the right platform to do that to implement this Fabric. The second thing about identity technologies is it's a lot of effort and a lot of work traditionally to get them fully deployed... We're trying to make it easier to deploy these identity management tools and let people consolidate and let people simplify.

The underlying message is a simple one, he attests:

AI security is identity security. You can't be successful in one without the other.

An important reminder of one of the challenges of riding the agentic revolution - and one that enterprises need to take into account and plan for up front, not after there's been a security breach!
[5]
Identity security: The new boundary in the AI era - SiliconANGLE
Artificial intelligence isn't just reshaping applications: It's redrawing the boundaries of identity security. As conversational agents take center stage, identity emerges as the defining layer where innovation and security converge. Developers now face a landscape where application programming interfaces are the primary surface for business logic and uncontrolled agents can interact with critical systems. This architectural shift raises urgent questions about how to protect data, enforce permissions and maintain customer confidence, according to Shiv Ramji (pictured), president of Auth0 at Okta Inc.

"That very nature of how AI applications are built or what generative AI and agents can do, you have to rethink how you build your applications because you can no longer control the access point inside your application," he told theCUBE. "In fact, I bet you a year from now, when you use the American Airlines application and if it has a natural language interface to it, it's going to look very different. You'll be able to do a whole lot more things than what you can do today."

Ramji spoke with theCUBE's Rebecca Knight and Jackie McGuire at Okta's Oktane event, during an exclusive broadcast on theCUBE, SiliconANGLE Media's livestreaming studio. They discussed how AI is changing application development, why identity has become the new security boundary, and the fundamental principles developers need to build trusted and scalable apps.

The pressure to innovate fast with AI must be balanced against securing agents, data and transactions. That tension is driving renewed attention to identity security fundamentals - such as authentication, fine-grained authorization and standards that make cross-application access seamless - according to Ramji.

"There are four key requirements across all of these industries that kind of are common," he said. "First is very basic ... you have to authenticate and validate who you are; we have to do that for agents. But there is something unique here: We need to make sure that not only is the agent authenticated, but the agent is authenticated on behalf of you. That linkage is super important. The second thing is ... an agent will have to connect to 30, 40, 50 different applications securely."

Frameworks and standards reduce the burden on developers who want to move quickly without compromising identity security. By embedding identity security expertise into APIs and services, companies can accelerate innovation while shielding builders from common mistakes, according to Ramji.

"Builders and engineers and developers, they are not security experts, and I don't think it's fair for us to expect them to be," Ramji said. "This is where the capabilities that we're building [come in]: We're taking that burden away. We are identity experts; we know how to secure tokens, vault them and scope them. We're going to take that burden, and then we'll make it really easy for a developer to use our products and APIs."
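Ramji's "authenticated on behalf of you" linkage can be sketched as an authorization check that requires both the delegating user and the acting agent in a token's claims, following the RFC 8693 "act" (actor) claim convention. The agent name, scopes, and claim values below are hypothetical, and a real deployment would also verify the token signature rather than trusting a pre-decoded dict.

```python
# Hedged sketch: enforcing the user-to-agent delegation linkage plus a
# fine-grained scope check. Claim values are illustrative assumptions.
ALLOWED_AGENTS = {"travel-booking-agent"}

def authorize_delegated_call(claims: dict, required_scope: str) -> bool:
    user = claims.get("sub")                    # the human the call is for
    actor = claims.get("act", {}).get("sub")    # the agent doing the calling
    scopes = claims.get("scope", "").split()

    if not user or actor not in ALLOWED_AGENTS:
        return False                            # no valid user/agent linkage
    return required_scope in scopes             # per-action, least-privilege check

# Example: a travel agent may charge a card but not read account balances.
claims = {"sub": "alice", "act": {"sub": "travel-booking-agent"},
          "scope": "payments.charge calendar.read"}
assert authorize_delegated_call(claims, "payments.charge")
assert not authorize_delegated_call(claims, "bank.balance.read")
```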
[6]
Zero-trust security frontline defense in the AI era - SiliconANGLE
Zero-trust security meets agentic AI in the next identity battleground

Zero-trust security and artificial intelligence are colliding in ways that are fundamentally changing the nature of enterprise identity. The rise of agentic AI introduces a new class of identity that combines autonomy, broad access and scale, demanding a rethinking of traditional frameworks. Organizations are navigating this shift with urgency, as early missteps are already creating real-world security incidents. That urgency underscores why identity governance now sits at the heart of AI readiness, according to David Bradbury (pictured), chief security officer of Okta Inc.

"For years we've been securing human identities, and in the past few years we've started to refocus and start to look at application identities and non-human identities," Bradbury said. "This year is a pretty big year for welcoming a new entrant into the workforce, which is the autonomous agentic AI agent. When you think about the three different key features of zero trust - the fact that you need a secure identity, you want to be able to implement least privilege, you want to be able to continuously monitor what they're doing - all three of those elements apply equally to all of those identity types. But specifically when you think about agentic AI, it is absolutely critical to get those things right if you are ever going to manage and govern those things."

Bradbury spoke with theCUBE's Rebecca Knight and Jackie McGuire at Okta's Oktane event, during an exclusive broadcast on theCUBE, SiliconANGLE Media's livestreaming studio. They discussed how zero-trust security is becoming essential to manage the risks of AI agents and protect identities in a rapidly evolving threat landscape.

The pressure to adopt AI has created a climate where companies feel compelled to deploy quickly, often at the expense of safeguards. The boardroom mandate to "do more with AI" has translated into rushed deployments where common missteps surface, Bradbury noted. The most glaring involve reintroducing outdated practices that should have been left behind.

"Sadly, we've seen people cutting corners on security," he said. "We're using usernames and passwords hard-coded in agents to talk to other services. We're using static API keys - API keys that have gone the way of the dodo over the past decade. We've brought them back to life and we're putting them in agents."

Authentication is only one piece of the equation. The proliferation of tokens across dozens of agents and services creates new risks at scale. Without modern approaches to manage and safeguard these tokens, enterprises risk compounding their exposure with every integration. Identity platforms must now help developers secure authentication while containing token sprawl, Bradbury explained.

"Token proliferation is going to be a really big challenge because we're all going to have dozens of these agents and each agent wants to connect to everything," he added. "It's going to pop up and ask you to approve Google Drive access, Slack access, Zoom access - so many different applications it wants to be able to talk to."

Managing access contexts is another frontier. As AI agents move sensitive data between systems, it becomes critical to distinguish which information should remain isolated and which can be shared. Developers must embrace fine-grained controls to prevent overexposure while maintaining the fluidity that makes these systems valuable, according to Bradbury.

"It's all about context, and MCP the protocol, it's all about context and passing context around," he said. "MCP is a pipe; it's created to allow you to connect to things. It's not worrying about the content of that pipe and making sure that when I'm pulling from that sensitive system, passing that whole context to the calendar management app may not be the right thing to do."

The speed of change is unlike anything security teams have faced before, with adversaries innovating in parallel. Phishing attacks, malware leveraging local large language models and pixel-perfect spoofed websites all underscore how quickly tactics evolve. Bradbury's call to action is clear: The industry must share discoveries openly and collaborate to keep pace.

"Because this is moving so dynamically, it's moving so quickly, it's incumbent upon all of us to share this stuff when you see it," Bradbury explained. "If you are seeing a new novel attack, a new tactic, you need to make sure you get that out in blogs, in Slack groups, in trust groups. Get it out there so that we can consume it and react to it, because everything is moving so fast."
[7]
Okta expands identity fabric with AI agent lifecycle security, Cross App Access and verifiable credentials - SiliconANGLE
Identity access management company Okta Inc. today announced new capabilities across the Okta Platform and Auth0 Platform designed to help enterprises securely adopt artificial intelligence agents. The new capabilities allow organizations to build secure, standards-first AI agents that can be woven into an identity security fabric for end-to-end lifecycle management. The fabric also enforces verifiable trust through tamper-proof credentials, helping organizations prevent AI-powered fraud and streamline onboarding processes.

Up first is the introduction of Okta for AI Agents, a new platform that integrates AI agents into an identity security fabric, enabling organizations to discover, provision, authorize and govern non-human identities at large scale. Okta for AI Agents includes Identity Security Posture Management for identifying risky agents and exposed credentials, Universal Directory for agent registration and ownership attribution, and Okta Privileged Access for enforcing least-privilege access. The platform is complemented by Okta Identity Governance and the company's AI-driven Identity Threat Protection to provide continuous monitoring, audit trails and automated remediation.

The second announcement sees the introduction of Cross App Access, or XAA, an open standard that extends OAuth to secure agent-driven and app-to-app interactions. Launching with early support from Automation Anywhere Inc., Boomi Inc., Box Inc. and Glean Technologies Inc., XAA shifts access control from individual apps to the identity layer, giving enterprises real-time visibility and policy-based enforcement. Okta says the protocol also reduces user friction by pre-approving integrations, minimizing repeated consent prompts. It will be embedded in Auth0 for business-to-business developers to simplify secure integration of AI agents into applications.

The final announcement, planned for the company's 2027 fiscal year, is Verifiable Digital Credentials, an open standard offering that will allow organizations to issue and verify tamper-proof, reusable identity data such as government IDs or certifications. VDCs, when launched, will allow individuals to digitally prove their identity or eligibility across applications while limiting exposure to AI-driven fraud and deepfake threats. An initial digital ID verification capability supporting mobile driver's licenses is slated for early availability in fiscal 2026.

"AI is changing the workplace faster than organizations can adapt. We're starting to see poorly built, deployed, or managed agents expose the risks of using a traditional patchwork of identity solutions," said Kristen Swanson, senior vice president of design and research at Okta. "The modern enterprise requires an identity security fabric that can unify silos and reduce the attack surface. Our latest innovations weave agents into that fabric to manage their entire identity lifecycle, leveraging open standards like Cross App Access that help elevate the entire industry and create a more secure AI-powered ecosystem."
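The "issue and verify tamper-proof, reusable identity data" idea behind Verifiable Digital Credentials rests on digital signatures: an issuer signs a claim set once, and any verifier holding the issuer's public key can check it offline, with any tampering breaking verification. The sketch below shows that mechanism with a bare Ed25519 signature via the widely used `cryptography` library; it is a conceptual illustration, not the actual VDC format Okta plans to ship.

```python
# Conceptual sketch of tamper-proof credential verification (not Okta's VDC spec).
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Issuer side: sign a claim set (e.g. a hypothetical employment record).
issuer_key = Ed25519PrivateKey.generate()
credential = json.dumps({"subject": "alice", "employer": "Example Corp"},
                        sort_keys=True).encode()
signature = issuer_key.sign(credential)

# Verifier side: needs only the issuer's public key - no callback to the
# issuer - and any edit to the credential bytes makes verification fail.
public_key = issuer_key.public_key()
try:
    public_key.verify(signature, credential)
    print("credential intact and issued by the expected party")
except InvalidSignature:
    print("credential tampered with or forged")
```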
As AI agents become increasingly prevalent in businesses, they pose new challenges for cybersecurity and identity management. Companies must adapt their security strategies to protect against potential threats while harnessing the benefits of AI innovation.
The modern workforce is undergoing a dramatic transformation as organizations deploy artificial intelligence (AI) agents across various business functions. These autonomous AI systems can make decisions and complete complex tasks with minimal human supervision [1]. The adoption of AI agents is accelerating at an unprecedented rate, with the volume of non-human and agentic identities expected to exceed 45 billion by the end of this year – more than 12 times the number of humans in the global workforce [1].
Despite the widespread adoption of AI agents, only 10% of executives report having a well-developed strategy for managing non-human and agentic identities [1]. This lack of preparedness poses significant security concerns, especially considering that 80% of breaches involve compromised or stolen identities [1].
The advent of AI agents introduces new complexities to enterprise security. Trained on valuable and potentially sensitive company data, these agents can become new attack vectors if not properly secured [1]. Threat actors could manipulate AI agent behavior through prompt injection attacks or exploit their expanded access to infiltrate company systems [1].

To address these challenges, businesses need to adopt a new approach: an identity security fabric. This concept aims to secure every identity – human, non-human, and agentic – across all identity use cases, applications, and resources [1]. Companies like Okta are introducing new security principles to integrate AI agents into the identity security fabric seamlessly [2].
Organizations deploying AI agents must prioritize several key considerations:

* Governance, with clear policies and oversight [3]
* Discoverable, machine-readable APIs that agents can consume [3]
* Event-driven architecture, so agents can react in real time [3]
* Proactive controls such as rate limits, analytics, and monitoring from day one [3]
As AI agents become more prevalent, the industry is working towards establishing open standards and protocols to enhance security. Initiatives like Cross App Access (XAA) aim to extend OAuth to secure app-to-app interactions across organizations [2]. Additionally, Verifiable Digital Credentials (VDC) are being developed to establish trust in AI agents and combat AI fraud [2].

As Todd McKinnon, CEO of Okta, emphasizes, "AI security is ID security" [4]. Companies must strike a balance between innovation and security to harness the full potential of AI agents while protecting their systems and data. By implementing robust identity security measures and adhering to emerging standards, organizations can navigate the new frontier of AI-driven business processes with confidence.
Summarized by Navi