[1]
Unsecured AI agents expose businesses to new cyberthreats
Whether starting from scratch or working with pre-built tools, organizations must build security, interoperability and visibility into their AI agents.

The modern workforce is undergoing a rapid transformation. Organizations are deploying artificial intelligence (AI) agents across an increasing number of business functions - from development to sales, customer service, research, content creation and finance. These autonomous AI systems can make decisions and create plans to achieve complex tasks with minimal human supervision, and companies are quickly moving them from prototype into production.

As a result of this accelerated deployment, the volume of non-human and agentic identities is now expected to exceed 45 billion by the end of this year - more than 12 times the approximate number of humans in the global workforce today. Despite this explosive growth, only 10% of respondents to an Okta survey of 260 executives report having a well-developed strategy for managing their non-human and agentic identities. That is a significant security concern, considering 80% of breaches involve some form of compromised or stolen identity. And generative AI escalates the threat by enabling threat actors to conduct even more sophisticated phishing and social engineering attacks.

As businesses race to deploy agents, it's critical that they establish identity controls and prioritize security from the start. Doing so helps organizations avoid the significant risks of over-permissioned and potentially unsecured AI agents. To protect against the speed and complexity of AI, businesses need a new approach: an identity security fabric. This new category secures every identity - human, non-human and agentic - across every identity use case, application and resource, and it is key to protecting businesses in a future driven by AI.

Threat actors have been quick to leverage AI for malicious activity, using it to make existing threats more dangerous and to manufacture new, more personalized ones. Generative AI is already powering malware, deepfakes, voice cloning and phishing attacks. The advent of AI agents introduces a new layer of complexity to the enterprise security landscape. Trained on valuable and potentially sensitive company data, these agents can become new attack vectors if they're not built, deployed, managed and secured properly.

Organizations are incentivized to grant agents access to more data and resources to make them more effective, but with expanded access comes increased business risk. Threat actors could manipulate AI agent behaviour through a prompt injection attack, for example, using probing questions to trick the agent into sharing privileged information. The more access an AI agent has, the easier it is for threat actors to infiltrate a company - potentially leading to data leaks, unauthorized actions or a full system compromise.

Because AI agents need to access user-specific data and workflows, each one requires a unique identity. Without sufficient controls, these identities stand to have too much access and autonomy. As "human" as these agents may sometimes seem, managing their identity is fundamentally different from managing that of a human user, and non-human and agentic identities differ in several ways. Today, when a new employee onboards at a company, there's a clear starting point for when that user needs access to company applications and data. They can use passwords, biometrics or multi-factor authentication (MFA) to log in to an account and validate who they are. But AI agents can't be authenticated like human employees. Instead, they rely on mechanisms such as application programming interface (API) tokens or cryptographic certificates to validate themselves.
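As a hedged illustration of what that looks like in practice, the sketch below requests a short-lived, narrowly scoped credential for an agent using the OAuth 2.0 client-credentials flow; the identity-provider endpoint, client ID and scope name are hypothetical placeholders rather than any specific vendor's API.

```python
# Requesting a short-lived, narrowly scoped credential for an agent via the
# OAuth 2.0 client-credentials flow. The identity-provider URL, client ID,
# and scope name are hypothetical placeholders.
import requests

TOKEN_URL = "https://idp.example.com/oauth2/token"  # hypothetical IdP endpoint

def issue_agent_token(client_id: str, client_secret: str) -> str:
    """Fetch a token restricted to one read-only scope and a short lifetime."""
    resp = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
            "scope": "crm.read",  # least privilege: one scope, no blanket access
        },
        timeout=10,
    )
    resp.raise_for_status()
    token = resp.json()
    if token.get("expires_in", 3600) > 900:
        raise ValueError("expected a short-lived token")  # treat it as disposable
    return token["access_token"]
```

Keeping the scope narrow and the lifetime short means a leaked credential expires quickly and grants little.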
The lifecycle of an AI agent is also uniquely non-human. Agents have dynamic lifespans, requiring extremely specific permissions for limited periods of time and often needing access to sensitive information, so organizations must be prepared to rapidly provision and de-provision access. Agents can also be more difficult to trace and log than their human counterparts, which complicates post-breach audits and remediation efforts. Together, these factors make it critical for security teams to govern AI agents and their permissions carefully.

Most organizations are still early in their agentic AI journeys, which presents an opportunity to establish proper identity and security protocols from the outset. For organizations deploying third-party agents, there's no better time than during adoption to lay the groundwork for secure identity. When building agents from the ground up, identity should be prioritized during development. Whether an organization is starting from scratch or working with pre-built tools, there are several key identity considerations for autonomous AI agents:

* Granular access policies: The autonomous nature of AI agents means they can chain together permissions to access resources they shouldn't. Security teams need granular access policies to ensure agents aren't sharing any sensitive information.
* Time-bound authorization: AI agents should only have access and authorization to resources for certain periods of time.
* Interoperability: Organizations must ensure AI agents align to standards for interoperability. Agents are more powerful when they can connect with other agents and AI systems, but teams can't sacrifice security along the way. Standards like Model Context Protocol (MCP) provide a framework for agents to securely connect to external tools and data sources (a minimal sketch appears at the end of this article).
* Visibility: Without clear insight into the actions and access patterns of these agents, anomalous behaviours can go unnoticed, potentially leading to security vulnerabilities. To mitigate these risks, organizations need comprehensive monitoring and auditing capabilities to track agent activity and maintain control.

Organizations are still only scratching the surface of the agentic AI future, and building and deploying an AI agent is only the first step in the security journey. As the number of use cases continues to increase, so will the responsibilities of organizations' security teams. It takes an ongoing commitment to visibility, governance and control to ensure AI agents are working securely and as intended. With a strong foundation of secure identity, organizations can begin safely scaling their agentic deployments and empower more users to reap the benefits and unlock the business potential of AI tools.
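To ground the MCP consideration above, here is a minimal sketch of a server exposing one read-only tool over the Model Context Protocol, assuming the official MCP Python SDK (the `mcp` package); the server and tool names are illustrative.

```python
# Exposing a single read-only tool over the Model Context Protocol, assuming
# the official MCP Python SDK (`pip install mcp`). The server and tool names
# are illustrative.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("crm-readonly")

@mcp.tool()
def get_account_status(account_id: str) -> str:
    """Read-only CRM lookup; the agent is never handed write access."""
    # A real implementation would call the CRM here with a scoped credential.
    return f"status for {account_id}: active"

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```

Scoping the server to a single read-only tool keeps the interoperability benefit without widening the agent's reach.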
[2]
Companies are sleepwalking into agentic AI sprawl
Agentic AI is multiplying inside enterprises faster than most leaders realize. These intelligent agents can automate processes, make decisions, and act on behalf of employees. They're showing up in customer support, IT operations, HR, and finance. The problem? One rogue agent with access to your ERP, CRM, or databases could wreak more havoc than a malicious insider. And unlike a human threat, an agent can replicate, escalate, and spread vulnerabilities in seconds. The business benefits are real, but many organizations are rushing ahead without the foundations to contain risk. In chasing speed, they may be trading innovation for unprecedented security threats, runaway costs, and enterprise-wide crises.

The illusion of AI readiness

Leaders often believe they're ready for AI adoption because they've chosen the "right" model or vendor. But readiness isn't about software; it's about infrastructure. While many organizations are still stuck in "experimentation mode," the most advanced players are moving aggressively. They are building agent-first systems, enabling machine-to-machine communication, and restructuring their APIs and internal tooling to serve intelligent, autonomous agents -- not humans. There are four phases to our AI Maturity and Readiness model: Exploration & Ideation, Efficiency & Optimization, Governance & Control, and finally Innovation & Transformation. To support agents responsibly, and reach the final phase of maturity, organizations need:

* Governance: clear policies and oversight
* Discoverable APIs: machine-readable blueprints, not PDFs
* Event-driven architecture: so agents react in real time
* Proactive controls: rate limits, analytics, and monitoring from day one

Without these, AI can't deliver value -- only vulnerability. And one rogue agent can quickly put a company out of control unless the right set-up is in place.

The rogue agent problem

It's not the number of agents that matters. It's their scope. Imagine a developer creating an agent with broad access across CRM, ERP, and databases. That single agent could be repurposed for multiple use cases -- like a Slack bot -- turning convenience into a critical vulnerability. This is the new insider threat: faster proliferation, more connections, and less visibility.

An identity crisis at machine speed

Another overlooked challenge is identity. Human and application identities are well understood, but agent identities are new and unsettled. Today, enterprises simply can't securely manage millions of agent identities in real time. Standards are still catching up, leaving organizations exposed. And when credentials leak at machine speed, the damage can be immediate and catastrophic. Best practices are emerging: avoid hardcoded credentials, scope access tightly, and ensure revocations cascade across systems. But most companies aren't there yet.

Agent sprawl and exploding bills

Even without breaches, costs can spiral. Agents are easy to create but hard to track. Teams spin them up independently, leading to overlaps, redundancies, and runaway API calls. In some cases, agents loop endlessly, overloading systems and sending cloud bills skyrocketing. This isn't a minor side effect; it's a governance failure. Guardrails like quota enforcement, usage analytics, and rate limiting aren't optional extras. They're the only way to keep systems and budgets intact.
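As a sketch of what such a guardrail can look like, the per-agent token bucket below rejects calls once an agent exhausts its budget, so a looping agent fails fast instead of quietly inflating the bill; the rates and identifiers are illustrative rather than any product's API.

```python
# A per-agent token bucket: each agent refills at RATE tokens/second up to
# BURST, and a call is rejected once the bucket is empty, so a looping agent
# fails fast instead of silently running up the cloud bill.
import time
from collections import defaultdict

RATE = 5    # sustained calls per second allowed per agent (illustrative)
BURST = 20  # maximum burst size (illustrative)

# agent_id -> [tokens_remaining, last_refill_timestamp]
_buckets = defaultdict(lambda: [float(BURST), time.monotonic()])

def allow_call(agent_id: str) -> bool:
    """Refill this agent's bucket for elapsed time, then spend one token."""
    tokens, last = _buckets[agent_id]
    now = time.monotonic()
    tokens = min(BURST, tokens + (now - last) * RATE)
    if tokens < 1.0:
        _buckets[agent_id] = [tokens, now]
        return False  # over quota: reject instead of forwarding the API call
    _buckets[agent_id] = [tokens - 1.0, now]
    return True
```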
APIs: A weak link in the agentic AI chain

Every AI agent depends on APIs. Yet most APIs weren't built for autonomous machines; they were built for developers. Without governance, authentication breaks down, rate limits vanish, and failures multiply. The solution is centralized API management. Gateways that enforce consistent authentication, authorization, and logging provide the predictability both humans and agents require. Without this, agents are flying blind.

Autonomy vs. control

Agentic AI's promise is autonomy: self-directed systems that can take action without human oversight. The model that works is borrowed from platform engineering. Over the last decade, many companies have adopted platform teams to provide standardized, compliant tools that empower developers without sacrificing control. Agentic AI requires the same approach: centralized, compliant platforms that provide visibility and security while allowing teams to innovate.

Building the guardrails: Agent management and protocols

The path to a secure and effective agentic future requires dedicated solutions. Centralized AI Agent Management is paramount. This includes AI Gateways, which control agent API calls, enforce security rules, and manage rate limiting to prevent system overload. It also involves Agent Catalogs: searchable directories that list every agent, its function, owner, and permissions, preventing redundant development and providing a clear map for security and compliance teams. Monitoring and observability dashboards are crucial for tracking agent activity and flagging unusual behavior. To address the inherent chaos of unstructured inter-agent communication, the Agent-to-Agent (A2A) protocol, an open standard introduced by Google, is vital. A2A brings structure, trust, and interoperability by defining how agents discover each other, securely exchange information, and adhere to policy rules across diverse environments. Platforms like Gravitee's Agent Mesh natively support A2A, offering centralized registries, traffic shaping, and out-of-the-box security for agent fleets.

The human dimension

Technology isn't the only barrier. There's a cultural one, too. Many employees are already experiencing "transformation fatigue" from years of digital change initiatives. If agentic AI is rolled out without trust, transparency, and training, adoption will falter and resistance will grow. Leaders must strike a balance: make AI useful at the frontline while ensuring compliance at the center. That alignment between executive mandate and employee ownership will determine whether deployments succeed or collapse.

Wake up before the breach

Agentic AI isn't on the horizon -- it's already multiplying inside your company. Without governance, observability, and identity controls, organizations risk trading short-term productivity for long-term crises. The companies that succeed won't be the fastest to deploy agents. They'll be the ones that deploy them responsibly, with architectures built for scale, safety, and trust. The choice is clear: wake up now, or keep sleepwalking until the wake-up call comes in the form of a breach, a blown budget, or a board-level crisis.

Gravitee is hosting an A2A Summit for leaders navigating agentic AI on November 6, 2025, in NYC, in partnership with The Linux Foundation. The event will explore the future of agent-to-agent (A2A) orchestration and autonomous enterprise systems, bringing together technology leaders from Gartner, Google, McDonald's, Microsoft and others to provide actionable insights to help organizations tackle agent sprawl and unlock the full potential of AI-driven decision-making. Learn more here.

Rory Blundell is CEO at Gravitee.
Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they're always clearly marked. For more information, contact [email protected].
As businesses rapidly deploy AI agents across various functions, they face new cybersecurity challenges. The explosive growth of non-human identities outpaces security measures, exposing organizations to potential threats and vulnerabilities.
The modern workforce is undergoing a significant transformation as organizations increasingly deploy artificial intelligence (AI) agents across various business functions. These autonomous AI systems can make decisions and complete complex tasks with minimal human supervision, revolutionizing areas such as development, sales, customer service, research, content creation, and finance [1]. The adoption of AI agents is accelerating at an unprecedented rate. By the end of this year, the volume of non-human and agentic identities is expected to exceed 45 billion – more than 12 times the approximate number of humans in the global workforce [1].
This explosive growth is reshaping the enterprise landscape, with AI agents showing up in various departments, including customer support, IT operations, HR, and finance [2]. Despite the rapid adoption of AI agents, organizations are ill-prepared to manage the associated security risks. According to an Okta survey of 260 executives, only 10% report having a well-developed strategy for managing their non-human and agentic identities [1].
This lack of preparedness is concerning, given that 80% of breaches involve some form of compromised or stolen identity. The advent of AI agents introduces a new layer of complexity to the enterprise security landscape. Trained on valuable and potentially sensitive company data, these agents can become new attack vectors if not properly secured and managed [1].
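One way to contain that exposure, sketched below under illustrative names, is deny-by-default tool authorization: an agent can only invoke what its allow-list explicitly grants, which caps the blast radius if the agent is manipulated.

```python
# Deny-by-default tool authorization: an agent can only invoke tools on its
# explicit allow-list, which caps the blast radius if the agent is tricked
# into attempting something it shouldn't. Agent and tool names are hypothetical.
ALLOWED_TOOLS = {
    "support-agent": {"kb.search", "ticket.read"},
    "billing-agent": {"invoice.read"},
}

def authorize(agent_id: str, tool: str) -> None:
    """Raise unless this agent was explicitly granted this tool."""
    if tool not in ALLOWED_TOOLS.get(agent_id, set()):
        raise PermissionError(f"{agent_id} may not call {tool}")

authorize("support-agent", "kb.search")       # permitted
# authorize("support-agent", "invoice.read")  # would raise PermissionError
```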
The more access an AI agent has, the easier it becomes for threat actors to infiltrate a company, potentially leading to data leaks, unauthorized actions, or full system compromises. Managing the identity of an AI agent differs fundamentally from managing that of a human user. Unlike human employees, who can use passwords, biometrics, or multi-factor authentication (MFA) to validate their identity, AI agents rely on application programming interface (API) tokens or cryptographic certificates for authentication [1].
AI agents also have dynamic lifespans, requiring extremely specific permissions for limited periods and often needing access to sensitive information. This necessitates rapid provisioning and de-provisioning of access, making it crucial for security teams to govern AI agents and their permissions carefully [1].
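A minimal sketch of that lifecycle follows, with an in-memory dictionary standing in for a real identity provider: every grant carries an expiry, and de-provisioning revokes all of an agent's grants in one step.

```python
# Time-boxed grants: provisioning always attaches an expiry, access checks
# honor it, and de-provisioning revokes everything for an agent in one step.
# The in-memory dict stands in for a real identity provider.
from datetime import datetime, timedelta, timezone

_grants: dict = {}  # agent_id -> {scope: expires_at}

def provision(agent_id: str, scope: str, ttl_minutes: int = 15) -> None:
    """Grant a scope that silently lapses after ttl_minutes."""
    expires = datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes)
    _grants.setdefault(agent_id, {})[scope] = expires

def has_access(agent_id: str, scope: str) -> bool:
    expires = _grants.get(agent_id, {}).get(scope)
    return expires is not None and datetime.now(timezone.utc) < expires

def deprovision(agent_id: str) -> None:
    """Revoke every grant this agent holds, in one cascading step."""
    _grants.pop(agent_id, None)
```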
To address these challenges, businesses need to adopt new security approaches. An identity security fabric is proposed as a solution to secure every identity – human, non-human, and agentic – across all identity use cases, applications, and resources [1].
Organizations should also focus on establishing proper identity and security protocols from the outset of their AI agent deployment. This includes implementing governance measures, creating discoverable APIs, developing event-driven architecture, and setting up proactive controls such as rate limits, analytics, and monitoring [2].
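To illustrate the "discoverable APIs" item, the sketch below shows a pared-down, OpenAPI-style description that an agent could fetch and parse to learn an API's operations; the API and its fields are invented for the example.

```python
# A pared-down, OpenAPI-style description served from a discovery endpoint.
# An agent can parse this JSON to learn what operations exist and what
# parameters they take; the "Orders API" and its path are invented examples.
import json

API_DESCRIPTION = {
    "openapi": "3.0.3",
    "info": {"title": "Orders API", "version": "1.0.0"},
    "paths": {
        "/orders/{order_id}": {
            "get": {
                "summary": "Fetch one order",
                "parameters": [{
                    "name": "order_id", "in": "path",
                    "required": True, "schema": {"type": "string"},
                }],
            },
        },
    },
}

print(json.dumps(API_DESCRIPTION, indent=2))  # what a discovery endpoint would serve
```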
As AI agents proliferate within organizations, there's a risk of "agent sprawl" – the uncontrolled multiplication of AI agents across various systems. This can lead to overlaps, redundancies, and runaway API calls, potentially overloading systems and causing cloud bills to skyrocket [2].
To mitigate these risks, companies need to implement centralized AI Agent Management systems. This includes AI Gateways to control agent API calls and enforce security rules, Agent Catalogs to maintain a searchable directory of all agents and their permissions, and robust monitoring and observability dashboards to track agent activity and flag unusual behavior [2].
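As a rough sketch of what an Agent Catalog records, the registry below keeps one entry per agent with its function, owner, and permissions, and refuses duplicate registrations; field and agent names are illustrative.

```python
# An agent catalog: one record per agent with function, owner, and
# permissions, and registration refuses duplicate IDs to curb redundant
# agents. Field and agent names are illustrative.
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    agent_id: str
    function: str
    owner: str
    permissions: set = field(default_factory=set)

catalog: dict = {}

def register(record: AgentRecord) -> None:
    """Add an agent to the catalog, rejecting duplicate identifiers."""
    if record.agent_id in catalog:
        raise ValueError(f"{record.agent_id} is already registered")
    catalog[record.agent_id] = record

register(AgentRecord("support-bot-01", "ticket triage", "it-ops", {"ticket.read"}))
```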
Summarized by Navi