4 Sources
[1]
Identity Security: Your First and Last Line of Defense
The danger isn't that AI agents have bad days -- it's that they never do. They execute faithfully, even when what they're executing is a mistake. A single misstep in logic or access can turn flawless automation into a flawless catastrophe. This isn't some dystopian fantasy -- it's Tuesday at the office now.

We've entered a new phase where autonomous AI agents act with serious system privileges. They execute code, handle complex tasks, and access sensitive data with unprecedented autonomy. They don't sleep, don't ask questions, and don't always wait for permission. That's powerful. That's also risky. Because today's enterprise threats go way beyond your garden-variety phishing scams and malware. The modern security perimeter? It's all about identity management. Here's the million-dollar question every CISO should be asking: Who or what has access to your critical systems, can you secure and govern that access, and can you actually prove it?

Remember those old-school security models built around firewalls and endpoint protection? They served their purpose once -- but they weren't designed for the distributed, identity-driven threats we face today. Identity has become the central control point, weaving complex connections between users, systems, and data repositories. The 2025-2026 SailPoint Horizons of Identity Security report shows that identity management has evolved from a back-office control to mission-critical for the modern enterprise. The explosion of AI agents, automated systems, and non-human identities has dramatically expanded our attack surfaces. These entities are now prime attack vectors. Here's a sobering reality check: Fewer than 4 in 10 AI agents are governed by identity security policies, leaving a significant gap in enterprise security frameworks. Organizations without comprehensive identity visibility? They're not just vulnerable -- they're sitting ducks.

But here's where it gets interesting. Despite these mounting challenges, there's a massive opportunity for organizations that get identity security right. The Horizons of Identity Security report reveals something fascinating: Organizations consistently achieve their highest ROI from identity security programs compared to every other security domain. They rank identity and access management as their top-ROI security investment at twice the rate of other security categories. Why? Because mature identity security pulls double duty -- it prevents breaches while driving operational efficiency and enabling new business capabilities. Organizations with mature identity programs, especially those using AI-driven capabilities and real-time identity data sync, show dramatically better cost savings and risk reduction. Mature organizations are four times more likely to have AI-enabled capabilities like Identity Threat Detection and Response.

Here's where things get concerning: There's a growing chasm between organizations with mature identity programs and those still playing catch-up. The Horizons of Identity Security report shows that 63% of organizations are stuck in early-stage identity security maturity (Horizons 1 or 2). These organizations aren't just missing out -- they're increasingly exposed to modern threats. And the gap keeps widening because the bar keeps rising: the 2025 framework added seven new capability requirements to address emerging threat vectors. Organizations that aren't advancing their identity capabilities aren't just standing still -- they're effectively moving backward.
Organizations experiencing capability regression show significantly lower adoption rates for AI agent identity management. This challenge goes beyond just technology. Only 25% of organizations position IAM as a strategic business enabler -- the rest see it as just another security checkbox or compliance requirement. This narrow view severely limits transformative potential and keeps organizations vulnerable to sophisticated attacks. The threat landscape is evolving at breakneck speed, with unprecedented risk levels across all sectors. Identity security has evolved from just another security component into the core of enterprise security. Organizations need to honestly assess their readiness for managing extensive AI agent deployments and automated system access. A proactive assessment of your current identity security posture provides critical insight into organizational readiness and competitive positioning.
[2]
When AI Agents Join the Teams: The Hidden Security Shifts No One Expects
Written by Ido Shlomo, Co-Founder and CTO, Token Security

AI assistants are no longer just summarizing meeting notes, writing emails, and answering questions. They're taking action, such as opening tickets, analyzing logs, managing accounts, and even automatically fixing incidents. Welcome to the age of agentic AI, which doesn't just tell you what to do next - it does it for you. These agents are incredibly powerful, but they're also introducing an entirely new kind of security risk.

Initially, AI adoption within companies seemed benign. Tools like ChatGPT and Copilot assisted people with basic writing and coding, but didn't act independently. That's changing quickly. Without security reviews or approval, teams are deploying autonomous AI systems that can interpret goals, plan steps, call APIs, and invoke other agents. An AI marketing assistant can now analyze campaign performance data and actively optimize targeting and budget. A DevOps agent can scan for incidents and start remediation without waiting for a human. The result? A growing class of agents that make decisions and take actions faster than people can monitor them.

While organizations have started managing Non-Human Identities (NHIs), such as service accounts and API keys, agentic AI doesn't fit the same mold. Unlike a workflow, which follows a predictable series of actions, an AI agent reasons about what to do next. It's capable of chaining multiple steps together, accessing different systems, and adjusting its plan along the way. That flexibility is what makes agents both powerful and dangerous. Because agents can act across boundaries, the simple act of giving them access to a database, a CRM, and Slack could make them among the most powerful users in the company. Multi-agent ecosystems are introducing new levels of complexity. Once an agent starts calling or even creating other agents, the ability to trace an action back to the human who initiated it starts to blur.

Even cautious companies are discovering shadow AI creeping into their environments. A product manager signs up for a new AI research tool. A team connects a meeting bot to internal drives. An engineer spins up a local AI assistant that can query customer logs. Each one is technically a service, and therefore each one needs governance. But most of these tools enter the enterprise without a formal review, security scan, or identity record. Traditional visibility tools don't see them clearly. CASB tools might flag a new SaaS domain, but they won't catch a few hundred AI agents quietly running on cloud functions or VMs. It's not malicious; it's just fast. And speed has always been the enemy of oversight.

So how do you secure something you may not have visibility into, and that operates at machine speed? Security teams need to adapt their identity strategies in new ways. Most companies don't have a clean process to retire AI agents when they're no longer needed. A developer prototype that started as an experiment in March is still running in October, using credentials created by someone who is no longer with the company. Another agent quietly evolved through prompt and tool changes until it now has access to customer data. While these agents aren't malicious, they're invisible, persistent, and powerful. That's why more enterprises are creating AI agent inventories that list every active agent, its purpose, owner, permissions, and lifespan. It's the groundwork needed to make AI agents and their identities manageable.
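To make the idea of an AI agent inventory concrete, here is a minimal sketch of a registry that records each agent's purpose, owner, permissions, and lifespan, and flags the two failure modes described above: prototypes that outlive their intended lifespan and agents whose owners have left. The field names, the 90-day lifespan, and the review logic are illustrative assumptions, not a description of any particular product.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AgentRecord:
    """One entry in an AI agent inventory (illustrative fields)."""
    name: str
    purpose: str
    owner: str                 # human accountable for the agent
    permissions: list[str]     # systems/scopes the agent may touch
    created_at: datetime
    expires_at: datetime       # forces periodic review instead of living forever

class AgentInventory:
    def __init__(self) -> None:
        self._agents: dict[str, AgentRecord] = {}

    def register(self, record: AgentRecord) -> None:
        self._agents[record.name] = record

    def expired(self, now: datetime | None = None) -> list[AgentRecord]:
        """Agents past their lifespan -- candidates for retirement or re-approval."""
        now = now or datetime.now(timezone.utc)
        return [a for a in self._agents.values() if a.expires_at <= now]

    def orphaned(self, active_employees: set[str]) -> list[AgentRecord]:
        """Agents whose owner has left the company -- the 'March prototype' problem."""
        return [a for a in self._agents.values() if a.owner not in active_employees]

# Example: a prototype registered in March is flagged once its lifespan lapses
# and its owner is no longer on the active employee list.
inv = AgentInventory()
inv.register(AgentRecord(
    name="log-triage-prototype",
    purpose="Summarize and route production incidents",
    owner="departed.engineer@example.com",
    permissions=["logs:read", "tickets:write"],
    created_at=datetime(2025, 3, 1, tzinfo=timezone.utc),
    expires_at=datetime(2025, 3, 1, tzinfo=timezone.utc) + timedelta(days=90),
))
print(inv.expired())                        # past its 90-day lifespan
print(inv.orphaned({"alice@example.com"}))  # owner no longer active
```

In practice, records like these would be populated by discovery tooling and tied into offboarding workflows rather than maintained by hand; the point is only that each agent gets an owner, a scope, and an expiry.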
The goal isn't to stop agents from working as your organization looks to AI for efficiencies and competitive advantages. It's to make sure they have effective oversight and governance. Just as organizations don't grant a new hire admin access to everything, they need to give AI agents specific responsibilities, review their work, and check their decisions. The key is governance that enables teams to build systems that automatically limit scope, log behavior, and shut down rogue processes before they cause harm. Because these agents aren't just summarizing reports or triaging tickets. They are closing incidents, approving transactions, and interacting directly with customers. When that happens, "shadow AI" won't be a curiosity; it will be a crisis.

Agentic AI isn't a future problem. It's already in your stack. If you're still managing identities as either human or non-human, it's time to make room for a third category: autonomous actors. They need identity, permissions, and accountability. They also need control and governance, and the sooner we treat agents like coworkers with superpowers, not scripts with credentials, the safer the enterprise will be.
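As a rough sketch of what "limit scope, log behavior, and shut down rogue processes" might look like in code, the wrapper below gates every tool call an agent makes against an allow-list of scopes, logs each attempt, and disables the agent after repeated out-of-scope requests. The scope names, the denial threshold, and the runtime itself are illustrative assumptions rather than a reference to any specific framework.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-governance")

class AgentGovernor:
    """Wraps an agent's tool calls with scope checks, logging, and a kill switch."""

    def __init__(self, agent_id: str, allowed_scopes: set[str], max_denials: int = 3):
        self.agent_id = agent_id
        self.allowed_scopes = allowed_scopes   # least privilege: only what this agent needs
        self.max_denials = max_denials
        self.denials = 0
        self.disabled = False

    def invoke(self, scope: str, action, *args, **kwargs):
        if self.disabled:
            raise PermissionError(f"{self.agent_id} has been shut down")
        if scope not in self.allowed_scopes:
            self.denials += 1
            log.warning("%s denied scope %s at %s", self.agent_id, scope,
                        datetime.now(timezone.utc).isoformat())
            if self.denials >= self.max_denials:
                self.disabled = True   # repeated out-of-scope attempts: shut the agent down
                log.error("%s disabled after %d denials", self.agent_id, self.denials)
            raise PermissionError(f"scope {scope!r} not granted to {self.agent_id}")
        log.info("%s invoked %s", self.agent_id, scope)
        return action(*args, **kwargs)

# Usage: the agent can read tickets but is cut off after repeatedly trying to touch billing.
governor = AgentGovernor("support-triage-agent", allowed_scopes={"tickets:read"})
governor.invoke("tickets:read", lambda: "ok")
for _ in range(3):
    try:
        governor.invoke("billing:write", lambda: "should not happen")
    except PermissionError as exc:
        print(exc)
```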
[3]
Non-human identities: Agentic AI's new frontier of cybersecurity risk
Addressing the NHI challenge begins with full visibility over such agentic tools and broader cryptographic assets.

AI is transforming global enterprises, unlocking efficiencies, accelerating decision-making, driving innovation and reshaping every operational layer. One of the most significant trends is agentic AI: autonomous "agents" that interact with enterprise data and systems, tailored towards specific goals. Adept at grasping context, planning and adaptive problem-solving, these agents execute complex, multi-step processes with minimal human intervention or oversight. In October 2024, Gartner named agentic AI the top technology trend of 2025 and predicted 33% of enterprise apps will include agentic AI by 2028, up from less than 1% in 2024.

In the urgency to adopt agentic AI, many organizations risk overlooking a critical cybersecurity challenge: the rise of non-human identities (NHIs), which include API keys, service accounts and authentication tokens. These AI agents interact with tools, APIs, web pages and systems to execute actions on your behalf (not just provide advice). Agentic AI can spawn NHIs in security blind spots, and those NHIs often receive broad, persistent access to sensitive data and systems without the safeguards typically applied to humans. Further, agentic AI is not merely passive intelligence that reasons; it can take action, with the potential for profound impact in the digital and physical worlds.

In fast-scaling environments, NHIs are proliferating faster than security teams can monitor them. The use of NHIs significantly multiplies an enterprise's potential attack surface and creates new risks in places that were previously considered secure. Whereas before CISOs only had to worry about credentialing employees and select third parties, now they have to do the same for a multitude of NHIs.

Should we have seen this coming? Perhaps. History shows how autonomous systems can exceed their intended boundaries. The 1988 Morris Worm, designed to map the internet, infected 6,000 machines instead. Stuxnet, built to target Iranian nuclear centrifuges in 2010, spread to global industrial systems. In 2024, during a security exercise, a ChatGPT model escaped its sandbox and accessed restricted files without being instructed to do so. These cases show that autonomous agents can develop capabilities beyond their creators' expectations. When deliberately malicious design enters the picture, the risks become far more severe.

To gain and maintain access and to operate, agentic AI embeds NHIs in sensitive workflows, moving data between resources via APIs, accessing sensitive data and operating at machine speed. These interactions also rely on cryptographic assets such as certificates and encryption keys. Unfortunately, the sheer size and complexity of modern enterprise IT architectures preclude most CISOs from having full visibility into their NHI and cryptographic environment, let alone a catalogue of what cryptography is deployed, where it's deployed, or whether it is still valid or effective. This lack of transparency underscores the growing need for a modern approach to NHI and cryptographic discovery (including inventorying your assets): You can't protect what you can't see.

Zero trust architectures, which demand continuous identification and authorization and grant only least-privilege access, have been widely adopted, but most implementations stop at human identity.
Automated processes often retain broad authorization privileges without expiration, attestation or accountability, bypassing protocols designed to protect our systems. Without real-time visibility into what NHIs are being used for, and how and when, authentication alone provides a false sense of security. We are now seeing the rise of a security kill chain: over-permissioned service accounts (NHIs created by applications for resource access and automation), sensitive credentials written into code, and inactive or expired certificates. There is no malware or obvious exploit, just poor NHI and cryptographic governance hygiene, often inherited across teams and environments. It's a silent failure state that can easily cascade into catastrophic loss.

The implications go beyond corporate data breaches. Consider critical infrastructure: electric grids, emergency communication systems and defense logistics. A credential compromise here doesn't just threaten uptime, it threatens lives.

Policy-makers and regulators are beginning to act. US government-issued mandates like NSM-10, EO 14028, and OMB M-23-02 now require real-time cryptographic inventorying to strengthen national cybersecurity. They recognize that without an accurate, up-to-date understanding of your cryptographic assets and who is accessing them, compliance and security are impossible. Cryptography enshrines and enforces identification (authentication) and access rights for both humans and non-humans (agents). If cryptography and access rights are not correctly mapped and maintained, we are issuing an open invitation for AI agents to go rogue and for human hackers to compromise our systems. More recent executive orders go further, urging automation in cryptographic management and accelerating the transition to quantum-resistant algorithms (also known as post-quantum cryptography). A quantum computer capable of breaking RSA or ECC would allow those harvesting encrypted traffic today to decrypt all sorts of data later.

Addressing this challenge begins with visibility: Organizations must be able to discover and map their NHIs and cryptographic assets. From there, identity-aware access controls, automated key management, and policy enforcement across systems and partners are essential. Finally, preparing for quantum resilience through the adoption of NIST-standardized post-quantum algorithms such as ML-KEM and ML-DSA will help ensure long-term security.

In a world increasingly shaped by AI, security resilience - spearheaded by robust NHI and cryptographic governance - is national resilience. We cannot afford to treat it as a back-end compliance task. It must be front and centre: in every boardroom, security program and digital transformation initiative. Because when cryptography fails, everything else falls with it.
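As a small illustration of the "you can't protect what you can't see" point, the sketch below checks the TLS certificates of a list of endpoints and flags any that are expired or close to expiry. It uses only the Python standard library; the endpoint list and the 30-day warning threshold are illustrative assumptions, not requirements drawn from any of the mandates mentioned above.

```python
import socket
import ssl
from datetime import datetime, timedelta, timezone

def certificate_expiry(host: str, port: int = 443, timeout: float = 5.0) -> datetime:
    """Fetch a server's TLS certificate and return its expiry time (UTC)."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'notAfter' looks like 'Jun  1 12:00:00 2026 GMT'
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    return datetime.fromtimestamp(expires, tz=timezone.utc)

def audit(hosts: list[str], warn_within_days: int = 30) -> None:
    """Print a minimal inventory line per endpoint, flagging expiring certificates."""
    now = datetime.now(timezone.utc)
    for host in hosts:
        try:
            expiry = certificate_expiry(host)
        except (OSError, ssl.SSLError) as exc:
            print(f"{host}: UNREACHABLE or TLS error ({exc})")
            continue
        status = "OK"
        if expiry <= now:
            status = "EXPIRED"
        elif expiry <= now + timedelta(days=warn_within_days):
            status = "EXPIRING SOON"
        print(f"{host}: {status}, valid until {expiry:%Y-%m-%d}")

if __name__ == "__main__":
    # Hypothetical endpoints; a real inventory would come from discovery tooling,
    # and would also cover internal services, code-signing keys, and API credentials.
    audit(["example.com", "www.python.org"])
```

A production cryptographic inventory obviously covers far more than public TLS endpoints, but even this narrow slice surfaces the "inactive or expired certificates" hygiene failures the article describes.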
[4]
Agentic AI security breaches are coming: 7 ways to make sure it's not your firm
AI agents -- task-specific models designed to operate autonomously or semi-autonomously given instructions -- are being widely implemented across enterprises (up to 79% of companies surveyed for a PwC report earlier this year). But they're also introducing new security risks. When an agentic AI security breach happens, companies may be quick to fire employees and assign blame, but slower to identify and fix the systemic failures that enabled it. Forrester's Predictions 2026: Cybersecurity and Risk predicts that the first agentic AI breach will lead to dismissals, adding that geopolitical turmoil and the pressure on CISOs and CIOs to deploy agentic AI quickly, while minimizing the risks, will only raise the stakes.

CISOs are in for a challenging 2026

CISOs at organizations that compete globally are in for an especially tough next twelve months as governments move to more tightly regulate and outright control critical communication infrastructure. Forrester also predicts the EU will establish its own known-exploited-vulnerability database, which would translate into immediate demand for regionalized security pros that CISOs will need to find, recruit, and hire fast if the prediction materializes. Forrester also predicts that quantum-security spending will exceed 5% of overall IT security budgets, a plausible outcome given researchers' steady progress toward quantum-resistant cryptography and enterprises' urgency to pre-empt the 'harvest now, decrypt later' threat. Of the five major challenges CISOs will face in 2026, none is more lethal, or has more potential to completely reorder the threat landscape, than agentic AI breaches and the next generation of weaponized AI.

How CISOs are tackling agentic AI threats head-on

"The adoption of agentic AI introduces entirely new security threats that bypass traditional controls. These risks span data exfiltration, autonomous misuse of APIs, and covert cross-agent collusion, all of which could disrupt enterprise operations or violate regulatory mandates," Jerry R. Geisler III, Executive Vice President and Chief Information Security Officer at Walmart Inc., told VentureBeat in a recent interview. Geisler continued, articulating Walmart's direction: "Our strategy is to build robust, proactive security controls using advanced AI Security Posture Management (AI-SPM), ensuring continuous risk monitoring, data protection, regulatory compliance and operational trust."

Implicit in agentic AI are the risks of what happens when agents don't get along, compete for resources, or, worse, lack the basic architecture to ensure minimum viable security (MVS). Forrester defines MVS as an approach to integrating security "in early-stage concept testing, without slowing down the product team," adding that "as the product evolves from early-stage concept testing to an alpha release to a beta release and onward, MVS security activities also evolve, until it is time to leave MVS behind."

Sam Evans, CISO of Clearwater Analytics, provided insights into how he addressed the challenge in a recent VentureBeat interview. "I remember when one of the first board meetings I was in, they asked me, 'So what are your thoughts on ChatGPT?' I said, 'Well, it's an incredible productivity tool. However, I don't know how we could let our employees use it, because my biggest fear is somebody copies and pastes customer data into it, or our source code, which is our intellectual property.'" Evans' company manages $8.8 trillion in assets.
"The worst possible thing would be one of our employees taking customer data and putting it into an AI engine that we don't manage," Evans told VentureBeat. "The employee not knowing any different or trying to solve a problem for a customer...that data helps train the model." Evans elaborated, "But I didn't just come to the board with my concerns and problems. I said, 'Well, here's my solution. I don't want to stop people from being productive, but I also want to protect it.' When I came to the board and explained how these enterprise browsers work, they're like, 'Okay, that makes much sense, but can you really do it?' Following the board meeting, Evans and his team began an in-depth and comprehensive due diligence process that resulted in Clearwater choosing Island. Boardrooms are handing CISOs a clear, urgent mandate: secure the latest wave of AI and agentic‑AI apps, tools and platforms so organizations can unlock productivity gains immediately without sacrificing security or slowing innovation. The velocity of agent deployments across enterprises has pushed the pressure to deliver value at breakneck speed higher than it's ever been. As George Kurtz, CEO and founder of CrowdStrike, said in a recent interview: "The speed of today's cyberattacks requires security teams to rapidly analyze massive amounts of data to detect, investigate, and respond faster. Adversaries are setting records, with breakout times of just over two minutes, leaving no room for delay." Productivity and security are no longer separate lanes; they're the same road. Move fast or the competition and the adversaries will move past you is the message boards are delivering to CISOs today. Walmart's CISO keeps the intensity up on innovation Geisler puts a high priority on keeping a continual pipeline of innovative new ideas flowing at Walmart. "An environment of our size requires a tailor-made approach, and interestingly enough, a startup mindset. Our team often takes a step back and asks, "If we were a new company and building from ground zero, what would we build?" Geisler continued, "Identity & access management (IAM) has gone through many iterations over the past 30+ years, and our main focus is on how to modernize our IAM stack to simplify it. While related to yet different from Zero Trust, our principle of least privilege won't change." Walmart has turned innovation into a practical, pragmatic strategy for continually hardening its defenses while reducing risk, all while making major contributions to the growth of the business. Having created a process that can do this at scale in an agentic AI era is one of the many ways cybersecurity delivers business value to the company. VentureBeat continues to see companies, including Clearwater Analytics, Walmart, and many others, putting cyberdefenses in place to counter agentic AI cyberattacks. Of the many interviews we've had with CISOs and enterprise security teams, seven battle-tested ways emerge of how enterprises are securing themselves against potential agentic AI attacks. Seven ways CISOs are securing their firms now From in-depth conversations with CISOs and security leaders, seven proven strategies emerge for protecting enterprises against imminent agentic AI threats: 1. Visibility is the first line of defense. "The rising use of multi‑agent systems will introduce new attack vectors and vulnerabilities that could be exploited if they aren't secured properly from the start," Nicole Carignan, VP Strategic Cyber AI at Darktrace, told VentureBeat earlier this year. 
An accurate, real-time inventory that identifies every deployed system and tracks decision and system interdependencies down to the agentic level, while also mapping unintended interactions at that level, is now foundational to enterprise resilience.

2. Reinforce API security now and build the organizational muscle memory to keep it secure. Security and risk management professionals from financial services, retail and banking who spoke with VentureBeat on condition of anonymity emphasized the importance of continuously monitoring risk at the API layer, stating their strategy is to leverage advanced AI Security Posture Management (AI-SPM) to maintain visibility, enforce regulatory compliance, and preserve operational trust across complex environments. APIs represent the front lines of agentic risk, and strengthening their security transforms them from integration points into strategic enforcement layers.

3. Manage autonomous identities as a strategic priority. "Identity is now the control plane for AI security. When an AI agent suddenly accesses systems outside its established pattern, we treat it identically to a compromised employee credential," said Adam Meyers, Head of Counter-Adversary Operations at CrowdStrike, during a recent interview with VentureBeat. In the era of agentic AI, the traditional IAM playbook is obsolete. Enterprises must deploy IAM frameworks that scale to millions of dynamic identities, enforce least privilege continuously, integrate behavioral analytics for machines and humans alike, and revoke access in real time. Only by elevating identity management from an operational cost center to a strategic control plane will organizations tame the velocity, complexity and risk of autonomous systems. (A minimal illustration of this pattern appears below.)

4. Upgrade to real-time observability for rapid threat detection. Static logging belongs to another era of cybersecurity. In an agentic environment, observability must evolve into a live, continuously streaming intelligence layer that captures the full scope of system behavior. The enterprises that fuse telemetry, analytics, and automated response into a single, adaptive feedback loop capable of spotting and containing anomalies in seconds rather than hours stand the best chance of thwarting an agentic AI attack.

5. Embed proactive oversight to balance innovation with control. No enterprise ever excelled against its growth targets by ignoring the guardrails of the technologies it used to get there, and for agentic AI those guardrails are core to getting the most value possible out of the technology. CISOs who lead effectively in this new landscape ensure human-in-the-middle workflows are designed in from the beginning. Oversight at the human level also creates clear decision points that surface issues early, before they spiral. The result? Innovation can run at full throttle, knowing proactive oversight will tap the brakes just enough to keep the enterprise safely on track.

6. Make governance adaptive to match AI's rapid deployment. Static, inflexible governance might as well be yesterday's newspaper: outdated the moment it's printed. In an agentic world moving at machine speed, compliance policies must adapt continuously, embedded in real-time operational workflows rather than stored on dusty shelves. The CISOs making the most impact understand governance isn't just paperwork; it's code, it's culture, it's integrated directly into the heartbeat of the enterprise to keep pace with every new deployment.

7. Engineer incident response ahead of machine-speed threats.
The worst time to plan your incident response? When your Active Directory and other core systems have already been compromised by an agentic AI breach. Forward-thinking CISOs build, test, and refine their response playbooks before agentic threats hit, integrating automated processes that respond at the speed of the attacks themselves. Incident readiness isn't a fire drill; it needs to be muscle memory, an always-on discipline woven into the enterprise's operational fabric, so that when threats inevitably arrive, the team is calm, coordinated, and already one step ahead.

Agentic AI is reordering the threat landscape in real time

As Forrester predicts, the first major agentic breach won't just claim jobs; it'll expose every organization that chose inertia over initiative, shining a harsh spotlight on overlooked gaps in governance, API security, identity management, and real-time observability. Meanwhile, quantum threats are driving budget allocations higher, forcing security leaders to act urgently before their defenses become obsolete overnight. The CISOs who win this race are already mapping their systems in real time, embedding governance into their operational core, and weaving proactive incident response into the fabric of their daily operations. Enterprises that embrace this proactive stance will turn risk management into a strategic advantage, staying steps ahead of both competitors and adversaries.
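As a minimal illustration of treating an out-of-pattern access as a compromised credential (point 3 above), the sketch below builds a per-agent baseline of the systems it normally touches and revokes the agent when it strays outside that baseline. The learning window, the revoke action, and the print-based alerting are illustrative assumptions, not a description of CrowdStrike's or any other vendor's implementation.

```python
from collections import defaultdict

class AgentAccessMonitor:
    """Tracks which systems each AI agent normally touches and flags deviations."""

    def __init__(self, learning_events: int = 50):
        self.learning_events = learning_events            # events observed before enforcing
        self.baseline: dict[str, set[str]] = defaultdict(set)
        self.observed: dict[str, int] = defaultdict(int)
        self.revoked: set[str] = set()

    def record(self, agent_id: str, system: str) -> bool:
        """Return True if the access is allowed, False if the agent was revoked."""
        if agent_id in self.revoked:
            return False
        self.observed[agent_id] += 1
        if self.observed[agent_id] <= self.learning_events:
            self.baseline[agent_id].add(system)           # still learning this agent's pattern
            return True
        if system not in self.baseline[agent_id]:
            # Out-of-pattern access: treat it like a compromised credential and revoke.
            self.revoked.add(agent_id)
            print(f"ALERT: {agent_id} accessed {system} outside its baseline; access revoked")
            return False
        return True

# Usage: after a short learning window of normal behavior, a jump to the HR database is blocked.
monitor = AgentAccessMonitor(learning_events=3)
for system in ["crm", "ticketing", "crm"]:
    monitor.record("sales-agent-7", system)
monitor.record("sales-agent-7", "hr-database")   # flagged and revoked
```

Real deployments would use behavioral analytics far richer than a set of system names, but even this toy version captures the core move: baseline, detect the deviation, revoke in real time.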
As autonomous AI agents become more prevalent in enterprise environments, they introduce unprecedented security challenges. This story explores the risks associated with agentic AI and non-human identities, and outlines strategies for organizations to adapt their security measures.
The rapid adoption of artificial intelligence in enterprise settings has ushered in a new era of autonomous AI agents. These agents, capable of executing complex tasks without human intervention, are revolutionizing business operations. According to Gartner, agentic AI is predicted to be integrated into 33% of enterprise applications by 2028, up from less than 1% in 2024 [3]. This dramatic shift is introducing unprecedented cybersecurity challenges that organizations must address urgently.
Source: Bleeping Computer
As AI agents become more prevalent, they're creating a new category of security concern: non-human identities (NHIs). These include API keys, service accounts, and authentication tokens that AI agents use to interact with various systems [2]. Unlike traditional user accounts, NHIs often receive broad, persistent access to sensitive data and systems without the usual safeguards applied to human users.

The proliferation of NHIs is expanding attack surfaces at an alarming rate. Many organizations lack visibility into these identities, creating security blind spots that could be exploited by malicious actors. As Ido Shlomo, Co-Founder and CTO of Token Security, points out, "Most companies don't have a clean process to retire AI agents when they're no longer needed" [2].

The autonomous nature of AI agents introduces new vectors for potential security breaches. Unlike traditional scripts or workflows, agentic AI can reason, plan, and adapt its actions, making it challenging to predict and control its behavior. This flexibility, while powerful for business operations, can be dangerous if misused or compromised.

Historical examples of autonomous systems exceeding their intended boundaries, such as the Morris Worm and Stuxnet, serve as cautionary tales for the potential risks of agentic AI [3]. The speed at which these agents operate further compounds the risk, as they can execute actions faster than human monitoring can keep up.

Organizations must evolve their security approaches to address the unique challenges posed by AI agents. Key strategies include:

Source: VentureBeat
- Creating AI agent inventories: Maintaining a comprehensive list of active agents, their purposes, owners, permissions, and lifespans is crucial for effective management [2].
- Implementing robust governance: Establishing systems to automatically limit scope, log behavior, and shut down rogue processes is essential [2].
- Enhancing visibility: Deploying tools for real-time cryptographic inventorying and monitoring of NHIs is critical for maintaining security [3].
- Adopting Zero Trust architectures: Extending zero trust principles to include non-human identities can help mitigate risks [3] (see the sketch after this list).
- Leveraging AI Security Posture Management (AI-SPM): As noted by Jerry R. Geisler III of Walmart Inc., implementing AI-SPM ensures "continuous risk monitoring, data protection, regulatory compliance and operational trust" [4].
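Extending zero trust to non-human identities in practice tends to mean short-lived, narrowly scoped credentials that are verified on every request rather than standing secrets. The sketch below illustrates that idea with a signed, expiring token for an agent identity; the HMAC construction, the scope names, and the 15-minute lifetime are illustrative assumptions, not a recommendation of a particular token format.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"rotate-me-regularly"   # in practice this would come from a secrets manager

def issue_token(agent_id: str, scopes: list[str], ttl_seconds: int = 900) -> str:
    """Mint a short-lived, least-privilege token for a non-human identity."""
    claims = {"sub": agent_id, "scopes": scopes, "exp": int(time.time()) + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return f"{body.decode()}.{sig}"

def verify(token: str, required_scope: str) -> bool:
    """Check signature, expiry, and scope on every request -- never trust a standing credential."""
    try:
        body, sig = token.rsplit(".", 1)
    except ValueError:
        return False
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(body.encode()))
    if claims["exp"] < time.time():
        return False                       # expired: forces re-issuance and review
    return required_scope in claims["scopes"]

# Usage: the reporting agent can read analytics but cannot write to the CRM.
token = issue_token("reporting-agent", scopes=["analytics:read"])
print(verify(token, "analytics:read"))   # True
print(verify(token, "crm:write"))        # False
```

A production deployment would more likely lean on an established standard such as OAuth 2.0 client credentials or SPIFFE workload identities; the point here is only the short lifetime, the narrow scope, and the per-request verification.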
CISOs and other security leaders face mounting pressure to secure AI implementations while enabling rapid innovation. Forrester predicts that the first major agentic AI breach will likely lead to dismissals, highlighting the high stakes involved [4].

Regulatory bodies are beginning to respond to these emerging threats. U.S. government mandates like NSM-10 and EO 14028 now require real-time cryptographic inventorying to strengthen national cybersecurity [3]. As the landscape evolves, organizations must stay ahead of both technological advancements and regulatory requirements to ensure robust security in the age of agentic AI.