5 Sources
[1]
Microsoft and ServiceNow's exploitable agents reveal a growing - and preventable - AI security crisis
Cybersecurity pros should adopt a "least privilege" posture for AI agents.

Could agentic AI turn out to be every threat actor's fantasy? I suggested as much in my recent "10 ways AI can inflict unprecedented damage in 2026." Once deployed on corporate networks, AI agents with broad access to sensitive systems of record can enable the sort of lateral movement across an organization's IT estate that most threat actors dream of.

According to Jonathan Wall, founder and CEO of Runloop -- a platform for securely deploying AI agents -- lateral movement should be of grave concern to cybersecurity professionals in the context of agentic AI. "Let's say a malicious actor gains access to an agent but it doesn't have the necessary permissions to go touch some resource," Wall told ZDNET. "If, through that first agent, a malicious agent is able to connect to another agent with a [better] set of privileges to that resource, then he will have escalated his privileges through lateral movement and potentially gained unauthorized access to sensitive information."

Meanwhile, the idea of agentic AI is so new that many of the workflows and platforms for developing and securely provisioning those agents have not yet considered all the ways a threat actor might exploit their existence. It's eerily reminiscent of software development's early days, when few programmers knew how to code software without leaving gaping holes through which hackers could drive a proverbial Mack truck.

Google's cybersecurity leaders recently identified shadow agents as a critical concern. "By 2026, we expect the proliferation of sophisticated AI agents will escalate the shadow AI problem into a critical 'shadow agent' challenge.
In organizations, employees will independently deploy these powerful, autonomous agents for work tasks, regardless of corporate approval," wrote the experts in Google's Mandiant and threat intelligence organizations. "This will create invisible, uncontrolled pipelines for sensitive data, potentially leading to data leaks, compliance violations, and IP theft."

Meanwhile, 2026 is hardly out of the gates and, judging by two separate cybersecurity cases having to do with agentic AI -- one involving ServiceNow and the other Microsoft -- the agentic surface of any IT estate will likely become the juicy target that threat actors are seeking, one that's full of easily exploited lateral opportunities. Since the two agentic AI-related issues -- both involving agent-to-agent interactions -- were first discovered, ServiceNow has plugged its vulnerabilities before any customers were known to have been impacted, and Microsoft has issued guidance to its customers on how best to configure its agentic AI management control plane for tighter agent security.

Earlier this month, AppOmni Labs chief of research Aaron Costello disclosed for the first time a detailed explanation of how he discovered an agentic AI vulnerability on ServiceNow's platform, one that held such potential for harm that AppOmni gave it the name "BodySnatcher."

"Imagine an unauthenticated attacker who has never logged into your ServiceNow instance and has no credentials, and is sitting halfway across the globe," wrote Costello in a post published to the AppOmni Labs website. "With only a target's email address, the attacker can impersonate an administrator and execute an AI agent to override security controls and create backdoor accounts with full privileges. This could grant nearly unlimited access to everything an organization houses, such as customer Social Security numbers, healthcare information, financial records, or confidential intellectual property."
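The escalation path Wall describes -- a low-privilege agent borrowing a better-privileged agent's access -- is a classic confused-deputy problem. The toy Python sketch below (all agent names and the `read`/`read_safely` methods are hypothetical illustrations, not any vendor's API) shows why a downstream agent must authorize against the original caller's privileges, not just its own:

```python
# Toy confused-deputy sketch; agent names and methods are hypothetical.

class Agent:
    def __init__(self, name, privileges):
        self.name = name
        self.privileges = set(privileges)

    def read(self, resource):
        # Vulnerable pattern: the agent checks only ITS OWN privileges,
        # so any caller inherits whatever this agent can reach.
        if resource in self.privileges:
            return f"{resource} contents"
        raise PermissionError(f"{self.name} cannot read {resource}")

    def read_safely(self, resource, on_behalf_of):
        # Safer pattern: effective privileges are the intersection of the
        # whole chain, so routing through a stronger agent gains nothing.
        effective = self.privileges & on_behalf_of.privileges
        if resource in effective:
            return f"{resource} contents"
        raise PermissionError(
            f"denied for chain {on_behalf_of.name} -> {self.name}")

helpdesk = Agent("helpdesk", ["tickets"])     # compromised, low-privilege
hr = Agent("hr", ["tickets", "payroll"])      # better-privileged peer

print(hr.read("payroll"))          # lateral movement: escalation succeeds
try:
    hr.read_safely("payroll", on_behalf_of=helpdesk)
except PermissionError as err:
    print("blocked:", err)         # chain check stops the escalation
```

Intersecting privilege sets along the call chain is one simple way to make lateral movement gain an attacker nothing; a real deployment would propagate a signed caller context rather than in-process object references.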
(AppOmni Labs is the threat intelligence research arm of AppOmni, an enterprise cybersecurity solution provider.)

The vulnerability's severity cannot be overstated. Whereas the vast majority of breaches involve the theft of one or more highly privileged digital credentials (credentials that afford threat actors access to sensitive systems of record), this vulnerability -- requiring only the target's easily acquired email address -- left the front door wide open.

"BodySnatcher is the most severe AI-driven vulnerability uncovered to date," Costello told ZDNET. "Attackers could have effectively 'remote controlled' an organization's AI, weaponizing the very tools meant to simplify the enterprise."

"This was not an isolated incident," Costello noted. "It builds upon my previous research into ServiceNow's Agent-to-Agent discovery mechanism, which, in a nearly textbook definition of lateral movement risk, detailed how attackers can trick AI agents into recruiting more powerful AI agents to fulfill a malicious task."

Fortunately, this was one of the better examples of a cybersecurity researcher discovering a severe vulnerability before threat actors did. "At this time, ServiceNow is unaware of this issue being exploited in the wild against customer instances," noted ServiceNow in a January 2026 post regarding the vulnerability. "In October 2025, we issued a security update to customer instances that addressed the issue," a ServiceNow spokesperson told ZDNET.

According to the aforementioned post, ServiceNow recommends "that customers promptly apply an appropriate security update or upgrade if they have not already done so." That advice, according to the spokesperson, is for customers who self-host their instances of ServiceNow.
For customers using the cloud (SaaS) version operated by ServiceNow, the security update was automatically applied.

In the case of the Microsoft agent-to-agent issue (Microsoft views it as a feature, not a bug), the backdoor opening appears to have been similarly discovered by cybersecurity researchers before threat actors could exploit it. In this case, Google News alerted me to a CybersecurityNews.com headline that stated, "Hackers Exploit Copilot Studio's New Connected Agents Feature to Gain Backdoor Access." Fortunately, the "hackers" in this case were ethical white-hat hackers working for Zenity Labs. "To clarify, we did not observe this being exploited in the wild," Zenity Labs co-founder and CTO Michael Bargury told ZDNET. "This flaw was discovered by our research team."

This caught my attention because I'd recently reported on the lengths to which Microsoft was going to make it possible for all agents -- whether built with Microsoft development tools like Copilot Studio or not -- to get their own human-like managed identities and credentials with the help of the Agent ID feature of Entra, Microsoft's cloud-based identity and access management solution.

Why is something like that necessary? Between the advertised productivity boosts associated with agentic AI and executive pressure to make organizations more profitable through AI, organizations are expected to employ many more agents than people in the near future. For example, IT research firm Gartner told ZDNET that CIOs expect that by 2030, 0% of IT work will be done by humans without AI, 75% will be done by humans augmented with AI, and 25% will be done by AI alone.
In response to the anticipated sprawl of agentic AI, the key players in the identity industry -- Microsoft, Okta, Ping Identity, Cisco, and the OpenID Foundation -- are offering solutions and recommendations to help organizations tame that sprawl and prevent rogue agents from infiltrating their networks. In my research, I also learned that any agents forged with Microsoft's development tools, such as Copilot Studio or Azure AI Foundry, are automatically registered in Entra's Agent Registry.

So, I wanted to find out how it was that agents forged with Copilot Studio -- agents that theoretically had their own credentials -- were somehow exploitable in this hack. Theoretically, the entire point of registering an identity is to easily track that identity's activity -- legitimately directed or misguided by threat actors -- on the corporate network. It seemed to me that something was slipping through the very agentic safety net Microsoft was trying to put in place for its customers. Microsoft even offers its own security agents whose job it is to run around the corporate network like white blood cells tracking down any invasive species.

As it turns out, an agent built with Copilot Studio has a "connected agent" feature that allows other agents, whether registered with the Entra Agent Registry or not, to laterally connect to it and leverage its knowledge and capabilities. As reported in CybersecurityNews, "According to Zenity Labs, [white hat] attackers are exploiting this gap by creating malicious agents that connect to legitimate, privileged agents, particularly those with email-sending capabilities or access to sensitive business data." Zenity has its own post on the subject, appropriately titled "Connected Agents: The Hidden Agentic Puppeteer."
Even worse, CybersecurityNews reported that "By default, [the Connected Agents feature] is enabled on all new agents in Copilot Studio." In other words, when a new agent is created in Copilot Studio, it is automatically enabled to receive connections from other agents. I was incredibly surprised to read this, given that two of the three pillars of Microsoft's Secure Future Initiative are "Secure by Default" and "Secure by Design." I decided to check with Microsoft.

"Connected Agents enable interoperability between AI agents and enterprise workflows," a Microsoft spokesperson told ZDNET. "Turning them off universally would break core scenarios for customers who rely on agent collaboration for productivity and security orchestration. This allows control to be delegated to IT admins."

In other words, Microsoft doesn't view it as a vulnerability. And Zenity's Bargury agrees. "It isn't a vulnerability," he told ZDNET. "But it is an unfortunate mishap that creates risk. We've been working with the Microsoft team to help drive a better design."

Even after I suggested to Microsoft that this might not be secure by default or design, Microsoft was firm and recommended that "for any agent that uses unauthenticated tools or accesses sensitive knowledge sources, disable the Connected Agents feature before publishing [an agent]. This prevents exposure of privileged capabilities to malicious agents."

I also inquired about the ability to monitor agent-to-agent activity, with the idea that maybe IT admins could be alerted to potentially malicious interactions or communications. "Secure use of agents requires knowing everything they do, so you can analyze, monitor, and steer them away from harm," said Bargury. "It has to start with detailed tracing.
This finding spotlights a major blind spot [in how Microsoft's connected agents feature works]."

The response from a Microsoft spokesperson was that "Entra Agent ID provides an identity and governance path, but it does not, on its own, produce alerts for every cross-agent exploit without external monitoring configured. Microsoft is continually expanding protections to give defenders more visibility and control over agent behavior to close these kinds of exploits."

When confronted with the idea of agents that were open to connection by default, Runloop's Wall recommended that organizations always adopt a "least privilege" posture when developing AI agents or using canned, off-the-shelf ones. "The principle of least privilege basically says that you start off in any sort of execution environment giving an agent access to almost nothing," said Wall. "And then, you only add privileges that are strictly necessary for it to do its job."

Sure enough, I looked back at the interview I did with Alex Simons, Microsoft corporate vice president of AI Innovations, for my coverage of the improvements the company made to its Entra IAM platform to support agent-specific identities. In that interview, where he described Microsoft's objectives for managing agents, Simons said that one of three challenges they were looking to solve was "to manage the permissions of those agents and make sure that they have a least privilege model where those agents are only allowed to do the things that they should do. If they start to do things that are weird or unusual, their access is automatically cut off."

Of course, there's a big difference between "can" and "do," which is why, in the name of least-privilege best practices, all agents should, as Wall suggested, start out without the ability to receive inbound connections and then have privileges added only as necessary.
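Wall's "start with almost nothing" advice can be expressed as a default-deny agent policy. The sketch below is illustrative only -- the `AgentPolicy` class and its fields are hypothetical, not Copilot Studio's actual configuration surface. Every capability, including accepting inbound agent-to-agent connections, begins disabled, and each grant records a justification:

```python
# Hypothetical default-deny policy; not an actual Copilot Studio setting.
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    name: str
    allowed_tools: set = field(default_factory=set)   # nothing by default
    allowed_data: set = field(default_factory=set)    # nothing by default
    accept_inbound_connections: bool = False          # closed by default

    def grant_tool(self, tool, justification):
        # Each privilege is added individually with a recorded reason,
        # so later reviews can revoke anything lacking a justification.
        self.allowed_tools.add(tool)
        return (self.name, tool, justification)

policy = AgentPolicy(name="invoice-triage")
audit_log = [policy.grant_tool("read_invoices",
                               "core task: classify invoices")]

# Receiving inbound agent-to-agent connections must be an explicit opt-in.
assert policy.accept_inbound_connections is False
```

The point of the pattern is that the blast radius of a compromised agent is bounded by what was explicitly granted, and the audit trail of justifications makes later privilege reviews mechanical.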
[2]
AI Agent Identity Management: A New Security Control Plane for CISOs
Security leaders have spent years hardening identity controls for employees and service accounts. That model is now showing its limits. A new class of identity is rapidly spreading across enterprise environments: autonomous AI agents. Custom GPTs, copilots, coding agents running MCP servers, and purpose-built AI agents are no longer confined to experimentation. They are running and expanding in production, interacting with sensitive systems and infrastructure, invoking other agents, and making decisions and changes without direct human oversight.

Yet in most organizations, these agents exist almost entirely outside established identity governance. Traditional IAM, PAM, and IGA platforms were not designed for agents that are autonomous, decentralized, and adaptive. The result is a growing identity gap that introduces real security and compliance risk, together with efficiency and effectiveness challenges.

Historically, enterprises managed two identity types: humans and machines. Identities that serve human access are centrally governed, role-based, and relatively predictable. Machine and workload identities operate at scale but tend to be deterministic and repetitive, performing narrowly defined tasks. AI agents fit neither and both categories at once. They are goal-driven and role-based, capable of adapting behavior based on intent and context, and able to chain actions across multiple systems. At the same time, they operate continuously and at machine speed and scale.

This hybrid nature fundamentally alters the risk profile. AI agents inherit the intent-driven actions of human users while retaining the reach and persistence of machine identities. Treating them as conventional non-human identities creates blind spots. Over-privileging becomes the default. Ownership becomes unclear. Behavior drifts from original intent. These are not theoretical concerns.
They are the same conditions that have driven many identity-related breaches in the past, now amplified by autonomy and scale.

What makes this challenge urgent is not just what AI agents are, but how quickly they are spreading. Enterprises that believe they have just a few AI agents often discover hundreds or thousands once they look closely. Employees build custom GPTs. Developers spin up MCP servers locally. Business units integrate AI tools directly into workflows. Cleanup rarely happens. Security teams are left unable to answer basic questions: which agents exist, who owns them, and what they can access. This lack of visibility creates identity sprawl at machine speed. And as attackers have demonstrated repeatedly, abusing unmanaged credentials is often easier than exploiting software vulnerabilities.

Identity risk accumulates over time. This is why organizations use joiner, mover, and leaver processes for their workforce and lifecycle controls for service accounts. AI agents experience the same dynamics, but compressed into minutes, hours, or days. AI agents are created quickly, modified frequently, and often abandoned silently. Access persists. Ownership disappears. Quarterly access reviews and periodic certifications cannot keep pace.

AI agent identity lifecycle management addresses this gap by treating AI agents as first-class identities, governed continuously and in near real time from creation through usage to decommissioning. The goal is not to slow adoption, but to apply familiar identity principles, such as visibility, accountability, least privilege, and auditability, in a way that works for autonomous systems. Download Token Security's latest asset, an eBook designed to help you shape lifecycle management for your AI agent identities from end to end.

Every identity control framework begins with discovery. Yet most AI agents never pass through formal provisioning or registration workflows.
They run across cloud platforms, SaaS tools, developer environments, and local machines, making them invisible to traditional IAM systems. From a Zero Trust perspective, this is a fundamental failure. An identity that cannot be seen cannot be governed, monitored, or audited. Shadow AI agents become unmonitored entry points into sensitive systems, often with broad permissions. Effective discovery must be continuous and behavior-based. Quarterly scans and static inventories are insufficient when new agents can appear and disappear in a matter of minutes.

One of the oldest identity risks is the orphaned account. AI agents dramatically increase both its frequency and its impact. AI agents are often created for narrow use cases or short-lived projects. When employees change roles or leave, or simply grow tired of an AI product that hasn't evolved, the agents they built frequently persist. Their credentials remain valid. Their permissions remain unchanged. No one remains accountable. An autonomous agent without an owner should be treated as a compromised identity. Lifecycle governance must enforce ownership and maintenance as a core requirement, flagging agents tied to departed users or inactive projects before they become liabilities.

AI agents are almost always over-privileged, not out of negligence, but out of uncertainty and the desire to explore. Since their behavior can adapt, teams often grant broad access to avoid breaking workflows. This approach is risky. An over-privileged agent can traverse systems faster than any human. In interconnected environments, a single agent can become the pivot point for widespread compromise or lateral movement. Least privilege for AI agents cannot be static. It must be continuously adjusted based on observed behavior. Permissions that go unused should be revoked. Elevated access should be temporary and purpose-bound. Without this, least privilege remains a policy statement rather than an enforced control.
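The "continuously adjusted" least privilege described above reduces to a simple loop: compare granted permissions against observed use and revoke whatever sits idle. A minimal sketch, assuming a hypothetical usage log keyed by permission name (the function and data model are illustrative, not any vendor's product):

```python
# Sketch of usage-based privilege tightening; data model is hypothetical.
from datetime import datetime, timedelta

def revoke_unused(granted, last_used, now, max_idle=timedelta(days=30)):
    """Split granted permissions into (kept, revoked) by observed usage."""
    kept, revoked = set(), set()
    for perm in granted:
        used_at = last_used.get(perm)
        if used_at is not None and now - used_at <= max_idle:
            kept.add(perm)          # recently exercised: retain
        else:
            revoked.add(perm)       # never used, or idle past the window
    return kept, revoked

now = datetime(2026, 2, 1)
granted = {"read_tickets", "send_email", "read_payroll"}
last_used = {
    "read_tickets": now - timedelta(days=2),    # active
    "send_email": now - timedelta(days=90),     # stale
}                                               # read_payroll: never used

kept, revoked = revoke_unused(granted, last_used, now)
print(sorted(kept))      # ['read_tickets']
print(sorted(revoked))   # ['read_payroll', 'send_email']
```

Run continuously rather than quarterly, this turns least privilege from a policy statement into an enforced control, which is exactly the gap the paragraph above identifies.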
As enterprises move toward multi-agent systems, traditional logging models break down. Actions span agents, APIs, and platforms. Without correlated identity context, investigations, forensics, and even compliance evidence-gathering become slow and incomplete. Traceability is not just a forensic requirement. Regulators increasingly expect organizations to explain how automated systems make decisions, especially when those decisions affect customers or regulated data. Without identity-centric audit trails, that expectation cannot be met.

AI agents are no longer emerging technology. They are becoming part of the enterprise operating model. As their autonomy grows, unmanaged identity becomes one of the largest sources of systemic risk. AI agent identity lifecycle management provides a pragmatic path forward. By treating AI agents as a distinct identity class and governing them continuously, organizations can regain control without stifling innovation. In an agent-driven enterprise, identity is no longer just an access mechanism. It is becoming the control plane for AI security. If you'd like more information on how Token Security is tackling AI security within the identity control plane, book a demo and we'll show you how our platform operates.
[3]
Redefining Security for the Agentic Era
The agentic era has arrived, and it's moving at machine speed. Imagine it's Tuesday morning. Your AI research agent, built to analyze competitive market data, quietly pulls payroll records. No alarms. No firewall trips. No malware. The agent didn't break the rules; it was simply reasoning its way toward a goal.

This is the new reality. We're moving from software that follows instructions to software that makes decisions. Software that reasons, acts, and operates at machine speed. Gartner predicts that by the end of this year, 40% of enterprises will run AI agents in production, up from less than 5% today. The workforce is about to expand from 8 billion human workers to what will feel like 80 billion digital agents operating across enterprise applications, browsers, workflows, and edge devices. Yet, according to Cisco's 2025 AI Readiness Index, only 24% of organizations that have deployed AI feel they have controls in place to govern agent actions with proper guardrails and live monitoring.

Today, I want to share how we're rethinking security for this new reality, advancing visibility, enforcement, and intent-aware network controls that make autonomous systems safe, accountable, and reliable.

The Limits of Traditional Security in an Agentic World

The challenge with AI agents is that they break the security paradigm we've relied upon as a security community over the last two decades. Traditional security operates on static indicators like allowlisting, blocklisting, IP restrictions, and signature-based detection. But an AI agent isn't a file. It's a flow, a chain of semantic instructions acting across systems. When a research agent pulls payroll data through encrypted channels, it looks perfectly legitimate to a firewall. To security teams, it's invisible. To the agent, it's just doing its job. That's the blind spot.
Between traffic that is encrypted and AI workflows that grow more autonomous, the ability to understand what's happening between agents, data, and services across the network collapses. Without understanding intent and context, we can't prevent even well-meaning agents from leaking sensitive data or poisoning their own outputs. As a result, many organizations are blocking agentic experimentation altogether, choosing stagnation over risk they can't measure. Security must move from only inspecting packets and endpoints to understanding intent and behavior at every layer.

Security That Understands Intent: SASE for the AI Era

We're announcing a fundamental shift in how we handle "intent" through our security architecture. The old approach asked "where is this data going?" The new approach asks "why is it going there?" When an agent talks to a tool, it isn't just sending data; it's sending instructions. Our new SASE capabilities are designed to be "AI-aware." Through Cisco Secure Access, we are moving beyond simple pattern matching to deep semantic inspection. With this, we will not just be looking at where a packet is going; we will be using Natural Language Processing (NLP) to understand why it's going there. This will allow us to detect context-driven risks, like prompt injection, cost harvesting, or unintended automation, in real time. We are effectively moving security from the "block/allow" era to the "See the Intent, Secure the Agent" era.

Building a Security Foundation for the Agentic Era

Understanding intent is only the beginning. To truly operationalize this vision of securing the agentic enterprise, we need a foundation built around three critical pillars:

* First, Identity is No Longer Just for People. Every agent needs a digital identity that can be authenticated, monitored, and revoked. If an agent starts behaving oddly, the system needs to be smart enough to pull its credentials immediately. Identity must become a dynamic layer of trust.
* Second, Security Must be at the Kernel, Not Just the Perimeter. Traditional firewalls weren't built for agents that reason, communicate through encrypted channels, and operate at machine speed. Cisco's Secure Firewall is AI-ready, re-architected for an encrypted, AI-accelerated, post-quantum world, inspecting MCP communications, detecting threats in encrypted traffic, and identifying emerging attack patterns as the threat landscape evolves. An intelligent, autonomous firewall built for the AI era, protecting data centers, cloud workloads, and edge deployments. But security can't stop at the perimeter. Our Hybrid Mesh Firewall architecture extends protection down to the kernel, using eBPF and Cilium to inspect workloads and see how AI applications behave before they reach the network perimeter. Thousands of enforcement points across infrastructure, watching and adapting in real time.

* Third, Turn Massive Growth of Data into an Advantage. The explosion of AI agents means an explosion of data. Most organizations are drowning in logs they can't store or analyze fast enough. Cisco and Splunk change that. Through the Cisco Data Fabric, we unify network, security, and access signals into one coherent view, analyzing data where it lives and turning noise into insight. Defense shifts from reactive to proactive.

A Platform Built for the Future

We're reimagining security for an era where the workforce is both human and digital, where the network doesn't just connect us, but protects the very intent of our work. Our goal is simple: meet you where you are, reduce the complexity that slows you down, and help you deliver better outcomes faster. 2026 is the year the agents arrive. With the right foundation, it's also the year your enterprise becomes more resilient, more innovative, and more secure than ever before.
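The identity pillar above -- authenticate, monitor, revoke -- can be illustrated with a toy revocable identity. This is a conceptual sketch, not Cisco's implementation; the class and method names are hypothetical:

```python
# Conceptual sketch of a revocable agent identity; not a Cisco API.

class AgentIdentity:
    def __init__(self, agent_id, allowed_actions):
        self.agent_id = agent_id
        self.allowed_actions = set(allowed_actions)
        self.credential_valid = True

    def perform(self, action):
        if not self.credential_valid:
            raise PermissionError("credential revoked")
        if action not in self.allowed_actions:
            # Out-of-scope behavior: pull the credential first,
            # investigate afterwards.
            self.credential_valid = False
            raise PermissionError(
                f"{action} outside declared scope; credential revoked")
        return f"{self.agent_id} performed {action}"

research = AgentIdentity("research-agent", {"read_market_data"})
research.perform("read_market_data")      # in scope: succeeds

try:
    research.perform("read_payroll")      # the Tuesday-morning scenario
except PermissionError:
    pass

assert research.credential_valid is False  # identity revoked immediately
```

The design choice worth noting is that revocation is tied to the identity rather than to any one network path, so a misbehaving agent is cut off everywhere at once instead of being blocked firewall by firewall.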
Disclaimer: Some of the features mentioned are still in development and will be made generally available as they are finalized, subject to ongoing evolution in development and innovation.
[4]
AI agents are about to make access control obsolete
How AI agents undermine static access controls through inference and context drift.

As enterprises integrate AI agents into their workflows, a silent shift is taking place. Security controls built on static access policies, designed for predictable behavior, are colliding with systems that reason instead of simply executing. AI agents, driven by outcomes rather than rules, are breaking the traditional identity and access management model.

Consider a retail company that deploys an AI sales assistant to analyze customer behavior and improve retention. The assistant doesn't have access to personally identifiable information (PII); it's restricted by design. Yet when asked to "find customers most likely to cancel premium subscriptions," it correlates activity logs, support tickets, and purchase histories across multiple systems. This generates a list of specific users inferred through behavior patterns, spending habits, and churn probability. No names or credit cards were exposed, but the agent effectively re-identified individuals through inference, reconstructing sensitive insights that the system was never meant to access and potentially exposing PII. While it didn't break access controls, it reasoned its way around them to reach information it was not originally scoped to access.

Unlike traditional software workflows, AI agents don't follow deterministic logic; they act on intent. When an AI system's goal is "maximize retention" or "reduce latency," it makes autonomous decisions about what data or actions it needs to achieve that outcome. Each decision might be legitimate in isolation, but together, they can expose information far beyond the agent's intended scope.

This is where context becomes an exploit surface. Traditional models focus on who can access what, assuming static boundaries. But in agentic systems, what matters is why the action occurs and how context changes as one agent invokes another.
When intent flows across layers, each reinterpreting the goal, the original user context is lost and privilege boundaries blur. The result isn't a conventional breach; it's a form of contextual privilege escalation where meaning, not access, becomes the attack vector.

Most organizations are learning that traditional RBAC (Role-Based Access Control) and ABAC (Attribute-Based Access Control) models can't keep up with dynamic reasoning. In classical applications, you can trace every decision back to a code path. In AI agents, logic is emergent and adaptive. The same prompt can trigger different actions depending on environment, prior interactions, or perceived goals. For example, a development agent trained to optimize cloud computing costs might start deleting logs used for audit purposes or backups. From a compliance perspective, that's catastrophic, but from the agent's reasoning, it's efficient. The security model assumes determinism; the agent assumes autonomy.

This mismatch exposes a flaw in how we model permissions. RBAC and ABAC answer "Should user X access resource Y?" In an agentic ecosystem, the question becomes "Should agent X be able to access more than resource Y, and why would it need that additional access?" That's not an access problem; it's a reasoning problem.

In distributed, multi-agent architectures, permissions evolve through interaction. Agents chain tasks, share outputs, and make assumptions based on others' results. Over time, those assumptions accumulate, forming contextual drift: a gradual deviation from the agent's original intent and authorized scope. Imagine a marketing analytics agent summarizing user behavior, feeding its output to a financial forecasting agent, which uses it to predict regional revenue. Each agent only sees part of the process. But together, they've built a complete, unintended picture of customer financial data. Every step followed policy. The aggregate effect broke it.
Contextual drift is the modern equivalent of configuration drift in DevOps, except here, it's happening at the cognitive layer. The security system sees compliance; the agent network sees opportunity. To address this new class of risk, organizations must shift from governing access to governing intent. A security framework for agentic systems should include:

* Intent Binding: Every action must carry the originating user's context, identity, purpose, and policy scope throughout the chain of execution.
* Dynamic Authorization: Move beyond static entitlements. Decisions must adapt to context, sensitivity, and behavior at runtime.
* Provenance Tracking: Keep a verifiable record of who initiated an action, which agents participated, and what data was touched.
* Human-in-the-Loop Oversight: For high-risk actions, require verification, especially when agents act on behalf of users or systems.
* Contextual Auditing: Replace flat logs with intent graphs that visualize how queries evolve into actions across agents.

Static permissions assume identity and intent remain constant. But agents operate in fluid, evolving contexts. They can spawn sub-agents, generate new workflows, or retrain on intermediate data, actions that continually redefine "access." By the time an identity system detects a security incident, the violation or breach has already occurred without a single permission being broken. That's why visibility and attribution must come first. Before enforcing policy, you must map the agent graph: what exists, what's connected, and who owns what.

Ironically, the same AI principles that challenge our controls can help restore them. Adaptive, policy-aware models can distinguish legitimate reasoning from suspicious inference. They can detect when an agent's intent shifts or when contextual drift signals rising risk.
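Intent binding, dynamic authorization, and provenance tracking can be sketched together: the originating user's context travels with every hop in the chain, each hop is authorized against that original scope, and every touch is recorded. All class, field, and agent names below are hypothetical illustrations, not a specific product's API:

```python
# Hypothetical illustration of intent binding + provenance tracking.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Intent:
    user: str
    purpose: str
    policy_scope: frozenset   # data classes this request may touch

@dataclass
class Provenance:
    hops: list = field(default_factory=list)   # (agent, data_class) pairs

def invoke(agent_name, data_class, intent, provenance):
    # Provenance tracking: record the hop whether or not it is allowed.
    provenance.hops.append((agent_name, data_class))
    # Dynamic authorization: every hop is checked against the ORIGINAL
    # intent, so drift across chained agents cannot widen the scope.
    if data_class not in intent.policy_scope:
        raise PermissionError(
            f"{agent_name} touched '{data_class}' outside the scope of "
            f"intent({intent.user}: {intent.purpose})")
    return f"{agent_name} ok"

intent = Intent("analyst", "churn analysis", frozenset({"activity_logs"}))
trail = Provenance()

invoke("marketing-agent", "activity_logs", intent, trail)     # allowed
try:
    invoke("forecast-agent", "financial_records", intent, trail)
except PermissionError:
    pass   # contextual drift caught at the hop, not after the breach

# trail.hops now holds the full chain for contextual auditing.
```

Because the `Intent` object is immutable and checked at every hop, a downstream agent reinterpreting the goal cannot quietly expand what the chain may touch, and the provenance trail supplies the intent-graph raw material the framework calls for.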
[5]
Are AI Agents Proving to be a Headache for Enterprise Cybersecurity?
The use of agentic AI to mount largely autonomous and anonymous cyberattacks came to light in Ukraine, which accused Russia of carrying them out. As 2025 was winding down, Anthropic made a startling claim on its blog: Chinese state-sponsored actors had been using the coding capabilities of its AI chatbot Claude to execute cyberattacks. Worse, these attacks were carried out largely autonomously against tech companies, financial institutions and government agencies. "In mid-September 2025, we detected suspicious activity that later investigation determined to be a highly sophisticated espionage campaign. The attackers used AI's 'agentic' capabilities to an unprecedented degree -- using AI not just as an advisor, but to execute the cyberattacks themselves," the blog noted.

More recently, Anthropic used these very agentic capabilities to make Claude a preferred coding assistant across a broader section of enterprise operational workflows. The results were startling enough for investors to speculate that the development could signal the end of IT services as we know it, and several IT stocks slumped.

Use of AI by cybercriminals has been growing since 2018

This wasn't the only warning shot. Some weeks earlier, Google's Threat Intelligence Group reported that state-sponsored threat actors and freelance cybercriminals were experimenting with AI and large language models to execute malware at scale. The hype reached fever pitch when Anthropic claimed that one of its teams had found AI could be used to exploit blockchains by uncovering zero-day vulnerabilities, which were then used against smart contracts worth about $4.6 million, according to another blog post dated December 1, 2025. Did this coincide with the cryptocurrency meltdown? Security analysts were quick to point out that, until recently, malicious use of AI was rare and largely hypothetical.
However, as inference capabilities have grown, so has their use for crime. Reports indicate that users in countries where AI chatbots were banned circumvented controls to access them. Instances of the Iranian armed forces and a North Korean hacking group using OpenAI to craft phishing emails were reported by the Microsoft Threat Intelligence Centre two years ago; these were cases of generative AI being used to mount cyberattacks. Readers may recall the DDoS attack on an Ikea-owned services platform in 2018 that stole client names, passwords and payment details.

Security experts say that while these past instances tracked the progress of the underlying technology, things are becoming more nuanced now that AI can act as an entity in its own right. Today, criminals are perfectly capable of placing a Python installer containing no malicious code onto a system, prompting it to connect to the Hugging Face API, and then writing code to "profile the system." The program scans the system for interesting titbits in the form of files, which it bundles, removes from the system and saves elsewhere. The point is that all of this can be accomplished by agentic AI. Adam Meyers, head of counter adversary operations at CrowdStrike, told SDX Central recently that, for now, none of it is detectable, making it probably the most advanced AI cybercrime attempt yet. And guess what the software installer was called? LameHug - criminals do have a sense of humour. Some time back, Ukraine claimed that Russian security agencies had used LameHug to target its military; CERT-UA, the country's computer emergency response team, caught the attempt in time.

Cyber experts believe that using AI to spread misinformation and to harvest data such as customer names and email IDs has been attempted in the past, but the recent use of agentic AI to complete tasks intuitively and anonymously is a challenge enterprises must be aware of. In yet another modus operandi, cybercriminals use AI chatbots to generate PowerShell scripts or similar code snippets on the fly; such script generation accounts for much of how AI is currently being folded into attack chains. In all these instances, the one point to note is that humans continue to direct operations at critical junctures.

A recent analysis by Forrester, under its Agentic AI Enterprise Guardrails for Information Security (AEGIS) framework, treats countering these attempts as a matter of "security intent," one of the defining capabilities of AI security. "Securing intent is not just an issue for LLM vendors; it's also a major priority for any organization building an AI agent and is one of the defining capabilities of AI security," the research company says. Notably, the report claims Claude may have overstated its findings and fabricated data during such autonomous operations, and that going forward every claimed result would require careful validation. "While the attack itself used existing exploits and wasn't fully autonomous, it's important to note that this serves as a harbinger of things to come for future attacks using AI and agents. Malicious actors will continue to improve on these capabilities, as they have with past technical advances," the report concludes.
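The LameHug-style pattern described above (a benign installer, a callback to an inference API, a broad filesystem sweep, staging for exfiltration) suggests a simple behavioral heuristic as a defensive starting point. This is a toy sketch, not a real detector: the event format, the API host allow-list, and the thresholds are all illustrative assumptions.

```python
# Toy behavioral heuristic: score a process trace for the three stages
# of the pattern - inference-API callback, broad file sweep, staging.
SUSPICIOUS_API_HOSTS = {"api-inference.huggingface.co"}  # illustrative

def risk_score(events):
    """events: list of (kind, detail) tuples from a process trace,
    e.g. ("net", hostname), ("file_read", path), ("archive_write", path)."""
    called_llm = any(kind == "net" and detail in SUSPICIOUS_API_HOSTS
                     for kind, detail in events)
    files_read = sum(1 for kind, _ in events if kind == "file_read")
    staged = any(kind == "archive_write" for kind, _ in events)

    score = 0
    if called_llm:
        score += 1          # code or instructions fetched at runtime
    if files_read > 100:
        score += 1          # broad "profile the system" sweep
    if staged:
        score += 1          # bundling before exfiltration
    return score

trace = ([("net", "api-inference.huggingface.co")]
         + [("file_read", f"doc{i}") for i in range(150)]
         + [("archive_write", "staging.zip")])
print(risk_score(trace))  # 3 -> all three stages of the pattern present
```

No single signal here is malicious on its own, which is exactly the difficulty the article describes; it is the combination, observed in one process, that warrants a closer look.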
Major tech firms are scrambling to secure AI agents after critical vulnerabilities surfaced. ServiceNow patched its 'BodySnatcher' flaw that allowed unauthenticated attackers to impersonate administrators using just an email address. Microsoft issued guidance on securing its agentic AI control plane. The incidents reveal how autonomous AI agents create unprecedented security challenges through lateral movement and contextual privilege escalation.

The rapid deployment of AI agents across enterprise networks has exposed a fundamental weakness in traditional cybersecurity frameworks. Recent vulnerabilities discovered in ServiceNow and Microsoft platforms demonstrate how autonomous AI agents can become the very threat vectors that security teams have long feared. AppOmni Labs researcher Aaron Costello uncovered a critical flaw dubbed 'BodySnatcher' in ServiceNow's platform, which allowed an unauthenticated attacker sitting halfway across the globe to impersonate administrators using only a target's email address [1]. The vulnerability enabled attackers to execute AI agents, override security controls, and create backdoor accounts with full privileges, potentially granting access to customer Social Security numbers, healthcare information, and financial records.

ServiceNow has since patched these vulnerabilities before any customers were impacted, while Microsoft issued guidance to customers on configuring its agentic AI management control plane for tighter security [1]. However, these incidents underscore a broader AI security crisis that extends far beyond individual platform flaws.

The core problem lies in how AI agents operate within enterprise environments. Unlike traditional software that follows deterministic logic, autonomous AI agents reason their way toward goals, making decisions across multiple systems without direct human oversight. Jonathan Wall, founder and CEO of Runloop, warns that lateral movement through AI agents poses grave concerns for enterprise cybersecurity: a malicious actor gaining access to one agent could connect to another with better privileges, effectively escalating access through agent-to-agent interactions [1].

This threat is compounded by what security experts call an identity gap. Traditional Identity and Access Management (IAM) platforms were designed for two identity types: humans and machines. AI agents fit neither category, operating with the intent-driven actions of human users while retaining the reach and persistence of machine identities [2]. Enterprises that believe they have just a few AI agents often discover hundreds or thousands once they investigate closely, as employees build custom GPTs, developers spin up MCP servers locally, and business units integrate AI tools directly into workflows [2].

Google's Mandiant and threat intelligence teams have identified shadow agents as a critical concern for 2026. They predict that employees will independently deploy powerful autonomous AI agents for work tasks regardless of corporate approval, creating invisible, uncontrolled pipelines for sensitive data that could lead to data leaks, compliance violations, and intellectual property theft [1]. Gartner forecasts that by the end of 2025, 40% of enterprises will run AI agents in production, up from less than 5% previously [3]. Yet Cisco's 2025 AI Readiness Index reveals that only 24% of organizations deploying AI feel they have adequate controls to manage agent actions with proper guardrails and live monitoring [3].

The security challenge intensifies because AI agents don't just execute commands; they interpret intent. When an AI research agent built to analyze competitive market data quietly pulls payroll records through encrypted channels, traditional firewalls see nothing suspicious [3]. This represents a fundamental shift from software that follows instructions to software that makes decisions, reasoning and acting at machine speed.

A retail company deploying an AI sales assistant restricted from accessing personally identifiable information discovered the agent could still re-identify specific users through inference. When asked to find customers most likely to cancel premium subscriptions, the agent correlated activity logs, support tickets, and purchase histories across multiple systems, effectively reconstructing sensitive insights the system was never meant to access [4]. This form of contextual privilege escalation occurs when meaning, not access, becomes the attack vector.

Traditional Role-Based Access Control (RBAC) and Attribute-Based Access Control (ABAC) models answer whether a user should access a specific resource. In agentic ecosystems, the question becomes whether an agent should access more than its intended resource and why it would need that additional access [4]. This isn't an access control problem; it's a reasoning problem that static permissions cannot address.
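One way to see why static permissions miss this failure mode: in the retail example, every individual read is authorized, yet the combination reconstructs restricted insight. The toy sketch below checks the aggregate of what an agent has touched rather than each access in isolation. The source names, sensitivity weights, budget, and "toxic combination" list are all hypothetical.

```python
# Toy aggregate-sensitivity check (illustrative, not any vendor's feature).
# Each source is innocuous alone; certain joins enable re-identification.
SENSITIVITY = {
    "activity_logs": 1,
    "support_tickets": 1,
    "purchase_history": 1,
    "payroll": 5,
}

# Combinations known to reconstruct restricted insight when joined.
TOXIC_COMBINATIONS = [
    {"activity_logs", "support_tickets", "purchase_history"},
]

def aggregate_allowed(sources_accessed: set, budget: int = 3) -> bool:
    """Allow only if total sensitivity stays within budget AND no known
    re-identifying combination is fully covered by the accessed set."""
    total = sum(SENSITIVITY[s] for s in sources_accessed)
    if total > budget:
        return False
    return not any(combo <= sources_accessed for combo in TOXIC_COMBINATIONS)

print(aggregate_allowed({"activity_logs"}))                  # True
print(aggregate_allowed({"activity_logs", "support_tickets",
                         "purchase_history"}))               # False
```

Each source in the second call would pass a per-resource RBAC check; it is only the accumulated set, evaluated together, that trips the policy.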
The threat extends beyond inadvertent data exposure to deliberate weaponization. Anthropic reported in late 2025 that Chinese state-sponsored threat actors used its Claude chatbot's coding capabilities to execute largely autonomous cyberattacks against tech companies, financial institutions, and government agencies [5]. Ukraine accused Russia of using agentic AI in attacks against its military systems, with CERT-UA detecting attempts involving software installers that could profile systems, scan for sensitive files, and exfiltrate data, all accomplished autonomously [5].

Major technology vendors are responding by fundamentally rethinking security architectures. Cisco announced AI-aware SASE capabilities through Cisco Secure Access that move beyond pattern matching to deep semantic inspection using Natural Language Processing. Rather than asking where data is going, the system asks why it's going there, detecting context-driven risks like prompt injection and unintended automation in real time [3]. Cisco's Hybrid Mesh Firewall architecture extends protection down to the kernel using eBPF and Cilium to inspect workloads before they reach network perimeters [3].

Security experts advocate for AI agent identity management as a new security control plane, treating AI agents as first-class identities governed continuously from creation through decommissioning. This approach applies least privilege to AI agents, requiring continuous discovery, ownership enforcement, and behavior-based monitoring [2]. Organizations must shift from governing access to governing intent, implementing intent binding that carries originating user context throughout execution chains, dynamic authorization that adapts to runtime context, and provenance tracking of which agents participated in each action [4].

As enterprises expand their workforce from 8 billion human workers to what will feel like 80 billion digital agents operating across applications and workflows, the agentic-era security model must evolve from Zero Trust principles designed for predictable systems to frameworks that account for autonomous reasoning at machine speed. The new security challenges require understanding not just what AI agents can access, but why they need it and how their intent evolves across interconnected systems.
Summarized by Navi