3 Sources
[1]
Microsoft and ServiceNow's exploitable agents reveal a growing - and preventable - AI security crisis
Cybersecurity pros should adopt a "least privilege" posture for AI agents.

Could agentic AI turn out to be every threat actor's fantasy? I suggested as much in my recent "10 ways AI can inflict unprecedented damage in 2026." Once deployed on corporate networks, AI agents with broad access to sensitive systems of record can enable the sort of lateral movement across an organization's IT estate that most threat actors dream of.

According to Jonathan Wall, founder and CEO of Runloop -- a platform for securely deploying AI agents -- lateral movement should be of grave concern to cybersecurity professionals in the context of agentic AI. "Let's say a malicious actor gains access to an agent but it doesn't have the necessary permissions to go touch some resource," Wall told ZDNET. "If, through that first agent, a malicious agent is able to connect to another agent with a [better] set of privileges to that resource, then he will have escalated his privileges through lateral movement and potentially gained unauthorized access to sensitive information."

Meanwhile, the idea of agentic AI is so new that many of the workflows and platforms for developing and securely provisioning those agents have not yet considered all the ways a threat actor might exploit their existence. It's eerily reminiscent of software development's early days, when few programmers knew how to code software without leaving gaping holes through which hackers could drive a proverbial Mack truck.

Google's cybersecurity leaders recently identified shadow agents as a critical concern. "By 2026, we expect the proliferation of sophisticated AI agents will escalate the shadow AI problem into a critical 'shadow agent' challenge. In organizations, employees will independently deploy these powerful, autonomous agents for work tasks, regardless of corporate approval," wrote the experts in Google's Mandiant and threat intelligence organizations. "This will create invisible, uncontrolled pipelines for sensitive data, potentially leading to data leaks, compliance violations, and IP theft."

Meanwhile, 2026 is hardly out of the gates and, judging by two separate cybersecurity cases having to do with agentic AI -- one involving ServiceNow and the other Microsoft -- the agentic surface of any IT estate will likely become the juicy target that threat actors are seeking, one that's full of easily exploited lateral opportunities. Since the two agentic AI-related issues -- both involving agent-to-agent interactions -- were first discovered, ServiceNow has plugged its vulnerabilities before any customers were known to have been impacted, and Microsoft has issued guidance to its customers on how best to configure its agentic AI management control plane for tighter agent security.

Earlier this month, AppOmni Labs chief of research Aaron Costello disclosed for the first time a detailed explanation of how he discovered an agentic AI vulnerability on ServiceNow's platform, one with such potential for harm that AppOmni gave it the name "BodySnatcher." "Imagine an unauthenticated attacker who has never logged into your ServiceNow instance and has no credentials, and is sitting halfway across the globe," wrote Costello in a post published to the AppOmni Labs website.

"With only a target's email address, the attacker can impersonate an administrator and execute an AI agent to override security controls and create backdoor accounts with full privileges. This could grant nearly unlimited access to everything an organization houses, such as customer Social Security numbers, healthcare information, financial records, or confidential intellectual property." (AppOmni Labs is the threat intelligence research arm of AppOmni, an enterprise cybersecurity solution provider.)

The vulnerability's severity cannot be overstated. Whereas the vast majority of breaches involve the theft of one or more highly privileged digital credentials (credentials that afford threat actors access to sensitive systems of record), this vulnerability -- requiring only the target's easily acquired email address -- left the front door wide open.

"BodySnatcher is the most severe AI-driven vulnerability uncovered to date," Costello told ZDNET. "Attackers could have effectively 'remote controlled' an organization's AI, weaponizing the very tools meant to simplify the enterprise."

"This was not an isolated incident," Costello noted. "It builds upon my previous research into ServiceNow's Agent-to-Agent discovery mechanism, which, in a nearly textbook definition of lateral movement risk, detailed how attackers can trick AI agents into recruiting more powerful AI agents to fulfill a malicious task."

Fortunately, this was one of the better examples of a cybersecurity researcher discovering a severe vulnerability before threat actors did. "At this time, ServiceNow is unaware of this issue being exploited in the wild against customer instances," noted ServiceNow in a January 2026 post regarding the vulnerability. "In October 2025, we issued a security update to customer instances that addressed the issue," a ServiceNow spokesperson told ZDNET.

According to the aforementioned post, ServiceNow recommends "that customers promptly apply an appropriate security update or upgrade if they have not already done so." That advice, according to the spokesperson, is for customers who self-host their instances of the ServiceNow platform. For customers using the cloud (SaaS) version operated by ServiceNow, the security update was applied automatically.

In the case of the Microsoft agent-to-agent issue (Microsoft views it as a feature, not a bug), the backdoor opening appears to have been similarly discovered by cybersecurity researchers before threat actors could exploit it. In this case, Google News alerted me to a CybersecurityNews.com headline that stated, "Hackers Exploit Copilot Studio's New Connected Agents Feature to Gain Backdoor Access." Fortunately, the "hackers" in this case were ethical white-hat hackers working for Zenity Labs. "To clarify, we did not observe this being exploited in the wild," Zenity Labs co-founder and CTO Michael Bargury told ZDNET. "This flaw was discovered by our research team."
This caught my attention because I'd recently reported on the lengths to which Microsoft was going to make it possible for all agents -- whether or not they were built with Microsoft development tools like Copilot Studio -- to get their own human-like managed identities and credentials with the help of the Agent ID feature of Entra, Microsoft's cloud-based identity and access management solution.

Why is something like that necessary? Between the advertised productivity boosts associated with agentic AI and executive pressure to make organizations more profitable through AI, organizations are expected to employ many more agents than people in the near future. For example, IT research firm Gartner told ZDNET that CIOs expect that, by 2030, no IT work will be done by humans without AI: 75% will be done by humans augmented with AI, and 25% will be done by AI alone.

In response to the anticipated sprawl of agentic AI, the key players in the identity industry -- Microsoft, Okta, Ping Identity, Cisco, and the OpenID Foundation -- are offering solutions and recommendations to help organizations tame that sprawl and prevent rogue agents from infiltrating their networks. In my research, I also learned that any agents forged with Microsoft's development tools, such as Copilot Studio or Azure AI Foundry, are automatically registered in Entra's Agent Registry.

So, I wanted to find out how it was that agents forged with Copilot Studio -- agents that theoretically had their own credentials -- were somehow exploitable in this hack. Theoretically, the entire point of registering an identity is to easily track that identity's activity -- whether legitimately directed or misguided by threat actors -- on the corporate network. It seemed to me that something was slipping through the very agentic safety net Microsoft was trying to put in place for its customers. Microsoft even offers its own security agents whose job it is to run around the corporate network like white blood cells, tracking down any invasive species.

As it turns out, an agent built with Copilot Studio has a "connected agent" feature that allows other agents, whether registered with the Entra Agent Registry or not, to laterally connect to it and leverage its knowledge and capabilities. As reported in CybersecurityNews, "According to Zenity Labs, [white hat] attackers are exploiting this gap by creating malicious agents that connect to legitimate, privileged agents, particularly those with email-sending capabilities or access to sensitive business data." Zenity has its own post on the subject, appropriately titled "Connected Agents: The Hidden Agentic Puppeteer."

Even worse, CybersecurityNews reported that "By default, [the Connected Agents feature] is enabled on all new agents in Copilot Studio." In other words, when a new agent is created in Copilot Studio, it is automatically enabled to receive connections from other agents. I was incredibly surprised to read this, given that two of the three pillars of Microsoft's Secure Future Initiative are "Secure by Default" and "Secure by Design." I decided to check with Microsoft.

"Connected Agents enable interoperability between AI agents and enterprise workflows," a Microsoft spokesperson told ZDNET. "Turning them off universally would break core scenarios for customers who rely on agent collaboration for productivity and security orchestration. This allows control to be delegated to IT admins."

In other words, Microsoft doesn't view it as a vulnerability. And Zenity's Bargury agrees. "It isn't a vulnerability," he told ZDNET. "But it is an unfortunate mishap that creates risk. We've been working with the Microsoft team to help drive a better design."

Even after I suggested to Microsoft that this might not be secure by default or by design, Microsoft was firm and recommended that "for any agent that uses unauthenticated tools or accesses sensitive knowledge sources, disable the Connected Agents feature before publishing [an agent]. This prevents exposure of privileged capabilities to malicious agents."

I also inquired about the ability to monitor agent-to-agent activity, with the idea that maybe IT admins could be alerted to potentially malicious interactions or communications.

"Secure use of agents requires knowing everything they do, so you can analyze, monitor, and steer them away from harm," said Bargury. "It has to start with detailed tracing. This finding spotlights a major blind spot [in how Microsoft's connected agents feature works]."

The response from a Microsoft spokesperson was that "Entra Agent ID provides an identity and governance path, but it does not, on its own, produce alerts for every cross-agent exploit without external monitoring configured. Microsoft is continually expanding protections to give defenders more visibility and control over agent behavior to close these kinds of exploits."

When confronted with the idea of agents that were open to connection by default, Runloop's Wall recommended that organizations always adopt a "least privilege" posture when developing AI agents or using canned, off-the-shelf ones. "The principle of least privilege basically says that you start off in any sort of execution environment giving an agent access to almost nothing," said Wall. "And then, you only add privileges that are strictly necessary for it to do its job."

Sure enough, I looked back at the interview I did with Alex Simons, Microsoft corporate vice president of AI Innovations, for my coverage of the improvements the company made to its Entra IAM platform to support agent-specific identities. In that interview, where he described Microsoft's objectives for managing agents, Simons said that one of the three challenges the company was looking to solve was "to manage the permissions of those agents and make sure that they have a least privilege model where those agents are only allowed to do the things that they should do. If they start to do things that are weird or unusual, their access is automatically cut off."

Of course, there's a big difference between "can" and "do," which is why, in the name of least-privilege best practices, all agents should, as Wall suggested, start out without the ability to receive inbound connections and then have capabilities added from there only as necessary.
[2]
AI Agent Identity Management: A New Security Control Plane for CISOs
Security leaders have spent years hardening identity controls for employees and service accounts. That model is now showing its limits. A new class of identity is rapidly spreading across enterprise environments: autonomous AI agents. Custom GPTs, copilots, coding agents running MCP servers, and purpose-built AI agents are no longer confined to experimentation. They are running and expanding in production, interacting with sensitive systems and infrastructure, invoking other agents, and making decisions and changes without direct human oversight.

Yet in most organizations, these agents exist almost entirely outside established identity governance. Traditional IAM, PAM, and IGA platforms were not designed for agents that are autonomous, decentralized, and adaptive. The result is a growing identity gap that introduces real security and compliance risk, together with efficiency and effectiveness challenges.

Historically, enterprises managed two identity types: humans and machines. Identities that serve human access are centrally governed, role-based, and relatively predictable. Machine and workload identities operate at scale but tend to be deterministic and repetitive, performing narrowly defined tasks. AI agents fit neither and both categories at once. They are goal-driven rather than strictly role-based, capable of adapting behavior based on intent and context, and able to chain actions across multiple systems. At the same time, they operate continuously and at machine speed and scale.

This hybrid nature fundamentally alters the risk profile. AI agents inherit the intent-driven actions of human users while retaining the reach and persistence of machine identities. Treating them as conventional non-human identities creates blind spots. Over-privileging becomes the default. Ownership becomes unclear. Behavior drifts from original intent. These are not theoretical concerns. They are the same conditions that have driven many identity-related breaches in the past, now amplified by autonomy and scale.

What makes this challenge urgent is not just what AI agents are, but how quickly they are spreading. Enterprises that believe they have just a few AI agents often discover hundreds or thousands once they look closely. Employees build custom GPTs. Developers spin up MCP servers locally. Business units integrate AI tools directly into workflows. Cleanup rarely happens. Security teams are left unable to answer basic questions: which agents exist, who owns them, what permissions they hold, and what data they can access.

This lack of visibility creates identity sprawl at machine speed. And as attackers have demonstrated repeatedly, abusing unmanaged credentials is often easier than exploiting software vulnerabilities.

Identity risk accumulates over time. This is why organizations use joiner, mover, and leaver processes for their workforce and lifecycle controls for service accounts. AI agents experience the same dynamics, but compressed into minutes, hours, or days. AI agents are created quickly, modified frequently, and often abandoned silently. Access persists. Ownership disappears. Quarterly access reviews and periodic certifications cannot keep pace.

AI agent identity lifecycle management addresses this gap by treating AI agents as first-class identities governed continuously and in near real time, from creation through usage to decommissioning. The goal is not to slow adoption, but to apply familiar identity principles, such as visibility, accountability, least privilege, and auditability, in a way that works for autonomous systems.
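A minimal sketch of what treating AI agents as first-class identities could look like in practice, assuming a simple in-house registry. The field names and the 30-day idle threshold are assumptions for illustration, not Token Security's schema: every agent carries an accountable owner, a creation time, and a last-seen timestamp, and the registry flags likely orphans and stale agents for review.

```python
# Illustrative agent-identity registry with basic lifecycle checks.
# Schema and thresholds are assumptions for the sketch, not a product's data model.

from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass
class AgentIdentity:
    agent_id: str
    owner: str                 # accountable human or team
    created_at: datetime
    last_seen: datetime
    scopes: set
    owner_active: bool = True  # False once the owner leaves or changes roles


def find_orphaned_or_stale(agents, max_idle_days: int = 30):
    """Flag agents whose owner is gone or that have been silent too long."""
    now = datetime.now(timezone.utc)
    flagged = []
    for a in agents:
        if not a.owner_active:
            flagged.append((a.agent_id, "orphaned: owner departed"))
        elif now - a.last_seen > timedelta(days=max_idle_days):
            flagged.append((a.agent_id, f"stale: idle > {max_idle_days} days"))
    return flagged


if __name__ == "__main__":
    now = datetime.now(timezone.utc)
    agents = [
        AgentIdentity("gpt-sales-helper", "alice", now - timedelta(days=90),
                      now - timedelta(days=60), {"crm.read"}),
        AgentIdentity("mcp-build-agent", "bob", now - timedelta(days=10),
                      now - timedelta(hours=2), {"repo.read", "ci.trigger"},
                      owner_active=False),
    ]
    for agent_id, reason in find_orphaned_or_stale(agents):
        print(agent_id, "->", reason)
```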
Download Token Security's latest asset, an eBook designed to help you shape lifecycle management for your AI agent identities from end to end.

Every identity control framework begins with discovery. Yet most AI agents never pass through formal provisioning or registration workflows. They run across cloud platforms, SaaS tools, developer environments, and local machines, making them invisible to traditional IAM systems. From a Zero Trust perspective, this is a fundamental failure. An identity that cannot be seen cannot be governed, monitored, or audited. Shadow AI agents become unmonitored entry points into sensitive systems, often with broad permissions. Effective discovery must be continuous and behavior-based. Quarterly scans and static inventories are insufficient when new agents can appear and disappear in a matter of minutes.

One of the oldest identity risks is the orphaned account. AI agents dramatically increase both its frequency and its impact. AI agents are often created for narrow use cases or short-lived projects. When employees change roles or leave, or simply grow tired of an AI product that hasn't evolved, the agents they built frequently persist. Their credentials remain valid. Their permissions remain unchanged. No one remains accountable. An autonomous agent without an owner should be treated as a potentially compromised identity. Lifecycle governance must enforce ownership and maintenance as a core requirement, flagging agents tied to departed users or inactive projects before they become liabilities.

AI agents are almost always over-privileged, not out of negligence, but out of uncertainty and the desire to explore. Since their behavior can adapt, teams often grant broad access to avoid breaking workflows. This approach is risky. An over-privileged agent can traverse systems faster than any human. In interconnected environments, a single agent can become the pivot point for widespread compromise or lateral movement. Least privilege for AI agents cannot be static. It must be continuously adjusted based on observed behavior. Permissions that go unused should be revoked. Elevated access should be temporary and purpose-bound. Without this, least privilege remains a policy statement rather than an enforced control.

As enterprises move toward multi-agent systems, traditional logging models break down. Actions span agents, APIs, and platforms. Without correlated identity context, investigations, forensics, and even compliance evidence-gathering become slow and incomplete. Traceability is not just a forensic requirement. Regulators increasingly expect organizations to explain how automated systems make decisions, especially when those decisions affect customers or regulated data. Without identity-centric audit trails, that expectation cannot be met.

AI agents are no longer emerging technology. They are becoming part of the enterprise operating model. As their autonomy grows, unmanaged identity becomes one of the largest sources of systemic risk. AI agent identity lifecycle management provides a pragmatic path forward. By treating AI agents as a distinct identity class and governing them continuously, organizations can regain control without stifling innovation.

In an agent-driven enterprise, identity is no longer just an access mechanism. It is becoming the control plane for AI security. If you'd like more information on how Token Security is tackling AI security within the identity control plane, book a demo and we'll show you how our platform operates.
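Returning to the point above that least privilege for agents "cannot be static": a simple usage-driven review loop illustrates the idea. This is a sketch under assumed data structures and an assumed 14-day observation window, not a specific vendor feature; permissions that have not been exercised within the window become candidates for revocation or time-boxed re-grant.

```python
# Illustrative usage-driven least-privilege review for an agent.
# The permission names and the 14-day window are assumptions for the example.

from datetime import datetime, timedelta, timezone


def permissions_to_revoke(granted: set,
                          usage_log: dict,
                          window_days: int = 14) -> set:
    """Return granted permissions that were never used, or not used recently."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=window_days)
    unused = set()
    for perm in granted:
        last_used = usage_log.get(perm)
        if last_used is None or last_used < cutoff:
            unused.add(perm)
    return unused


if __name__ == "__main__":
    granted = {"tickets.read", "tickets.write", "billing.read"}
    usage_log = {
        "tickets.read": datetime.now(timezone.utc) - timedelta(days=1),
        "billing.read": datetime.now(timezone.utc) - timedelta(days=40),
        # "tickets.write" never observed in use
    }
    print(permissions_to_revoke(granted, usage_log))
    # -> {'tickets.write', 'billing.read'}: flag for review or temporary re-grant
```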
[3]
AI agents are about to make access control obsolete
How AI agents undermine static access controls through inference and context drift.

As enterprises integrate AI agents into their workflows, a silent shift is taking place. Security controls built on static access policies designed for predictable behavior are colliding with systems that reason instead of simply executing. AI agents, driven by outcomes rather than rules, are breaking the traditional identity and access management model.

Consider a retail company that deploys an AI sales assistant to analyze customer behavior and improve retention. The assistant doesn't have access to personally identifiable information (PII); it's restricted by design. Yet when asked to "find customers most likely to cancel premium subscriptions," it correlates activity logs, support tickets, and purchase histories across multiple systems. This generates a list of specific users inferred through behavior patterns, spending habits, and churn probability. No names or credit cards were exposed, but the agent effectively re-identified individuals through inference, reconstructing sensitive insights that the system was never meant to access and potentially exposing PII. It didn't break access controls; it reasoned its way around them to reach information it was never scoped to access.

Unlike traditional software workflows, AI agents don't follow deterministic logic; they act on intent. When an AI system's goal is "maximize retention" or "reduce latency," it makes autonomous decisions about what data or actions it needs to achieve that outcome. Each decision might be legitimate in isolation, but together, they can expose information far beyond the agent's intended scope.

This is where context becomes an exploit surface. Traditional models focus on who can access what, assuming static boundaries. But in agentic systems, what matters is why the action occurs and how context changes as one agent invokes another. When intent flows across layers, each reinterpreting the goal, the original user context is lost and privilege boundaries blur. The result isn't a conventional breach; it's a form of contextual privilege escalation where meaning, not access, becomes the attack vector.

Most organizations are learning that traditional RBAC (Role-Based Access Control) and ABAC (Attribute-Based Access Control) models can't keep up with dynamic reasoning. In classical applications, you can trace every decision back to a code path. In AI agents, logic is emergent and adaptive. The same prompt can trigger different actions depending on environment, prior interactions, or perceived goals. For example, a development agent trained to optimize cloud computing costs might start deleting logs used for audits or backups. From a compliance perspective, that's catastrophic; from the agent's reasoning, it's efficient. The security model assumes determinism; the agent assumes autonomy.

This mismatch exposes a flaw in how we model permissions. RBAC and ABAC answer "Should user X access resource Y?" In an agentic ecosystem, the question becomes "Should agent X be able to access more than resource Y, and why would it need that additional access?" That's not an access problem; it's a reasoning problem.

In distributed, multi-agent architectures, permissions evolve through interaction. Agents chain tasks, share outputs, and make assumptions based on others' results.
Over time, those assumptions accumulate, forming contextual drift: a gradual deviation from the agent's original intent and authorized scope. Imagine a marketing analytics agent summarizing user behavior, feeding its output to a financial forecasting agent, which uses it to predict regional revenue. Each agent only sees part of the process. But together, they've built a complete, unintended picture of customer financial data. Every step followed policy. The aggregate effect broke it. Contextual drift is the modern equivalent of configuration drift in DevOps, except here it's happening at the cognitive layer. The security system sees compliance; the agent network sees opportunity.

To address this new class of risk, organizations must shift from governing access to governing intent. A security framework for agentic systems should include:

Intent Binding: Every action must carry the originating user's context, identity, purpose, and policy scope throughout the chain of execution.
Dynamic Authorization: Move beyond static entitlements. Decisions must adapt to context, sensitivity, and behavior at runtime.
Provenance Tracking: Keep a verifiable record of who initiated an action, which agents participated, and what data was touched.
Human-in-the-Loop Oversight: For high-risk actions, require verification, especially when agents act on behalf of users or systems.
Contextual Auditing: Replace flat logs with intent graphs that visualize how queries evolve into actions across agents.

Static permissions assume identity and intent remain constant. But agents operate in fluid, evolving contexts. They can spawn sub-agents, generate new workflows, or retrain on intermediate data, actions that continually redefine "access." By the time an identity system detects a security incident, a violation or breach has already occurred without a single permission being broken. That's why visibility and attribution must come first. Before enforcing policy, you must map the agent graph: what exists, what's connected, and who owns what.

Ironically, the same AI principles that challenge our controls can help restore them. Adaptive, policy-aware models can distinguish legitimate reasoning from suspicious inference. They can detect when an agent's intent shifts or when contextual drift signals rising risk.
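A minimal sketch of the "intent binding" and "provenance tracking" ideas above, using hypothetical class and field names rather than any standard API: the originating user's identity, purpose, and permitted data scope travel with every hop in the agent chain, each hop is checked against that original intent rather than its own reinterpreted goal, and every action (allowed or denied) is appended to a provenance record.

```python
# Illustrative intent binding and provenance tracking across an agent chain.
# Class and field names are assumptions for the sketch, not a standard API.

from dataclasses import dataclass, field


@dataclass(frozen=True)
class Intent:
    user: str               # originating human identity
    purpose: str            # stated goal, e.g. "aggregate churn trends"
    data_scope: frozenset   # data classes the purpose is allowed to touch


@dataclass
class Provenance:
    events: list = field(default_factory=list)

    def record(self, agent: str, action: str, data_class: str) -> None:
        self.events.append({"agent": agent, "action": action, "data": data_class})


def act(agent: str, intent: Intent, prov: Provenance,
        action: str, data_class: str) -> bool:
    """Each hop checks its action against the *original* intent, not its own goal."""
    allowed = data_class in intent.data_scope
    prov.record(agent, action if allowed else f"DENIED:{action}", data_class)
    return allowed


if __name__ == "__main__":
    intent = Intent(user="analyst_42", purpose="aggregate churn trends",
                    data_scope=frozenset({"aggregated_metrics"}))
    prov = Provenance()

    act("marketing_agent", intent, prov, "summarize_behavior", "aggregated_metrics")
    # A downstream agent drifting toward row-level customer data is denied and logged.
    act("forecasting_agent", intent, prov, "join_customer_records", "customer_pii")

    for event in prov.events:
        print(event)
```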
ServiceNow's BodySnatcher vulnerability and Microsoft's security guidance reveal how AI agents are creating an unprecedented security crisis. These autonomous systems enable lateral movement across networks and privilege escalation, exploiting gaps in traditional identity management systems that weren't designed for adaptive, goal-driven agents operating at machine speed.
AI agents are rapidly becoming every threat actor's ideal entry point into enterprise systems, exposing critical weaknesses in traditional access control frameworks. Recent security incidents involving ServiceNow and Microsoft have revealed how these autonomous agents enable lateral movement across corporate networks and privilege escalation that most cybersecurity professionals never anticipated [1]. Jonathan Wall, founder and CEO of Runloop, explains the core threat: "If, through that first agent, a malicious agent is able to connect to another agent with a [better] set of privileges to that resource, then he will have escalated his privileges through lateral movement and potentially gained unauthorized access to sensitive information" [1].
The AI security crisis stems from a fundamental mismatch between how traditional IAM (Identity and Access Management) systems operate and how AI agents function. While conventional identity controls were designed for predictable, deterministic workflows, AI agents are goal-driven, adaptive, and capable of chaining actions across multiple systems [2]. This hybrid nature creates an identity gap that introduces real security and compliance risks alongside efficiency challenges.

Earlier this month, AppOmni Labs disclosed a severe vulnerability in ServiceNow's platform dubbed "BodySnatcher," which Aaron Costello, chief of research at AppOmni Labs, described as "the most severe AI-driven vulnerability uncovered to date" [1]. The vulnerability allowed an unauthenticated attacker with only a target's email address to impersonate an administrator and execute an AI agent to override security controls and create backdoor accounts with full permissions [1]. This could grant nearly unlimited access to customer Social Security numbers, healthcare information, financial records, or confidential intellectual property.

Costello emphasized that this wasn't an isolated incident, noting it builds upon previous research into ServiceNow's agent-to-agent discovery mechanism, which detailed how threat actors can trick AI agents into recruiting more powerful AI agents to fulfill malicious tasks [1]. ServiceNow has since plugged these vulnerabilities before any customers were known to have been impacted, while Microsoft has issued guidance to customers on configuring its agentic AI management control plane for tighter agent security [1].

Google's cybersecurity leaders identified shadow AI agents as a critical concern, predicting that by 2026, the proliferation of sophisticated autonomous agents will escalate the shadow AI problem into a critical challenge [1]. Employees will independently deploy these powerful agents for work tasks regardless of corporate approval, creating invisible, uncontrolled pipelines for sensitive data that potentially lead to data leaks, compliance violations, and intellectual property theft.
The speed at which shadow AI agents spread makes this challenge urgent. Enterprises that believe they have just a few AI agents often discover hundreds or thousands once they look closely [2]. Employees build custom GPTs, developers spin up MCP servers locally, and business units integrate AI tools directly into workflows. Security teams are left unable to answer basic questions about which agents exist, who owns them, what permissions they have, or what data they access [2].
Traditional RBAC (Role-Based Access Control) models cannot keep pace with the dynamic reasoning exhibited by AI agents [3]. Unlike conventional software that follows deterministic logic, AI agents act on intent, making autonomous decisions about what data or actions they need to achieve outcomes. A retail company's AI sales assistant restricted from accessing personally identifiable information could still correlate activity logs, support tickets, and purchase histories across multiple systems to identify specific users through inference [3]. While it didn't break access controls, it reasoned its way around systems to access information beyond its original scope.

Context drift represents another critical vulnerability, in which permissions evolve through interaction as agents chain tasks, share outputs, and make assumptions based on others' results [3]. A marketing analytics agent summarizing user behavior might feed output to a financial forecasting agent, which uses it to predict regional revenue. Each agent only sees part of the process, but together they've built a complete, unintended picture of customer financial data. Every step followed policy, but the aggregate effect broke it.

AI agents are created quickly, modified frequently, and often abandoned silently, with access persisting and ownership disappearing [2]. Quarterly access reviews and periodic certifications cannot keep pace with identities whose lifecycle dynamics are compressed into minutes, hours, or days. From a Zero Trust perspective, an identity that cannot be seen cannot be governed, monitored, or audited [2].
Cybersecurity professionals must adopt a least privilege posture for AI agents and shift from governing access to governing intent [1][3]. Security frameworks for agentic systems should include intent binding, where every action carries the originating user's context throughout the execution chain; dynamic authorization that adapts to context at runtime; provenance tracking for verifiable records; human-in-the-loop oversight for high-risk actions; and contextual auditing that visualizes how queries evolve into actions across agents [3]. Effective discovery must be continuous and behavior-based, as quarterly scans prove insufficient when new agents can appear and disappear within minutes [2].
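As a closing illustration of the "dynamic authorization" idea summarized above, here is a hedged sketch of a runtime decision that weighs context and data sensitivity instead of relying on a static entitlement. The policy function, risk factors, and thresholds are assumptions chosen for the example, not any vendor's authorization engine.

```python
# Illustrative runtime (dynamic) authorization decision for an agent action.
# The risk factors and thresholds are assumptions chosen for the example.

def authorize(action: dict) -> str:
    """Score an agent action at runtime; return 'allow', 'review', or 'deny'."""
    risk = 0
    if action["data_sensitivity"] == "high":
        risk += 2
    if action["initiated_by_human"] is False:       # fully autonomous step
        risk += 1
    if action["intent_matches_original"] is False:  # goal has drifted
        risk += 3

    if risk >= 4:
        return "deny"
    if risk >= 2:
        return "review"   # human-in-the-loop before execution
    return "allow"


if __name__ == "__main__":
    print(authorize({"data_sensitivity": "low",
                     "initiated_by_human": True,
                     "intent_matches_original": True}))   # allow
    print(authorize({"data_sensitivity": "high",
                     "initiated_by_human": False,
                     "intent_matches_original": False}))  # deny
```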
CISOs face mounting pressure to implement AI agent identity management as a new security control plane before these vulnerabilities enable widespread breaches across enterprise environments.
Summarized by Navi