[1]
Enterprises are not prepared for a world of malicious AI agents
Identity management is broken when it comes to AI agents. AI agents expand the threat surface of organizations. Part of the solution will be AI agents automating security.

As enterprises begin implementing artificial intelligence agents, senior executives are on alert about the technology's risks but also unprepared, according to Nikesh Arora, chief executive of cybersecurity giant Palo Alto Networks.

"There is beginning to be a realization that as we start to deploy AI, we're going to need security," said Arora during a media briefing in which I participated. "And I think the most amount of consternation is around the agent part," he said, "because customers are concerned that if they don't have visibility to the agents, if they don't understand what credentials agents have, it's going to be the Wild West in their enterprise platforms."

AI agents are commonly defined as artificial intelligence programs that have been granted access to resources external to the large language model itself, enabling the program to carry out a broader variety of actions. The approach could be a chatbot, such as ChatGPT, that has access to a corporate database via a technique like retrieval-augmented generation (RAG). An agent could also involve a more complex arrangement, such as the bot invoking a wide array of function calls to various programs simultaneously via, for example, the Model Context Protocol standard. The AI models can then invoke non-AI programs and orchestrate their operation in concert. Commercial software packages across the board are adding agentic functions that automate some of the work a person would traditionally perform manually.

The thrust of the problem is that AI agents will have access to corporate systems and sensitive information in many of the same ways as human workers, but the technology to manage that access -- including verifying the identity of an AI agent and auditing what it has privileged access to -- is ill-prepared for the rapid expansion of the workforce via agents.

Although there is consternation, organizations don't yet fully grasp the scale of the task of securing agents, said Arora. "It requires tons of infrastructure investment, it requires tons of planning. And that's what worries me, is that our enterprises are still under the illusion that they are extremely secure."

The problem is made more acute, said Arora, by the fact that bad actors are ramping up efforts to use agents to infiltrate systems and exfiltrate data, increasing the number of entities that must be verified or rejected for access.

The lack of preparedness stems from the underdevelopment of techniques for identifying, authenticating, and granting access, said Arora. Most users in an organization are not regularly tracked, he said. "Today, the industry is well covered in the privileged access side," said Arora, referring to techniques known as privileged access management (PAM), which track the subset of users who are granted the greatest number of permissions. That process, however, leaves a big gap across the rest of the workforce: "We know what those people are doing, but we have no idea what the rest of those 90% of our employees are doing," said Arora, "because it's too expensive to track every employee today."
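To make the definition concrete, the loop below is a minimal, self-contained sketch of the agent pattern: a model decides whether to call a tool that lives outside the LLM, and the tool runs with whatever credentials it holds. The stubbed model and the query_database tool are hypothetical stand-ins, not any vendor's API.

```python
# Minimal sketch of the agent pattern described above: a model-driven loop
# that has been granted a tool outside the LLM itself. The model is stubbed
# out; in a real system it would be a hosted LLM, and the tool might sit
# behind a Model Context Protocol server. All names here are hypothetical.

def stub_llm(prompt: str) -> dict:
    """Stand-in for a real model call: decides whether to invoke a tool."""
    if "[tool result]" in prompt:
        return {"action": "final_answer",
                "args": {"text": prompt.split("[tool result] ")[-1]}}
    if "revenue" in prompt:
        return {"action": "query_database", "args": {"table": "finance"}}
    return {"action": "final_answer", "args": {"text": "nothing to do"}}

def query_database(table: str) -> str:
    """Hypothetical tool. This is the security-relevant boundary: the agent
    acts with whatever credentials and scope this function embodies."""
    return f"rows from {table}"

TOOLS = {"query_database": query_database}

def run_agent(task: str, max_steps: int = 5) -> str:
    context = task
    for _ in range(max_steps):
        decision = stub_llm(context)
        if decision["action"] == "final_answer":
            return decision["args"]["text"]
        result = TOOLS[decision["action"]](**decision["args"])
        context += f"\n[tool result] {result}"
    return "step limit reached"

print(run_agent("Report quarterly revenue"))  # -> "rows from finance"
```

The security question Arora raises lives at the tool boundary: whatever credentials query_database embodies are, in effect, the agent's identity.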
Arora suggested that this approach is insufficient as agents take on more tasks and expand the threat surface. Because "an [AI] agent is also a privileged access user, and also a regular user at some point in time," any agent, once created, may gain access to "the crown jewels" of an organization at some point during the course of its operation. As machines gain privileged access, he said, "Ideally, I want to know all of my non-human identities, and be able to find them in one place and trace them."

Current "dashboards" of identity systems are not engineered to track the breadth of agents gaining access to this or that system, said Arora. "An agent needs the ability to act. The ability to act requires you to have some access to actions in some sort of control plane," he explained. "Those actions today are not easily configured in the industry on a cross-vendor basis. So, orchestration platforms are the place where these actions are actually configured."

The threat is heightened by nation-states scaling up cyberattacks and by other parties seeking to compromise privileged users' credentials. "We are seeing smishing attacks, and high-stakes credential attacks across the entire population of an enterprise," said Arora, referring to smishing, or "phishing via text message." These automatically generated texts aim to lure smartphone users into disclosing sensitive information, such as Social Security numbers, to escalate an attack on an organization by impersonating privileged users. Palo Alto's research has identified 194,000 internet domains being used to propagate smishing attacks.

Arora's pitch to clients for dealing with this issue is twofold. First, his company is integrating the tools gained through this year's acquisition of identity management firm CyberArk. Palo Alto has never before sold identity management products, but Arora believes his firm can unify what is a fragmented collection of tools. "I think with the core and corpus of CyberArk, we are going to be able to expand their capabilities past just the privileged users across the entire enterprise and be able to provide a cohesive platform for identity," said Arora. "With the arrival of agentic AI [...] the opportunity is now ripe for our customers to take a look at it and say, 'How many identity systems do I have? How are all my credentials managed across the cloud, across production workloads, across the privilege space, across the IAM [identity and access management] space?'"

The second prong of a solution, he said, is to use more agentic technology in the security products themselves, to automate some of the tasks associated with a chief information security officer and their teams. "As we start talking about agentic AI, we start talking about agents or automated workflows doing more and more of the work," he said.

To that end, Arora is pitching a new offering, Cortex AgentiX, which employs automation trained on "1.2 billion real-world playbook executions" responding to cyber threats. The various agent components can automatically hunt for "emerging adversary techniques," the company said. The tools can analyze computing endpoints, such as PCs or email systems, to gather forensic data after attacks, giving security operations center (SOC) analysts what they need to make a human decision about how to proceed with remediation.
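Arora's goal of finding all non-human identities "in one place" and tracing them maps naturally onto an identity registry. The sketch below is illustrative only, assuming a homegrown registry; the field names are invented for the example.

```python
# Minimal sketch of a non-human identity (NHI) registry: every agent is
# recorded in one place with an accountable owner, the scopes its credential
# carries, and a trace of what it actually touched. Field names are invented.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentIdentity:
    agent_id: str
    owner: str                 # human accountable for this agent
    scopes: set                # what the agent's credential may do
    access_log: list = field(default_factory=list)

REGISTRY = {}

def register(agent_id: str, owner: str, scopes: set) -> None:
    REGISTRY[agent_id] = AgentIdentity(agent_id, owner, scopes)

def record_access(agent_id: str, resource: str) -> None:
    """Trace each access so an auditor can answer: what did this agent do?"""
    REGISTRY[agent_id].access_log.append((datetime.now(timezone.utc), resource))

register("invoice-bot", owner="alice@example.com", scopes={"erp:read"})
record_access("invoice-bot", "erp/invoices/2025-10")
print(REGISTRY["invoice-bot"].access_log)
```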
"We're taking what is a task that is manually impossible," Arora said of the AgentiX techniques. "You can't process terabytes of data manually and go figure out what the problem is and solve the problem," he said. "So, SOC analysts are now going to spend their time looking at the complex problems, saying, 'How do I solve the problem?' And they'll have all the data that they need to solve the problem." Arora was quick to add that Palo Alto's products will still largely involve approvals by SOC analysts. "Most of our agents will have humans in the middle where our customers will be able to see the work that is done by the agent, confirm it, and then go forth and take the action," he said. Over time, Arora said that greater autonomy may be granted to AI agents to handle security: "As we get better at it, we're going to allow our customers to say, 'Okay, I've done this five times with me watching it [the AI agent], it's doing it right, I'm going to approve it, allow it to act on my behalf.'"
[2]
AI Becomes Both Tool and Target in Cybersecurity | PYMNTS.com
OpenAI introduced a beta version of a solution called Aardvark. Powered by GPT-5, it acts as an autonomous "security researcher" that continuously scans source code to identify and fix vulnerabilities in real time. Unlike traditional methods such as fuzzing or static analysis, Aardvark uses large language model-based reasoning to understand how code behaves, determine where it might break, and propose targeted fixes. The system integrates directly with GitHub and OpenAI Codex, reviewing every code commit, running sandboxed validation tests to confirm exploitability, and even generating annotated patches for human approval. OpenAI describes Aardvark as a co-worker for engineers, augmenting rather than replacing human oversight by automating the tedious, error-prone parts of vulnerability discovery.

According to OpenAI, Aardvark has already been running across internal and partner codebases for several months, detecting meaningful vulnerabilities and achieving a 92% recall rate in benchmark tests. Beyond enterprise use, the system has responsibly disclosed multiple vulnerabilities in open-source software, ten of which have received CVE identifiers. OpenAI positions Aardvark as part of a broader "defender-first" approach to security, one that democratizes access to high-end expertise and enables continuous, scalable protection across modern software ecosystems. The company is offering pro bono scanning to select open-source projects and has opened a private beta to refine accuracy and reporting workflows before broader release.

Another report, this one from CSO.com, says that agentic AI is emerging as one of the most transformative forces in cybersecurity. The technology's ability to process data continuously and react in real time enables it to detect, contain, and neutralize threats at a scale and speed that human teams cannot match. Security leaders such as Zoom CISO Sandra McLeod and Dell Technologies CSO John Scimone told CSO that autonomous detection, self-healing responses, and AI-driven orchestration are now essential for reducing the time a threat remains active. By taking over high-volume, time-sensitive monitoring tasks, agentic AI lets security teams concentrate on strategy and risk mitigation rather than routine operations.

The article outlines seven leading use cases where AI agents are already reshaping defense capabilities, from autonomous threat detection and Security Operations Center (SOC) support to automated triage, help desk automation, and real-time zero-trust enforcement. Deloitte's Naresh Persaud highlights how AI agents can draft forensic reports and scale SOC workflows dynamically, while Radware's Pascal Geenens notes that agentic systems close the gap between detection and response by automatically enriching and correlating data across threat feeds. The piece also underscores the technology's human-capital benefit: AI agents, as Palo Alto Networks' Rahul Ramachandran argues, act as a "force multiplier" for cybersecurity teams facing persistent talent shortages. Beyond defense, AI is also improving brand protection by spotting phishing domains and scam ads before they spread.
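Geenens' point about enriching and correlating data across threat feeds is straightforward to picture in code: when an alert fires, an agent joins it with threat-intelligence context and routes it before a human ever sees it. A minimal sketch; the feeds, fields, and confidence threshold are illustrative assumptions.

```python
# Minimal sketch of automated alert enrichment: correlate a raw alert with
# threat-intel feeds so responders start from context, not a bare indicator.
# Feed contents, field names, and the routing threshold are illustrative.

THREAT_FEEDS = {
    "203.0.113.7": {"feed": "botnet-tracker", "confidence": 0.9},
    "198.51.100.2": {"feed": "tor-exit-nodes", "confidence": 0.6},
}

def enrich(alert: dict) -> dict:
    intel = THREAT_FEEDS.get(alert["src_ip"])
    alert["intel"] = intel or {"feed": None, "confidence": 0.0}
    # Correlation collapses the detection-to-response gap: high-confidence
    # matches can be routed straight to containment playbooks.
    alert["route"] = ("containment" if alert["intel"]["confidence"] >= 0.8
                      else "analyst-triage")
    return alert

print(enrich({"src_ip": "203.0.113.7", "event": "ssh-bruteforce"}))
```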
"Agentic AI will level the playing field by enabling defenders to respond with equal speed and expansive breadth," said John Scimone, president and CSO at Dell Technologies. Promising developments for sure but no silver bullets. Another report last week detailed some of the issues that agentic AI will bring to the CISO tasked with developing it. A Bleeping Computer article warns that the rise of autonomous AI agents is upending traditional enterprise security models by creating a new category of non-human identities (NHIs). Unlike human users, these AI agents make decisions, act across systems, and persist in environments without oversight -- creating what the article calls "agent sprawl." Many continue operating long after their intended use, holding active credentials that attackers can exploit. The piece identifies three key technical risks: shadow agents that outlive their purpose, privilege escalation through over-permissioned agents, and large-scale data exfiltration caused by poorly scoped or compromised integrations. Together, these vulnerabilities expose a governance gap that conventional identity and access management (IAM) systems are ill-equipped to handle. The article argues for an "identity-first" approach to agentic AI security, one that treats every AI agent as a managed digital identity with tightly scoped permissions, ownership, and auditability. Legacy tools fail, it says, because they assume human intent and static interaction patterns, while AI agents spawn sub-agents, chain API calls, and operate autonomously across applications. To counter that complexity, CISOs are urged to take immediate steps: inventory all agents, assign human owners, enforce least-privilege access, propagate identity context across multi-agent chains, and monitor anomalous behavior. Token Security concludes that the real danger lies not in a specific exploit, but in the "illusion of safety" -- the assumption that trusted credentials equal trusted behavior. Without identity visibility and control, the article cautions, agentic AI could become the enterprise's next major attack vector.
As enterprises deploy AI agents, cybersecurity leaders warn of unprecedented risks from malicious actors and inadequate identity management systems. New tools like OpenAI's Aardvark show promise for defense, but the rapid expansion of non-human identities creates vulnerabilities that traditional security frameworks cannot handle.
As enterprises rapidly deploy artificial intelligence agents across their operations, cybersecurity leaders are sounding alarms about unprecedented security challenges that existing infrastructure cannot adequately address. Nikesh Arora, CEO of Palo Alto Networks, warns that organizations face a "Wild West" scenario as AI agents gain access to corporate systems without proper visibility or credential management [1].
AI agents, defined as artificial intelligence programs granted access to external resources beyond their core language models, are expanding rapidly across enterprise environments. These systems can access corporate databases through retrieval-augmented generation (RAG) techniques or invoke complex function calls across multiple programs simultaneously. The challenge lies in managing these non-human identities that operate with many of the same privileges as human workers but without adequate oversight mechanisms [1].
The fundamental problem stems from the inadequacy of current identity and access management (IAM) systems to handle the unique characteristics of AI agents. Unlike human users who follow predictable patterns, AI agents can spawn sub-agents, chain API calls, and operate autonomously across applications, creating what experts term "agent sprawl" [2].
Arora highlights a critical gap in current security practices: while privileged access management (PAM) systems effectively monitor high-permission users, approximately 90% of an organization's workforce operates without comprehensive tracking due to cost constraints. This becomes problematic when AI agents function as both privileged and regular users, potentially gaining access to an organization's "crown jewels" during their operational lifecycle [1].
The situation is further complicated by the persistence of AI agents in enterprise environments. Many continue operating long after their intended use, maintaining active credentials that attackers can exploit. This creates three primary technical risks: shadow agents that outlive their purpose, privilege escalation through over-permissioned agents, and large-scale data exfiltration caused by poorly scoped integrations [2].
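The "shadow agent" risk is concrete enough to check for mechanically: flag any agent credential that is past its intended lifetime or has gone unused for a long stretch. A minimal sketch; the record format and the 30-day staleness threshold are illustrative assumptions.

```python
# Minimal "shadow agent" sweep: flag agent credentials that are still active
# but past their intended lifetime, or unused long enough to suggest the
# agent has outlived its purpose. Record format and thresholds are illustrative.
from datetime import datetime, timedelta, timezone

NOW = datetime.now(timezone.utc)
STALE_AFTER = timedelta(days=30)

agents = [
    {"id": "etl-bot",  "expires": NOW + timedelta(days=10),
     "last_used": NOW - timedelta(days=2)},
    {"id": "demo-bot", "expires": NOW - timedelta(days=90),
     "last_used": NOW - timedelta(days=120)},   # outlived its purpose
]

def shadow_agents(records):
    """Yield IDs of agents whose credentials should be reviewed or revoked."""
    for agent in records:
        if agent["expires"] < NOW or NOW - agent["last_used"] > STALE_AFTER:
            yield agent["id"]

print(list(shadow_agents(agents)))  # ['demo-bot']
```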
Despite the challenges, AI technology also presents significant opportunities for enhancing cybersecurity defenses. OpenAI has introduced Aardvark, a beta solution powered by GPT-5 that functions as an autonomous "security researcher" continuously scanning source code for vulnerabilities [2].
Unlike traditional security methods such as fuzzing or static analysis, Aardvark employs large language model-based reasoning to understand code behavior, identify potential failure points, and propose targeted fixes. The system integrates directly with GitHub and OpenAI Codex, reviewing every code commit and running sandboxed validation tests to confirm exploitability. Initial results show promise, with Aardvark achieving a 92% recall rate in benchmark tests and successfully identifying vulnerabilities that have received CVE identifiers [2].
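Aardvark's internals are not public, but the scan-validate-patch loop described above -- review each commit, reason about behavior, confirm exploitability in a sandbox, and queue a patch for human approval -- can be outlined roughly as below. Every function here is a hypothetical stub, not OpenAI's implementation.

```python
# Outline of the scan-validate-patch loop described above. Aardvark's real
# implementation is not public: llm_review and sandbox_confirms are stubs,
# and all names here are hypothetical.

def llm_review(diff: str) -> list:
    """Stand-in for LLM-based reasoning over a commit diff."""
    findings = []
    if "os.system(" in diff:   # toy heuristic in place of model reasoning
        findings.append({
            "issue": "possible command injection",
            "patch": diff.replace("os.system(", "subprocess.run("),
        })
    return findings

def sandbox_confirms(diff: str, finding: dict) -> bool:
    """Stand-in for sandboxed validation that proves exploitability."""
    return True  # a real system would run a contained proof-of-concept

def scan_commit(diff: str) -> list:
    """Review one commit; return only confirmed findings, queued for humans."""
    confirmed = []
    for finding in llm_review(diff):
        if sandbox_confirms(diff, finding):
            finding["status"] = "awaiting human approval"
            confirmed.append(finding)
    return confirmed

print(scan_commit('os.system("ping " + user_supplied_host)'))
```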
Agentic AI is transforming cybersecurity operations through seven key use cases, including autonomous threat detection, Security Operations Center (SOC) support, automated triage, help desk automation, and real-time zero-trust enforcement. Security leaders from major organizations like Zoom and Dell Technologies emphasize that these systems enable detection, containment, and neutralization of threats at scales and speeds impossible for human teams [2].
The technology addresses critical human capital challenges in cybersecurity, acting as a "force multiplier" for teams facing persistent talent shortages. AI agents can draft forensic reports, scale SOC workflows dynamically, and automatically enrich and correlate data across threat feeds, closing the gap between detection and response [2].
Experts advocate for an "identity-first" approach to agentic AI security, treating every AI agent as a managed digital identity with tightly scoped permissions, clear ownership, and comprehensive auditability. This represents a fundamental shift from legacy tools that assume human intent and static interaction patterns [2].
The urgency of addressing these challenges is heightened by the increasing sophistication of threat actors, including nation-states scaling up cyberattacks and automated "smishing" campaigns targeting enterprise credentials. As Arora notes, the solution will require substantial infrastructure investment and planning, areas where many enterprises remain underprepared despite believing they maintain strong security postures [1].

Summarized by Navi