4 Sources
4 Sources
[1]
Enterprises are not prepared for a world of malicious AI agents
Identity management is broken when it comes to AI agents. AI agents expand the threat surface of organizations. Part of the solution will be AI agents automating security. As enterprises begin implementing artificial intelligence agents, senior executives are on alert about the technology's risks but also unprepared, according to Nikesh Arora, chief executive of cybersecurity giant Palo Alto Networks. "There is beginning to be a realization that as we start to deploy AI, we're going to need security," said Arora to a media briefing in which I participated. "And I think the most amount of consternation is around the agent part," he said, "because customers are concerned that if they don't have visibility to the agents, if they don't understand what credentials agents have, it's going to be the Wild West in their enterprise platforms." Also: The best VPN services (and how to choose the right one for you) AI agents are commonly defined as artificial intelligence programs that have been granted access to resources external to the large language model itself, enabling a program to carry out a broader variety of actions. The approach could be a chatbot, such as ChatGPT, that has access to a corporate database via a technique like retrieval-augmented generation (RAG). An agent could require a more complex arrangement, such as the bot invoking a wide array of function calls to various programs simultaneously via, for example, the Model Context Protocol standard. The AI models can then invoke non-AI programs and orchestrate their operation in concert. All commercial software packages are adding agentic functions that automate some of the work a person would traditionally perform manually. The thrust of the problem is that the AI agents will have access to corporate systems and sensitive information in many of the same ways as human workers, but the technology to manage that access -- including verifying the identity of an AI agent, and verifying the things they have privileged access to -- is poorly organized for the rapid expansion of the workforce via agents. Although there is consternation, organizations don't yet fully grasp the enormity of securing agents, said Arora. Also: Even the best AI agents are thwarted by this protocol - what can be done "It requires tons of infrastructure investment, it requires tons of planning. And that's what worries me, is that our enterprises are still under the illusion that they are extremely secure." The problem is made more acute, said Arora, by the fact that bad actors are ramping up efforts to use agents to infiltrate systems and exfiltrate data, increasing the number of entities that must be verified or rejected for access. The lack of preparedness stems from the underdevelopment of techniques for identifying, authenticating, and granting access, said Arora. Most users in an organization are not regularly tracked, he said. "Today, the industry is well covered in the privileged access side," said Arora, referring to techniques known as privileged access management (PAM), which keeps track of a subset of users who are granted the greatest number of permissions. That process, however, leaves a big gap across the rest of the workforce. Also: RAG can make AI models riskier and less reliable, new research shows "We know what those people are doing, but we have no idea what the rest of those 90% of our employees are doing," said Arora, "because it's too expensive to track every employee today." Arora suggested that the approach is insufficient as agents expand the threat surface by being used to handle more tasks. Because "an [AI] agent is also a privileged access user, and also a regular user at some point in time," then any agent once created may gain access to "the crown jewels" of an organization at some point during the course of their functioning. As machines gain privileged access, "Ideally, I want to know all of my non-human identities, and be able to find them in one place and trace them." Also: AI usage is stalling out at work from lack of education and support Current "dashboards" of identity systems are not engineered to track the breadth of agents gaining access to this or that system, said Arora. "An agent needs the ability to act. The ability to act requires you to have some access to actions in some sort of control pane," he explained. "Those actions today are not easily configured in the industry on a cross-vendor basis. So, orchestration platforms are the place where these actions are actually configured." The threat is heightened by nation-states scaling up cyberattacks and by other parties seeking to compromise privileged users' credentials. "We are seeing smishing attacks, and high-stakes credential attacks across the entire population of an enterprise," said Arora, referring to "phishing via text message." These automatically generated texts aim to lure smartphone users into disclosing sensitive information, such as social security numbers, to escalate an attack on an organization by impersonating privileged users. Palo Alto's research has identified 194,000 internet domains being used to propagate smishing attacks. Arora's pitch to clients for dealing with this issue is twofold. First, his company is integrating the tools gained through this year's acquisition of identity management firm CyberArk. Palo Alto has never sold any identity management products, but Arora believes his firm can unify what is a fragmented collection of tools. "I think with the core and corpus of CyberArk, we are going to be able to expand their capabilities past just the privileged users across the entire enterprise and be able to provide a cohesive platform for identity," said Arora. "With the arrival of agentic AI [...] the opportunity is now ripe for our customers to take a look at it and say, 'How many identity systems do I have? How are all my credentials managed across the cloud, across production workloads, across the privilege space, across the IAM [identity and access management] space?'" The second prong of a solution, he said, is to use more agentic technology in the security products, to automate some of the tasks associated with a chief information security officer and their teams. "As we start talking about agentic AI, we start talking about agents or automated workflows doing more and more of the work," he said. Also: Is your company spending big on new tech? Here are 5 ways to prove it's paying off To that end, Arora is pitching a new offering, Cortex AgentiX, which employs automation trained on "1.2 billion real-world playbook executions" of cyber threats. The various agent components can automatically hunt for "emerging adversary techniques," the company said. The tools can analyze computing endpoints, such as PCs or email systems, to gather forensic data after attacks for security operations center (SOC) analysts to make a human decision about how to proceed with remediation. "We're taking what is a task that is manually impossible," Arora said of the AgentiX techniques. "You can't process terabytes of data manually and go figure out what the problem is and solve the problem," he said. "So, SOC analysts are now going to spend their time looking at the complex problems, saying, 'How do I solve the problem?' And they'll have all the data that they need to solve the problem." Arora was quick to add that Palo Alto's products will still largely involve approvals by SOC analysts. "Most of our agents will have humans in the middle where our customers will be able to see the work that is done by the agent, confirm it, and then go forth and take the action," he said. Over time, Arora said that greater autonomy may be granted to AI agents to handle security: "As we get better at it, we're going to allow our customers to say, 'Okay, I've done this five times with me watching it [the AI agent], it's doing it right, I'm going to approve it, allow it to act on my behalf.'"
[2]
Why your SOC's new AI agent might be a malicious actor in disguise
Standing up and running a modern Security Operations Center (SOC) is no small feat. Most organizations -- especially mid-sized enterprises -- simply don't have the time, budget, or specialized staff to build one in-house, let alone keep up with the pace of innovation. That's why many are turning to managed security providers. But not all are created equal -- especially when it comes to their use of AI and automation. As cybersecurity threats grow in speed, sophistication, and scale, security operations teams are turning to multi-agent systems (MAS) to extend their capabilities. These systems -- made up of intelligent, autonomous agents -- offer a way to scale threat detection and response while reducing analyst fatigue and response time. However, deploying a MAS in a SOC is far from trivial. It's not just about writing clever code or connecting a few APIs. Without the right safeguards, these autonomous systems can become a dangerous liability. Multi-agent systems for incident response must function collaboratively, reason independently, and make timely, high-stakes decisions -- often in complex and hostile environments. From vulnerabilities and hallucinations to autonomy and trust, MAS introduces a whole new set of technical challenges that teams must solve for AI to truly become a force multiplier in cybersecurity, rather than a threat itself. For MAS to work effectively in a SOC environment, agents must coordinate seamlessly across disparate systems -- sharing intelligence, workload, and intent. This coordination is complex. Agents need robust communication protocols that prevent data bottlenecks and race conditions. Moreover, they must share a common understanding of terminology and context, even if they're parsing information from entirely different sources (e.g., SIEM logs, EDR telemetry, cloud identity signals). Without semantic alignment and synchronization, agents risk working in silos -- or worse, generating conflicting conclusions. While MAS promises scalability, it also introduces a paradox: the more agents in the system, the harder it becomes to manage their interactions. As agents proliferate, the number of potential interactions increases exponentially. This makes system design, resource management, and fault tolerance significantly more challenging. To maintain speed and reliability, developers must build dynamic load-balancing, state management, and orchestration frameworks that prevent the system from tipping into chaos as it scales. The whole point of MAS is autonomy -- but full independence can be dangerous in high-stakes environments like incident response. Developers must walk a fine line between empowering agents to act decisively and maintaining enough oversight to prevent cascading errors. This requires robust decision-making frameworks, logic validation, and often a "human-in-the-loop" failsafe to ensure agents can escalate edge cases when needed. The system must support policy-driven autonomy -- where rules of engagement and confidence thresholds dictate when an agent can act alone vs. seek review. One of the most insidious challenges in multi-agent AI systems is hallucination -- when agents confidently generate incorrect or misleading outputs. In the context of security operations, this could mean misclassifying an internal misconfiguration as an active threat or vice versa. Hallucinations can stem from incomplete training data, poorly tuned models, or flawed logic chains passed between agents. Preventing them requires strong grounding techniques, rigorous system validation, and tight feedback loops where agents can check each other's reasoning or flag anomalies to a supervising human analyst. MAS must operate within environments that are often under active attack. Each agent becomes a potential attack surface -- and a potential insider threat if compromised by an external actor. Security measures must include encrypted communication between agents, strict access control policies, and agent-level audit logging. Additionally, MAS must be built with privacy by design, ensuring that sensitive information is processed and stored in compliance with data protection laws like GDPR or HIPAA. Trustworthy agents are not just effective -- they're secure by default. Security tech stacks are notoriously fragmented. For MAS to work in a real-world SOC, agents must interoperate with a wide variety of platforms -- each with their own data schemas, APIs, and update cadences. This requires designing agents that can both translate and normalize data, often on the fly. It also means building modular, extensible frameworks that allow new agents or connectors to be added without disrupting the system as a whole. For multi-agent systems to succeed in security operations, human analysts need to trust what the agents are doing. That trust isn't built through blind faith -- it comes from transparency, auditability, and explainability. Below are several foundational strategies: Explainable outputs: Agents should provide not just answers, but reasoning chains -- summaries of the evidence, logic, and decision path used. Continuous feedback loops: Every human-validated or rejected outcome should feed back into the system to improve agent reasoning over time. Defined escalation paths: MAS should know when to act, when to pause, and when to escalate. Confidence thresholds and incident criticality scores help enforce this. Ethical AI guidelines: Development teams should follow a defined ethical framework to prevent bias, protect privacy, and ensure accountability. Multi-agent systems have the potential to fundamentally change how the cybersecurity industry responds to security incidents -- shifting from alert triage to autonomous, full-context investigation and resolution. However, that shift only happens if security professionals approach MAS with rigor. These systems must be designed not just for intelligence, but for interoperability, trust, and resilience against subversion. For developers, security architects, and AI scientists alike, the challenge isn't whether MAS can be powerful -- it's whether it can be built and implemented responsibly, with scale, and safety as a top priority. A system that isn't secure can be worse than no system at all. If we do, we won't just be automating SecOps. We'll be redefining it. We've featured the best encryption software.
[3]
The rise of agentic AI in cybersecurity
Agentic AI is quickly emerging as the next major disruption in the tech industry, moving AI from just chatbots into autonomous decision makers. Unlike traditional AI tools that require constant prompting, agentic AI operates with a degree of independence, learning, reasoning and acting in pursuit of specific goals. In fact, by 2028, one-third of enterprise applications will include agentic AI, up from less than 1% in 2024, with up to 15% of routine workplace decisions made autonomously. For enterprise leaders, this represents a huge change in how technology supports and shapes the business, particularly in the field of cybersecurity. Indeed, Agentic AI has the capacity to transform teams' ability to escalate the most critical risks at intake, ensure higher-quality submissions, and filter out duplicates so teams can focus on what matters. Yet the autonomy that makes it powerful also introduces new risks for security teams. Enterprise leaders need to understand both how Agentic AI can strengthen their defenses and what pitfalls to watch for when deploying it. Agentic AI is built around autonomous agents, with systems able to reason, adapt and take independent action. It's a departure from both conventional automation and earlier forms of AI. Traditional machine learning models largely produce outputs based on prompts or fixed parameters. Agentic AI, by contrast, can operate iteratively, evaluate context, plan a course of action, adapt when conditions change and improve through experience. The cybersecurity industry is moving away from the past where bots just flag suspicious logins and towards a connected system that autonomously investigates, escalates priority vulnerabilities, and provides actionable insights to the user. Agentic AI is well suited to some of the most pressing challenges in security: Threat detection and response: Security operations centers (SOCs) are often inundated with alerts, many of them false positives. Agentic AI can autonomously investigate routine alerts, escalating only those that require human judgment. This reduces "alert fatigue" and allows analysts to focus on high-priority incidents. The impact can lower the time to detect issues, shortening the window attackers have to exploit vulnerabilities, while reducing the time to remediation. Penetration testing: Agentic AI can accelerate vulnerability discovery by scanning attack surfaces and finding common issues at scale. Human testers are then freed to focus on the creative, high-impact aspects of testing that machines cannot replicate. The result is broader coverage and more frequent, cost-effective testing. Vulnerability management and validation: Noise in vulnerability management is at an all-time high, frustrating in-house security teams. Prioritizing which vulnerabilities to remediate by validating what is real is a complex task. This requires historical context, business impact analysis and technical expertise. Agentic AI can perform much of the groundwork, such as standardizing reports, comparing with past incidents and recommending actions, while keeping humans in the loop to prioritize business impact in final decisions. Scalability: Recruiting and retaining skilled analysts is difficult and expensive. By automating large parts of security workflows, Agentic AI can chain tools together and adapt to feedback. This enables organizations to limit cost increases, and keep staff focused on strategic priorities that require human ingenuity. Of course, using agentic AI to strengthen cybersecurity is only half the story. The security of agentic AI itself must also be treated as a priority. Otherwise, the very systems designed to protect the enterprise could become new attack vectors. While autonomy brings advantages, it also requires careful oversight. Left unchecked, agentic AI can misjudge, misfire or be manipulated. Therefore, it's important to pay close attention to the following areas: Prompt injection: As AI agents interact with external data sources, attackers can embed malicious instructions designed to steer outcomes. A prompt injection that seems trivial in a chatbot can cause far greater damage when an autonomous agent is making security decisions. Therefore, it's essential to maintain continuous monitoring and implement robust guardrails. Data access and privacy: AI systems excel at processing large datasets, which creates risk if access controls are weak. As a result, sensitive information buried in overlooked repositories can be inadvertently exposed. Organizations need to have strong data governance and strict control of training and operational datasets. Jailbreaking: Even with guardrails, threat actors may attempt to "jailbreak" an AI system, convincing it to ignore restrictions and act outside its intended scope. Combined with prompt injection, this could lead to severe outcomes, such as unauthorized financial transfers. To reduce these risks, organizations should implement ongoing red teaming to stress test AI systems. As AI adoption is expected to grow at an annual rate of 36.6% between 2023 and 2030, this is both an opportunity and a challenge. If enterprises do not embrace agentic AI, the asymmetry between attackers and defenders will widen, particularly given the ongoing skills shortage in cybersecurity. With it, security teams can multiply their capacity, reduce time-to-response and move from reactive firefighting to continuous threat management. To achieve balance, agentic AI should be deployed with clear governance frameworks, human oversight at critical stages and a strong focus on data security. Collaboration between developers, security professionals and policymakers will be central to ensuring these systems serve the interests of organizations and wider society. We've featured the best AI website builder.
[4]
AI Becomes Both Tool and Target in Cybersecurity | PYMNTS.com
By completing this form, you agree to receive marketing communications from PYMNTS and to the sharing of your information with our sponsor, if applicable, in accordance with our Privacy Policy and Terms and Conditions. Here are the details. The company introduced a beta version of a solution called Aardvark. Powered by GPT-5, it acts as an autonomous "security researcher" that continuously scans source code to identify and fix vulnerabilities in real time. Unlike traditional methods such as fuzzing or static analysis, Aardvark uses large language model-based reasoning to understand how code behaves, determine where it might break, and propose targeted fixes. The system integrates directly with GitHub and OpenAI Codex, reviewing every code commit, running sandboxed validation tests to confirm exploitability, and even generating annotated patches for human approval. OpenAI describes Aardvark as a co-worker for engineers, augmenting rather than replacing human oversight by automating the tedious, error-prone parts of vulnerability discovery. According to OpenAI, Aardvark has already been running across internal and partner codebases for several months, detecting meaningful vulnerabilities and achieving a 92% recall rate in benchmark tests. Beyond enterprise use, the system has responsibly disclosed multiple vulnerabilities in open-source software, ten of which have received CVE identifiers. OpenAI positions Aardvark as part of a broader "defender-first" approach to security, one that democratizes access to high-end expertise and enables continuous, scalable protection across modern software ecosystems. The company is offering pro bono scanning to select open-source projects and has opened a private beta to refine accuracy and reporting workflows before broader release. Another report, this one from CSO.com, says that agentic AI is emerging as one of the most transformative forces in cybersecurity. The technology's ability to process data continuously and react in real time enables it to detect, contain, and neutralize threats at a scale and speed that human teams cannot match. Security leaders such as Zoom CISO Sandra McLeod and Dell Technologies CSO John Scimone told CSO that autonomous detection, self-healing responses, and AI-driven orchestration are now essential for reducing the time a threat remains active. By taking over high-volume, time-sensitive monitoring tasks, agentic AI lets security teams concentrate on strategy and risk mitigation rather than routine operations. The article outlines seven leading use cases where AI agents are already reshaping defense capabilities, from autonomous threat detection and Security Operations Center (SOC) support to automated triage, help desk automation, and real-time zero-trust enforcement. Deloitte's Naresh Persaud highlights how AI agents can draft forensic reports and scale SOC workflows dynamically, while Radware's Pascal Geenens notes that agentic systems close the gap between detection and response by automatically enriching and correlating data across threat feeds. The piece also underscores the technology's human-capital benefit: AI agents, as Palo Alto Networks' Rahul Ramachandran argues, act as a "force multiplier" for cybersecurity teams facing persistent talent shortages. Beyond defense, AI is also improving brand protection by spotting phishing domains and scam ads before they spread. "Agentic AI will level the playing field by enabling defenders to respond with equal speed and expansive breadth," said John Scimone, president and CSO at Dell Technologies. Promising developments for sure but no silver bullets. Another report last week detailed some of the issues that agentic AI will bring to the CISO tasked with developing it. A Bleeping Computer article warns that the rise of autonomous AI agents is upending traditional enterprise security models by creating a new category of non-human identities (NHIs). Unlike human users, these AI agents make decisions, act across systems, and persist in environments without oversight -- creating what the article calls "agent sprawl." Many continue operating long after their intended use, holding active credentials that attackers can exploit. The piece identifies three key technical risks: shadow agents that outlive their purpose, privilege escalation through over-permissioned agents, and large-scale data exfiltration caused by poorly scoped or compromised integrations. Together, these vulnerabilities expose a governance gap that conventional identity and access management (IAM) systems are ill-equipped to handle. The article argues for an "identity-first" approach to agentic AI security, one that treats every AI agent as a managed digital identity with tightly scoped permissions, ownership, and auditability. Legacy tools fail, it says, because they assume human intent and static interaction patterns, while AI agents spawn sub-agents, chain API calls, and operate autonomously across applications. To counter that complexity, CISOs are urged to take immediate steps: inventory all agents, assign human owners, enforce least-privilege access, propagate identity context across multi-agent chains, and monitor anomalous behavior. Token Security concludes that the real danger lies not in a specific exploit, but in the "illusion of safety" -- the assumption that trusted credentials equal trusted behavior. Without identity visibility and control, the article cautions, agentic AI could become the enterprise's next major attack vector.
Share
Share
Copy Link
Organizations are struggling to secure AI agents as they expand enterprise attack surfaces, while simultaneously deploying these same AI systems to enhance cybersecurity defenses. The dual nature of AI as both security tool and vulnerability creates new challenges for identity management and threat detection.
As enterprises increasingly deploy artificial intelligence agents across their operations, a critical security gap is emerging that threatens to undermine organizational defenses. According to Nikesh Arora, CEO of cybersecurity giant Palo Alto Networks, businesses are fundamentally unprepared for the security implications of AI agent deployment, creating what he describes as potential "Wild West" conditions within enterprise platforms
1
.
Source: TechRadar
The core issue stems from identity management systems that were designed for human users but are now being tasked with managing AI agents that operate with similar privileges and access rights. These agents, which can range from simple chatbots with database access to complex systems invoking multiple function calls simultaneously, represent a dramatic expansion of the enterprise attack surface
1
.Current privileged access management (PAM) systems cover only a small subset of users with the highest permissions, leaving approximately 90% of enterprise activities untracked due to cost constraints. This gap becomes critical when AI agents are deployed, as they can function as both privileged and regular users depending on their tasks, potentially gaining access to "crown jewels" of organizational data
1
.The challenge is compounded by what security experts term "agent sprawl" - the proliferation of AI agents that continue operating long after their intended purpose, maintaining active credentials that attackers can exploit. These non-human identities (NHIs) create shadow agents that outlive their usefulness while retaining dangerous levels of system access
4
.The deployment of multi-agent systems (MAS) in Security Operations Centers introduces additional complexity. While these systems promise to scale threat detection and reduce analyst fatigue, they create new attack vectors through their autonomous nature. Key vulnerabilities include prompt injection attacks, where malicious instructions can manipulate agent behavior, and jailbreaking attempts that convince AI systems to ignore safety restrictions
2
.
Source: PYMNTS
Hallucinations present another significant risk, where agents confidently generate incorrect outputs that could misclassify threats or create false security alerts. In security contexts, such errors could lead to inappropriate responses to genuine threats or unnecessary escalation of benign activities
2
.Related Stories
Despite these challenges, agentic AI is simultaneously emerging as a transformative force in cybersecurity defense. By 2028, analysts predict that one-third of enterprise applications will include agentic AI, with up to 15% of routine workplace decisions made autonomously
3
.
Source: TechRadar
The technology excels in several key areas: autonomous threat detection and response, where AI can investigate routine alerts and escalate only high-priority incidents; penetration testing that accelerates vulnerability discovery; and vulnerability management that helps prioritize remediation efforts. Security leaders from major organizations like Zoom and Dell Technologies report that autonomous detection and self-healing responses are becoming essential for reducing threat response times
4
.OpenAI has introduced Aardvark, a GPT-5 powered autonomous security researcher that represents a significant advancement in AI-driven vulnerability detection. The system continuously scans source code, identifies vulnerabilities in real-time, and proposes targeted fixes with a 92% recall rate in benchmark tests. Unlike traditional security tools, Aardvark uses large language model reasoning to understand code behavior and predict potential failure points
4
.The system integrates directly with development workflows through GitHub and OpenAI Codex, reviewing every code commit and running sandboxed validation tests. It has already detected meaningful vulnerabilities across internal and partner codebases, with ten discoveries receiving CVE identifiers in open-source software
4
.Summarized by
Navi
[3]
15 Oct 2025•Technology

11 Nov 2025•Technology

04 Aug 2025•Technology

1
Technology

2
Technology

3
Business and Economy
