10 Sources
[1]
Rolling out AI? 5 security tactics your business can't get wrong - and why
Professionals must develop tactics to embrace AI without risk. The same capabilities that make AI useful also make it exploitable. In fact, the rate at which emerging technologies are advancing intensifies that uncomfortable reality by the minute. While professionals might not want to expose their organizations to new threats, they also recognize the risk of falling behind as other businesses seek to gain a competitive edge by implementing AI. So, what should you do about this challenging conundrum? Five business leaders share five ways that professionals can ensure great security in an age of AI. Barry Panayi, group chief data officer at Howden, an insurance intermediary group, said one of the big benefits of working for his organization is that many staff members know the cyber risks associated with AI. "Because we provide cyber insurance as a business, we have people who understand this area," he said. "So, therefore, it's not just a tech person who understands security, and it's not just a data or an AI specialist." As an executive charged with ensuring AI is implemented safely and securely, Panayi encouraged professionals across all organizations to boost their cyber credentials: "I think people have to know more about security in their roles." Panayi said the multifaceted nature of AI cybersecurity means professionals should expect new roles and responsibilities to emerge, with people sharing knowledge and swapping between teams to create a more powerful approach. "I know the best security specialists are the ones talking to my AI teams and asking them, 'How would this work, and how would that work?'" he said. "And the AI teams, conversely, speak to information security experts and ensure their processes are not a blocker as we look to make systems more secure." Nick Pearson, CIO at technology specialist Ricoh Europe, said that managing cybersecurity in an age of AI requires a multidimensional approach -- and he finds new dimensions almost every day. Pearson told ZDNET that professionals could feel overwhelmed by the breadth of threats associated with emerging technology. Yet his conversations with other experts, including Ricoh Europe's CISO, suggest that it's important to place AI cyber threats in context. "Great security still goes back to the basics of good practices," he said. "So, we secure by design, we've got standards, we've got capabilities, and we've got teams that analyze, check, and balance." Pearson said professionals should ensure that data is managed and governed effectively. Rather than reinventing the wheel, find a way to absorb AI into your existing frameworks. "Otherwise, you can end up with something separate from what is good practice on data leakage, for example, which, in our case, has been there for 15 years," he said. Martin Hardy, cyber portfolio and architecture director at Royal Mail, said one crucial component for his firm's cyber approach is an internal AI governance forum. "We don't stop people using AI, but where we're building AI into applications, we're making sure it's got some level of governance around it," he said. "Understanding where our data is and what data is going into those AI solutions is the key to success, as is what we're then asking those solutions to do." While not wanting to underestimate the potential power of emerging technology, Hardy told ZDNET that it's crucial professionals view AI as a tool rather than an end in itself.
Exploiting AI effectively and securely is about managing data and deciphering potential use cases. "There are going to be instances where people use AI and get it wrong," he said. "Success is about changing the mentality to one that suggests, 'This is an aid, not the answer.'" John-David Lovelock, chief forecaster and distinguished VP analyst at Gartner, said digital leaders and business professionals must consider cyber threats as they invest in AI through 2026. Lovelock told ZDNET that one key issue is that organizations can't yet benefit from access to measurable, definable, and certifiable AI safety, meaning end-user security requirements are unlikely to be met by many of their providers. "We're not at the point with AI that we can say, 'Does it have a seatbelt? Will it survive a crash at 25 miles an hour?'" he said. Lovelock likened the current state of AI safety to the rise of jaywalking in the 1920s, when the nascent auto industry lobbied government agencies to pass new laws. "We changed the responsibility from someone who was expressing their right of way and was a victim of the accident to somebody who ought to have known better and actually caused the accident," he said. "AI jaywalking is the attempt to do the same thing -- it's an attempt to ensure that the jay is responsible for anything that goes right or wrong with their use of AI." In short, current vendor agreements will likely make end users responsible for AI safety, not the technology provider, and professionals must be aware of the position. "Acceptance of this situation is crucial," he said. "We've seen this trend with other technologies. It's not new, in a sense, but it is a reality with AI, so at least be aware." Jeff Love, CTO at the Professional Rodeo Cowboys Association (PRCA), recently explained to ZDNET how his organization, which has close to 100 years of history, used AI to overcome its intractable legacy IT challenge. When gen AI models failed to penetrate older code, Love turned to Zencoder, an agentic platform that analyzes business logic and translates it into plain-English explanations. After embracing emerging technology, Love told ZDNET that his team can now use AI as part of its processes to snuff out potential security issues -- and he encouraged other professionals to look for similar opportunities. "When we have issues come up, or even as we're putting out new code, we can say, 'You know what? Check this for security issues. Check this for bad logic,'" he said. "The AI is better at doing that work than a human is because it considers the complete overview. We're just so honed into specific areas we can't see the big picture all the time." Love said AI can also help his team to consider issues they might otherwise have neglected. "It's always checking to see if there are security risks. And there are times that I've put out some code, and it says, 'You know what, this could be a little bit better,'" he said. "In today's world, you've got to be concerned about the security risks."
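The "check this for security issues" pass Love describes can be approximated with any LLM API. Below is a minimal sketch using Anthropic's Python SDK; the model id and prompt are illustrative assumptions, and PRCA's actual pipeline runs on Zencoder rather than a script like this.

# Hedged sketch: ask an LLM to review the latest commit's diff for security
# issues, in the spirit of Love's "check this for bad logic" workflow.
# Assumes the anthropic package is installed and ANTHROPIC_API_KEY is set;
# the model id below is a placeholder, not PRCA's configuration.
import subprocess
import anthropic

def review_latest_diff() -> str:
    diff = subprocess.run(
        ["git", "diff", "HEAD~1"], capture_output=True, text=True, check=True,
    ).stdout
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model id
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": "Check this diff for security issues and bad logic. "
                       "Flag anything risky, with file and line context:\n\n" + diff,
        }],
    )
    return message.content[0].text

if __name__ == "__main__":
    print(review_latest_diff())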
[2]
Open-Source CyberStrikeAI Deployed in AI-Driven FortiGate Attacks Across 55 Countries
The threat actor behind the recently disclosed artificial intelligence (AI)-assisted campaign targeting Fortinet FortiGate appliances leveraged an open-source, AI-native security testing platform called CyberStrikeAI to execute the attacks. The new findings come from Team Cymru, which detected its use following an analysis of the IP address ("212.11.64[.]250") that was used by the suspected Russian-speaking threat actor to conduct automated mass scanning for vulnerable appliances. CyberStrikeAI is an "open-source artificial intelligence (AI) offensive security tool (OST) developed by a China-based developer who we assess has some ties to the Chinese government," security researcher Will Thomas (aka @BushidoToken) said. Details of the AI-powered activity came to light last month when Amazon Threat Intelligence said it detected the unknown attacker systematically targeting FortiGate devices using generative AI services like Anthropic Claude and DeepSeek, compromising over 600 appliances in 55 countries. According to the description in its GitHub repository, CyberStrikeAI is built in Go and integrates more than 100 security tools to enable vulnerability discovery, attack-chain analysis, knowledge retrieval, and result visualization. It's maintained by a Chinese developer who goes by the online alias Ed1s0nZ. Team Cymru said it observed 21 unique IP addresses running CyberStrikeAI between January 20 and February 26, 2026, with servers primarily hosted in China, Singapore, and Hong Kong. Additional servers related to the tool have been detected in the U.S., Japan, and Switzerland. The Ed1s0nZ account, besides hosting CyberStrikeAI, has published several other tools that demonstrate an interest in exploiting and jailbreaking AI models. "Further, Ed1s0nZ's GitHub activities indicate they interact with organisations that support potentially Chinese government state-sponsored cyber operations," Thomas said. "This includes Chinese private sector firms that have known ties to the Chinese Ministry of State Security (MSS)." One such company the developer has interacted with is Knownsec 404, a Chinese security vendor that suffered a major leak of more than 12,000 internal documents late last year, exposing the firm's employee data, government clientele, hacking tools, large volumes of stolen data such as South Korean call logs and information related to Taiwan's critical infrastructure organizations, and the inner workings of ongoing cyber operations targeting other countries. "Ostensibly, KnownSec appeared to be just another security company, but this is only a half truth," DomainTools noted in an analysis published this January, describing it as a "state-aligned cyber contractor" capable of supporting Chinese national security, intelligence, and military objectives. "In reality, [...] it has a shadow organization that works for the PLA, MSS, and the organs of the Chinese security state. This leak exposes a company that operates far beyond the role of a typical cybersecurity vendor. Tools like ZoomEye and the Critical Infrastructure Target Library give China a global reconnaissance system that catalogs millions of foreign IPs, domains, and organizations mapped by sector, geography, and strategic value."
Ed1s0nZ has also been observed making active modifications to a README.md file located in an eponymous repository, removing references to having received a Level 2 Contribution Award from the China National Vulnerability Database of Information Security (CNNVD). The developer has also claimed that "everything shared here is purely for research and learning." According to research published by Bitsight last month, China maintains two different vulnerability databases: CNNVD and the Chinese National Vulnerability Database (CNVD). While CNNVD is overseen by the Ministry of State Security, CNVD is controlled by CNCERT. Previous findings from Recorded Future have revealed that CNNVD takes longer to publish vulnerabilities with higher CVSS scores than vulnerabilities with lower ones. "The developer's recent attempt to scrub references to the CNNVD from their GitHub profile points to an active effort to obscure these state ties, likely to protect the tool's operational viability as its popularity grows," Thomas said. "The adoption of CyberStrikeAI is poised to accelerate, representing a concerning evolution in the proliferation of AI-augmented offensive security tools."
[3]
CyberStrikeAI tool adopted by hackers for AI-powered attacks
Researchers warn that a newly identified open-source AI security testing platform called CyberStrikeAI was used by the same threat actor behind a recent campaign that breached hundreds of Fortinet FortiGate firewalls. Last month, BleepingComputer reported on an AI-assisted hacking operation that compromised more than 500 FortiGate devices in five weeks. The threat actor behind this campaign used multiple servers, including a web server at 212.11.64[.]250. In a new report, Team Cymru senior threat intel advisor Will Thomas (aka BushidoToken) says that the same IP address was observed running the relatively new CyberStrikeAI platform. Analyzing NetFlow data, Team Cymru identified a "CyberStrikeAI" service banner running on port 8080 on 212.11.64[.]250 and saw network communications between that IP and Fortinet FortiGate devices the threat actor targeted. The FortiGate campaign infrastructure was last seen running CyberStrikeAI on January 30, 2026. CyberStrikeAI's GitHub repository describes the project as an "AI-native security testing platform built in Go" that integrates over 100 security tools, an intelligent orchestration engine, predefined security roles, and a skills system. "Through native MCP protocol and AI agents, it enables end-to-end automation from conversational commands to vulnerability discovery, attack-chain analysis, knowledge retrieval, and result visualization -- delivering an auditable, traceable, and collaborative testing environment for security teams," reads the project description. The tool includes an AI decision engine compatible with models such as GPT, Claude, and DeepSeek, a password-protected web UI with audit logging and SQLite persistence, and a dashboard for vulnerability management, task orchestration, and attack-chain visualization. Its tooling allows it to conduct a full attack chain, including network scanning (nmap, masscan), web and application testing (sqlmap, nikto, gobuster), exploitation frameworks (metasploit, pwntools), password cracking tools (hashcat, john), and post-exploitation frameworks (mimikatz, bloodhound, impacket). By combining these tools with AI agents and an orchestrator, CyberStrikeAI enables operators, even low-skilled ones, to automate attacks against targets. Team Cymru warns that AI-native orchestration engines like this could accelerate automated targeting of exposed edge devices, including firewalls and VPN appliances. The researchers say they observed 21 unique IP addresses running CyberStrikeAI between January 20 and February 26, 2026, with servers primarily hosted in China, Singapore, and Hong Kong. Additional infrastructure was spotted in the United States, Japan, and Europe. "As adversaries increasingly embrace AI-native orchestration engines, we expect to see a rise in automated, AI-driven targeting of vulnerable edge devices, similar to the observed reconnaissance and targeting of Fortinet FortiGate appliances," explains Thomas. "In the near future, defenders must be prepared for an environment where tools like CyberStrikeAI, alongside the developer's other AI-assisted privilege escalation projects like PrivHunterAI and InfiltrateX, significantly lower the barrier to entry for complex network exploitation." The researchers also examined the profile of the CyberStrikeAI developer, who goes by the alias "Ed1s0nZ."
Based on public repositories linked to the account, the developer has worked on additional AI-assisted security tools, including PrivHunterAI, which uses AI models to detect privilege escalation vulnerabilities, and InfiltrateX, a privilege escalation scanning tool. According to Team Cymru, the developer's GitHub activity shows interactions with organizations previously linked to Chinese government-affiliated cyber operations. In December 2025, the developer shared CyberStrikeAI with Knownsec 404's "Starlink Project." Knownsec is a Chinese cybersecurity firm with alleged links to the Chinese government. On January 5, 2026, the developer mentioned receiving a "CNNVD 2024 Vulnerability Reward Program - Level 2 Contribution Award" on their GitHub profile. The China National Vulnerability Database (CNNVD) is believed to be operated by China's intelligence community, which allegedly uses it to identify vulnerabilities for its operations. Team Cymru says the reference to CNNVD was later removed from the developer's profile. However, as the developer appears to be Chinese, it would not be unusual for them to interact with well-known cybersecurity organizations in their country. These new AI-powered cybersecurity tools continue to demonstrate how commercial AI services are increasingly used by threat actors to automate their attacks while, at the same time, lowering the barrier to entry. Last month, Google also reported that threat actors are abusing Gemini AI across all stages of cyberattacks, boosting the capabilities of attackers at every skill level.
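Team Cymru's pivot from one suspect IP to 21 CyberStrikeAI servers began with a service banner on port 8080. A minimal sketch of that kind of banner check follows; the exact banner string and where it appears in the response are assumptions for illustration, and probes should only ever target hosts you are authorized to test.

# Hedged sketch: check whether a host presents a CyberStrikeAI-style service
# banner on port 8080. The "CyberStrikeAI" marker and its location (headers
# vs. body) are assumptions; Team Cymru worked from NetFlow and banner data
# at scale rather than one-off probes like this.
import socket

def has_cyberstrike_banner(host: str, port: int = 8080, timeout: float = 5.0) -> bool:
    request = (
        f"GET / HTTP/1.1\r\nHost: {host}\r\n"
        "User-Agent: banner-survey\r\nConnection: close\r\n\r\n"
    ).encode()
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall(request)
            response = b""
            while len(response) < 65536:  # cap how much we read
                chunk = sock.recv(4096)
                if not chunk:
                    break
                response += chunk
    except OSError:
        return False
    return b"CyberStrikeAI" in response  # assumed banner marker

if __name__ == "__main__":
    print(has_cyberstrike_banner("192.0.2.10"))  # RFC 5737 documentation address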
[4]
Why enterprise AI agents could become the ultimate insider threat
Agent sprawl could mirror the VM explosion era. Excessive agent agency increases breach blast radius. Treat AI agents like employees with credentials. Ever since October, I've been happily vibe-coding a series of apps using Claude Code. Every so often, I would give it an instruction, and it would go off and do my bidding. It was a comfortable collaboration. I could see everything the AI was doing, and I could produce new code at a pace far faster than ever before. But then Anthropic updated its language model. The key feature was Claude's ability to launch subordinate agents that could simultaneously work on different parts of the problem and communicate with each other. In theory, this was a big technical advance. In theory. My entire experience changed. Suddenly, Claude was kicking off four, five, six, seven, even eight agents at once. I had no visibility into what they were all doing. I didn't even have a way to stop them if one or more ran amok. And run amok they sure did. One got stuck trying to access a file for which it didn't have root privileges. Another went in and attempted to refactor an entire app (which I did not request). That agent failed partway through the process, leaving inconsistent naming conventions and conflicting object declarations throughout the code. Efficiently and cheerfully, it fully destroyed my app. Fortunately, I had source control check-ins and backups, so I was able to recover. I also instituted a protocol forbidding Claude from launching parallel, simultaneous agents. The potential for damage was just too great. So that was me. I'm a lone developer working on fairly low-priority apps as a side project. And still, rogue agents launched by the AI nuked my project. Now, scale that up to enterprise size. Instead of seven or eight rogue agents ruining the source code for some side project, those agents are running loose through your entire IT system, many with the credentials and access to spend money, hack databases, modify files, and initiate and respond to communications on your company's behalf. Let's go down a laundry list of examples of where AI has gone wrong in companies and agencies. As far back as 2022, an AI chatbot promised an Air Canada customer a discount that wasn't really available. The customer sued, and won. The company contended that the AI was at fault, but the court determined that the AI was representing the company. In 2025, an AI hiring bot exposed personal information from millions of people who applied for McDonald's jobs. Apparently, the AI company running the bot used the password 123456. Last year, security researchers showed that a prompt-injection attack (where a malicious prompt is fed to an AI) exposed Salesforce's CRM platform to the potential of data theft. Fortunately, this hack was never carried out (or at least nobody has reported it), and instead the researchers used news of it as a way to promote their company's skills. Also, last year, a vulnerability was discovered in the ServiceNow AI Platform that could allow an unauthenticated user to impersonate another user and perform any operations the impersonated user could. According to the researcher who discovered the vulnerability, "the attacker can remotely drive privileged agentic workflows as any user." Another vulnerability was found in Amazon Q's VS Code extension. Amazon Q is Amazon's generative AI assistant, sold as a SaaS resource as part of the company's extensive AWS offerings.
Last year, a GitHub token error enabled a threat actor to commit and push malicious code directly to the extension's open source repository, which would then be downloaded to any Q user's development environment. The only thing that prevented this from being a total disaster was a syntax error that kept the hack from running properly. OpenAI was excited about using its Codex AI to write its Codex code-writing tool. But in late 2025, researchers discovered a vulnerability in OpenAI's Codex CLI coding agent that could allow attackers to execute malicious commands on a developer's machine. By embedding harmful instructions in project configuration files within shared repositories, an attacker could trigger the tool to run those commands locally when a developer uses it. That local compromise could expose credentials, alter source code, or enable unauthorized changes to downstream systems. The result would be turning an AI coding assistant into a potential entry point for broader enterprise intrusion. Perhaps the best example of where rogue AI agents will go in the near future is from an unsourced hearsay example cited by cybersecurity company Stellar Cyber. They describe a "real-world example" from just this year. Documented as part of their list of top agentic AI security threats, "A manufacturing company's procurement agent was manipulated over three weeks through seemingly helpful clarifications about purchase authorization limits. By the time the attack was complete, the agent believed it could approve any purchase under $500,000 without human review. The attacker then placed $5 million in false purchase orders across 10 separate transactions." One of my more recent jobs was to scare the pants off generals and admirals about cybersecurity. These were people who commanded brigades of tanks and fleets of warships. I had to explain to them how a simple thumb drive with a virus could cause more harm than an APFSDS (Armor-Piercing Fin-Stabilized Discarding Sabot) round shot from an M256 120mm smoothbore cannon on an M1A2 Abrams tank or a TLAM-E Block IV tactical Tomahawk missile containing the Unitary High-Explosive (WDU-36/B) 1,000-pound warhead fired from an Arleigh Burke-class destroyer. I found that nothing drove home the need for cybersecurity more than some well-chosen statistics. As we enter the AI era of cybersecurity, I'll share some statistics with you. I managed to destroy the sleep of an entire generation of military leaders. Let's see if you sleep any better after this. We'll kick it off with 82 to 1. CyberArk is a division of Palo Alto Networks. In its recently released 2025 Identity Security Landscape survey of security professionals, it discovered that machine identities outnumber human identities by 82 to 1. This is basically a measure of how many users have logins, whether those users are people or software. The term "machine identity" can encompass everything from basic scripts to AI agents. But the fact is that, in enterprises, there is a whole lot of software running around with unfettered access to the crown jewels. Here's another fun stat, and this time I'll quote directly from the study: "Organizations now report that 72% of employees regularly use AI tools on the job -- yet 68% of respondents still lack identity security controls for these technologies." Gartner says that less than 5% of enterprise apps used task-specific AI agents in 2025. In 2026, that number will increase 800%.
The analyst company estimates that more than 40% of enterprise apps will use AI agents in 2026. According to data security firm BigID, only 6% of organizations have an advanced AI security strategy. In a LinkedIn post, IDC researcher Bjoern Stengel says that only 22% of organizations are governing AI use through a central governance or ethics board. He says that 43% manage AI "only through disconnected efforts or do not have an established responsible AI governance process in place." In a late 2025 survey of C-suite leaders, EY (Ernst & Young) reported that 99% of companies experienced financial losses from AI-related risks, with 64% exceeding losses of $1 million. On average, the companies experienced losses of $4.4 million, and across their entire 975-company survey space, AI-related losses added up to $4.3 billion. Bottom line: We are not prepared. OWASP stands for the Open Worldwide Application Security Project. It's a nonprofit that focuses on improving software security. In late 2025, it published a study documenting "the most critical security risks facing autonomous and agentic AI systems." The study makes clear that there are many entry points for malicious actors to gain a hold on supposedly secure internal AI agents. Back when I spent most of my time giving cybersecurity lectures, insider threats accounted for a measurable portion of enterprise cybersecurity risk. Before the pandemic, Ponemon's 2018 Cost of Insider Threats report found that 64% of insider incidents were caused by employee or contractor negligence, with criminal or malicious insiders accounting for 23% and credential theft for 13%. Verizon's 2019 Data Breach Investigations Report (DBIR) reported that 34% of breaches involved internal actors, demonstrating that insider involvement was a persistent component of breach activity. During the 2020-2022 pandemic years, remote and hybrid work expanded the exposure surface for insider risk. The 2022 Ponemon report categorized incidents as 56% negligence, 26% criminal insiders, and 18% credential theft, showing that negligence remained the dominant category while credential-based compromise increased in share compared to 2018. In 2025, Verizon's DBIR began exploring the use of generative AI within enterprises. The study found that 15% of employees routinely accessed generative AI systems on corporate devices. Of those accounts, 72% used non-corporate email identifiers and 17% used corporate email addresses without integrated authentication. Essentially, employees were dumping internal company confidential data into cloud-based public AI systems like ChatGPT. All that brings us to 2026. Now, insider threats are moving from mostly human-motivated to the possibility that agents themselves could become malicious insider actors. In an article published in The Register, Palo Alto Networks chief security intel officer Wendi Whitmore is quoted warning about "the AI agent itself becoming the new insider threat." This makes sense because AI agents are being given greater and greater access inside corporate networks as a side effect of enabling them to do the jobs we're delegating to them. The problem is not only that many of these agents will need to have expanded privileges within the network, it's that they also become "a very attractive target to attack." These agents, running 24/7 inside your network, with expanded privileges and capabilities, are subject to all of the risks and threats I discussed in the previous section.
Now, let's take this to its logical extreme. Insider threats from humans have mostly been associated with negligence. But there are only so many humans in the company. Now consider those same humans fielding agents, with 82 machine identities to every human one, and you can see how negligence can be multiplied in the extreme. Add to that malicious threats that can now be targeted beyond humans to agents with potentially limited protection capabilities, and we are, in a word, screwed. The OWASP study does provide some insight into how we might protect our networks. It lists 10 mitigation strategies that, when used together, can harden agent operations inside the corporate network. All of those tactics make sense and should be integrated into your internal AI strategy. But I'll tell you one tactic that OWASP doesn't specifically recommend: limit your agent exposure. Just don't create as many agents as you might want to. Remember the rise in virtual machines back in the day? All of a sudden, we had virtual machines everywhere because every application, project, and challenge was addressed by spinning up a new VM. Eventually, we had so many virtual machines that it was impossible to find them all. Many of them were running with outdated software. It was a mess. Agents promise to be just as chaotic. Think twice before you create a new agent. Perhaps require human approvals before launching one. If it takes multiple rounds of interviews before you hire an employee, it should take the same or even a greater level of care before you "hire" a new agent. This could be difficult. As I showed at the beginning of this article, agents like to create new agents. But this is the crux of the battle we face over the next few years. It's not just malicious actors. It's all the unintentional and even well-meaning messes we'll create simply by trying to make our jobs easier and offloading some work to the machines.
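As a concrete version of the "require human approvals before launching one" advice above, here is a minimal sketch of an approval gate in front of agent creation; the registry format and the spawn_agent() stub are illustrative assumptions, not any vendor's API.

# Hedged sketch: force a human decision, and an audit trail, before any new
# agent is launched -- the "hire an agent like you hire an employee" idea.
import json
import time

REGISTRY = "agent_registry.jsonl"  # append-only inventory of approved agents

def spawn_agent(task: str, privileges: list[str]) -> None:
    # Stub: hand off to your actual agent runtime here, with least privilege.
    print(f"Launching agent for: {task}")

def request_agent(task: str, requested_by: str, privileges: list[str]) -> bool:
    print(f"Agent request from {requested_by}: {task}")
    print(f"Requested privileges: {', '.join(privileges) or 'none'}")
    if input("Approve launch? [y/N] ").strip().lower() != "y":
        print("Denied: no agent launched.")
        return False
    with open(REGISTRY, "a") as f:  # record who asked for what, and when
        f.write(json.dumps({
            "task": task,
            "requested_by": requested_by,
            "privileges": privileges,
            "approved_at": time.time(),
        }) + "\n")
    spawn_agent(task, privileges)
    return True

request_agent("summarize weekly sales data", "jsmith", ["read:sales_db"])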
[5]
AI threats will get worse: 6 ways to match the tenacity of your digital adversaries
Experts recommend best practices for avoiding catastrophe. When was the last time you wondered if that mysterious phone caller who hung up after you answered "hello" made a recording of your voice in a way that could be used against you? The FCC warned us about such scams nearly a decade ago, before artificial intelligence was even on the scene. Now -- with AI cloning your voice and conversational tone from as little as three seconds of audio -- the stakes are much higher. Whether used for legitimate or nefarious purposes, AI's chief selling proposition has been its knack for speed and scale. In the hands of a threat actor, a lot of damage can be done in the blink of an eye. And it's getting worse. Your only meaningful response is to match your adversaries' tenacity. In this article, we'll review the growing threats and best practices you can use to protect yourself and your organization. In its January 2025 report on the Adversarial Misuse of Generative AI, Google's Threat Intelligence Group (GTIG) reported that threat actor reliance on Google Gemini was mostly confined to run-of-the-mill productivity use cases. "Rather than engineering tailored prompts, threat actors used more basic measures or publicly available jailbreak prompts in unsuccessful attempts to bypass Gemini's safety controls," said the post's authors. "Threat actors are experimenting with Gemini to enable their operations, finding productivity gains but not yet developing novel capabilities. At present, they primarily use AI for research, troubleshooting code, and creating and localizing content." In a November 2025 post, GTIG noted significant advancements in the AI-related tactics, techniques, and procedures (TTPs) used by threat actors: "Adversaries are no longer leveraging artificial intelligence just for productivity gains; they are deploying novel AI-enabled malware in active operations. This marks a new operational phase of AI abuse, involving tools that dynamically alter behavior mid-execution." Last year, Anthropic published a similar post on detecting and preventing malicious use of its Claude LLM. "The most novel case of misuse detected was a professional 'influence-as-a-service' operation showcasing a distinct evolution in how certain actors are leveraging LLMs for influence operation campaigns," wrote the report's authors. "What is especially novel is that this operation used Claude not just for content generation, but also to decide when social media bot accounts would comment, like, or re-share posts from authentic social media users." Echoing Google's concerns about AI-assisted malware development, Anthropic's report added, "We have also observed cases of credential stuffing operations, recruitment fraud campaigns, and a novice actor using AI to enhance their technical capabilities for malware generation beyond their skill level." Perhaps the most concerning of all the evolving threats at this very moment is the increasingly convincing nature of deepfake videos, images, and audio, and the opportunities they create for mistaken identities. As evidenced by ByteDance's February 2025 Seedance 2.0 launch and an incredibly convincing Seedance-made scene of Tom Cruise fighting Brad Pitt (which drew swift backlash from the entertainment industry), the very latest video generation models are making it increasingly difficult, if not impossible, to spot deepfakes.
According to LastPass' director of AI innovation Alex Cox, Seedance represents a concerning inflection point in the overall evolution of deepfake video production tools and their potential for wrongdoing. "AI can produce content that is almost indistinguishable, if not completely indistinguishable, from real human activity," he told ZDNET. "We've gotten to the point of multimodal AI capabilities that most forms of online human interaction can be believably faked by AI. Written interaction is still the absolute strong point. But video and audio are rapidly approaching similar levels." Cox predicts that AI-powered video and audio tools will evolve to the point where we can be pretty easily tricked into believing we're dealing with an authentic person -- even in a video meeting -- when, in reality, it's a deepfake. "[Then], add the concept of virtual avatars and real-world translation. Imagine an attacker researches common public figures in your organization and creates a virtual avatar that not only looks and sounds like the public figure, but presents from places that they commonly present from in public footage," said Cox. "The language and behavioral indicators (tics, sayings, etc.) they use could also be modeled, so the attacker could essentially 'become them' in a meeting." While the tools might not currently be equipped to handle the real-time nature of meetings, Cox thinks it won't be long before they are. "Right now, AI tech can't do this," he said. "There is still latency and artifacts involved that give people that 'uncanny valley' feeling. But we are rapidly approaching parity in this area." Meanwhile, when it comes to static media like text and still images, and the opportunity to mislead users and other audiences, we've already been there. Two years ago, Futurism.com reported how Sports Illustrated and TheStreet were publishing articles by fake AI-generated authors. The incident caused reputational damage to the two media brands and ultimately resulted in the dismissal of two top executives at the brands' parent company. While these weren't AI-assisted crimes committed by cybercriminals, they speak to the increasing frequency of purposeful deceit and the greater likelihood that many of the online identities we encounter could be inauthentic, supporting a range of highly questionable motives. Once we've been successfully baited by deepfake imagery or identities, the question is: What's the damage? Will a voice clone trick you into sending money to a scammer? Will you get phished for the credentials to your most sensitive accounts? Will one of your devices end up with malware that eventually finds its way into your employer's systems? Will misconfigured defensive mechanisms enable malicious AI agents to roam freely behind your organization's firewall? Don't wait for an AI-enabled attack before taking action. Experts warn that it's time for individuals, IT professionals, and organizations alike to evolve their best practices and vigilance accordingly in order to reduce the likelihood of a catastrophic event at the hands of AI-equipped cybercriminals or ill-intentioned nation-states. Here, in no particular order, are six ways to improve your odds. Stay fanatically educated on AI safety and security, and aware of the evolving threats. Get to know the risks and evolving threat landscape. Pay close attention to the most important sources of information, such as the threat intelligence and AI safety groups at frontier AI developers Anthropic, Google DeepMind, and OpenAI.
Set up your feeds to become aware of new information as it is made available by various reputable cybersecurity and threat intelligence sources, including GTIG, AppOmni, the US Cybersecurity and Infrastructure Security Agency (CISA), the OWASP LLM Top Ten, and the AI Incident Database, and track emerging AI-related techniques, such as adversary acquisition of AI capabilities, as they appear in the MITRE ATT&CK Matrix (along with the non-AI-related techniques). Be as aggressive as possible about moving to non-phishable passwordless credentials, including passkeys and number-matching-based multifactor authentication. The majority of successful attacks start with some form of phishing or vishing (the voice-based equivalent of phishing). With the help of AI and voice cloning, phishing and vishing attacks will become more convincing. The sooner you and your company make the move to non-phishable credentials, the better. For example, don't wait to opt in to non-phishable credentials for those online services that support them. When it comes to your organization's identity and credential management, insist that it eliminate phishable credentials sooner rather than later. Passwords are easily phished (or vished). One-time passwords (OTPs) of the sort emailed, sent via SMS, or generated by your authentication app are problematic as well. Before making the move to agentic AI, ensure you have a way to identify every legitimate agent within your control or your organization's infrastructure. Vendors like Microsoft, Okta, and Ping Identity offer identity and access management (IAM) solutions that manage the identities of agents on your network, much as you manage human identities. Although agentic AI is likely to yield enormous productivity gains, legitimate agents are a target-rich environment for malicious agents (and there's no question that threat actors will rely on such "shadow agents" to do their bidding). Should one of your legitimate agents become compromised, time will be of the essence for tracking it down and deprovisioning it. But if those agents are roaming free without a shred of management, good luck containing the damage from an agentic attack. Employ a zero-trust strategy wherever possible. Yes, certain people, organizations (e.g., partners), processes, and even AI agents will need access to various resources and systems of record in order to execute their responsibilities. But always start them out with few or even no privileges to see what breaks. It's a jungle out there. Danger lurks under every rock and behind every tree. A little bit of friction can help. Minimally escalate privileges where that friction presents serious obstacles to the business. Trust should be earned. Not the default. Get smarter about your OAuth token exposure: You may not know it, but you've likely issued one or more OAuth tokens that allow one service to access another on your behalf. For example, if your Spotify account is set up to automatically post to Instagram about the songs you're listening to, you've essentially instructed Instagram to grant Spotify an OAuth token. Such delegations of authority are expected to multiply by several orders of magnitude once agentic AI takes off, and all those agents will need access to a wide range of services. But here's the question: Do you know all the OAuth tokens you've issued? And for those that you do, do you know how to revoke them? For a long time, we had the luxury of not caring too much about this potential exposure.
But those days are now officially over as OAuth tokens are some of the most coveted credentials that a threat actor can acquire (see the revocation sketch at the end of this article). Become a skeptic if you aren't one already. As it becomes increasingly difficult to tell the difference between legitimate content and deepfakes, now is the time to become less trusting of everything you see or hear online. Just last week, within hours of former US Secretary of State Hillary Clinton testifying that she never met Jeffrey Epstein, deepfake photos portraying her in his presence went viral across social media in an attempt to discredit her testimony. As tools for producing deepfake imagery and audio continue to evolve, threat actors will begin to rely on them in unanticipated ways. For example, in a highly targeted attack, you might receive a video or voice message from your boss or CEO urging you to take an action you otherwise wouldn't. Err on the side of caution and always double-check the authenticity of such messages. When considering these and other ways to uplevel your so-called "op-sec" -- your operational security -- put yourself in the shoes of your adversaries, given what AI can do now and in the not-too-distant future. If you were your adversary, you would likely exhaust every AI option that exists to achieve your objective. And the better AI gets at helping you to achieve that malicious objective, the more defenseless your victims will seem. It's like bringing a gun to a knife fight. If you know that in advance -- which you now do -- would you leave the outcome to chance or would you rise to the challenge? Do you even have a choice? Oh, and about those mysterious callers who wait for you to say "yes" or "hello" and then hang up, perhaps you should consider not answering calls from unknown numbers (or, at a minimum, waiting for the caller to speak first). It's a bitter pill to swallow (and admittedly, impractical in certain scenarios). But then again, keep the idea of zero trust in mind. If the call is that important (and legitimate), they'll figure out how to get in touch.
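On the OAuth exposure point raised above: revoking a token is a standardized operation (RFC 7009), so "knowing how to revoke" can be as simple as a POST to the provider's revocation endpoint. A minimal sketch follows, with the endpoint URL, client credentials, and token all placeholders; real providers document their own endpoints, often discoverable via OAuth server metadata.

# Hedged sketch: revoke an OAuth 2.0 token per RFC 7009. All values below are
# placeholders; check your provider's documentation for its actual revocation
# endpoint and required client authentication.
import urllib.error
import urllib.parse
import urllib.request

def revoke_token(revocation_url: str, token: str,
                 client_id: str, client_secret: str) -> bool:
    data = urllib.parse.urlencode({
        "token": token,
        "token_type_hint": "access_token",  # or "refresh_token"
        "client_id": client_id,
        "client_secret": client_secret,
    }).encode()
    req = urllib.request.Request(revocation_url, data=data, method="POST")
    req.add_header("Content-Type", "application/x-www-form-urlencoded")
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status == 200  # RFC 7009: 200 means revoked or already invalid
    except urllib.error.HTTPError:
        return False

# revoke_token("https://auth.example.com/oauth/revoke", "<token>", "my-client", "<secret>")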
[6]
AI Tools Are Supercharging Hackers
It's no secret that AI models have come a long way, from tools that can complete high school students' homework to "vibe coding" assistants that can build entire apps in a fraction of the time it would take for human developers. But besides cheating at school and terrifying employees who fear they're about to be laid off, AI can also be used for evil. "Vibe hacking," the evil twin of "vibe coding," has quickly turned into a cybersecurity nightmare, with AI systems topping several hacking-related bug bounty leaderboards. Case in point, just last week, a hacker used a jailbroken version of Anthropic's Claude chatbot to find vulnerabilities in Mexican government networks and successfully automate the theft of highly sensitive taxpayer and voter records, as Bloomberg reports. With the help of AI, the hacker stole 150 gigabytes of government data related to 195 million taxpayers. In a report about the latest hack in Mexico, cybersecurity startup Gambit Security said the perpetrator likely wasn't associated with any specific group or foreign adversary government. Researchers also told Bloomberg that they found at least 20 specific vulnerabilities being exploited. In other words, AI means the barrier to entry for real-deal hacking has never been lower. Last month, Amazon's security research team revealed that hackers -- or perhaps just one -- had broken into more than 600 firewall systems across dozens of countries while armed with commercially available AI tools, overpowering weak security measures, extracting credential databases, and possibly setting the stage for future ransomware deployment. "It's like an AI-powered assembly line for cybercrime, helping less skilled workers produce at scale," said Amazon security engineering and operations lead CJ Moses in a statement. The exploits are part of a much broader trend, as AI supercharges cybersecurity attacks, from deepfake footage luring victims into phishing traps to AI-enabled password cracking. A new report by IBM found that there was a 44 percent year-over-year increase in the "exploitation of public-facing software or system applications" and a nearly 50 percent uptick in "active ransomware groups." "Attackers aren't reinventing playbooks, they're speeding them up with AI," said IBM global managing partner for cybersecurity services Mark Hughes in a statement. "The core issue is the same: businesses are overwhelmed by software vulnerabilities. The difference now is speed." Google security researchers also noted in a report earlier this year that a "pitched battle" between threat actors accessing the "same classes of powerful AI models and automated processes as their targets" is about to "change in significant and unpredictable ways." "If [AI is] weaponized in a ransomware toolkit and sold on the underground, the rates of incidents may increase," said Google vice president of security engineering Heather Adkins in a statement. "But if it's closely held by a threat actor with really specific targeting, we may not even be able to tell that there's a fully automated platform on the other end. We may only know when it's physically in someone's hand."
[7]
Will AI make cybersecurity obsolete or is Silicon Valley confabulating again?
Can you trust the companies that are building AI to make the technology safe for the world to use? That is one of the most pressing questions you face this year as a user of AI, and it is not an academic question. As real-world deployments of the technology proliferate, novel kinds of risks are emerging with potentially catastrophic impact, demanding fresh solutions. To the rescue come the major creators of AI models, OpenAI, Anthropic, and Google. All three offer tools that could mitigate failures and security breaches in LLMs and the agentic programs built on top of them. (Disclosure: Ziff Davis, ZDNET's parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.) Wall Street observers think there is a real possibility that AI firms' tools will displace the traditional cybersecurity offerings from companies such as Palo Alto Networks, Zscaler, and Check Point Software. A related field, called observability, is also threatened, including firms such as Dynatrace that sell tools to detect system failures. The notion that most or all of the world's software problems will be solved by software creators at the source, before programs enter the wild, is indeed tantalizing. No more denials of service, no more ransomware, no more supply chain attacks if you get it right from the start. Only, it's not that simple. The challenge is greater than the potential achievements of any tool or approach. The risks of software, including AI models and agents, are too broad in scope for those companies to resolve on their own. It will take all of the traditional security and observability tools to fix what ails AI. It will also take novel forms of data engineering. In fact, the solution may even require the fundamental redesign of AI programs themselves to address the root causes of risk. The stocks of cybersecurity firms were shaken recently when Anthropic unveiled Claude Code Security, an extension of its popular Claude Code tool that can automate some code writing. Anthropic said Claude Code Security will allow "teams to find and fix security issues that traditional methods often miss," with a dashboard that displays potential issues and proposes patches to address the issues. The intent is that a human analyst reviews the findings and proposals to make the final decision. Claude Code Security is "available in a limited research preview." The result of over a year of cybersecurity research, Claude Code Security does not merely police code made with Claude Code. Anthropic has used the tool to find hundreds of vulnerabilities "that had gone undetected for decades, despite years of expert review." Likewise, OpenAI in October unveiled Aardvark, what the firm calls an "agentic security researcher powered by GPT-5." In private beta at the moment, Aardvark undertakes the same kind of automatic code scanning as that promised by Anthropic. "Aardvark works by monitoring commits and changes to codebases, identifying vulnerabilities, how they might be exploited, and proposing fixes," said OpenAI. Three weeks before Aardvark's launch, Google's DeepMind research unit unveiled CodeMender, which the firm called "a new AI-powered agent that improves code security automatically." Like Anthropic's tool, CodeMender is meant not simply to secure Google creations but to be a broad security tool.
In six months of development, DeepMind noted, CodeMender had "already upstreamed 72 security fixes to open-source projects, including some as large as 4.5 million lines of code." Unlike Anthropic and OpenAI, DeepMind emphasizes not only proposing fixes but also automatically applying fixes to code. So far, the program is only being used by DeepMind researchers. DeepMind emphasized that "Currently, all patches generated by CodeMender are reviewed by human researchers before they're submitted upstream." All three offerings, most observers agree, immediately threaten the role of tools in categories such as 'AppSec,' 'Software Composition Analysis,' and 'Static Application Security Testing.' That capability covers companies and tools such as Snyk, JFrog, Mend, GitHub Dependabot, Semgrep, Sonatype, Checkmarx, and Veracode. Claude Code Security's introduction "drove renewed weakness across high-growth software names, particularly in observability and cloud security," wrote William Power, a software analyst with investment firm R.W. Baird & Co. It's reasonable to assume that, as Anthropic, OpenAI, and DeepMind emphasize, you will probably want to work with tools from the same vendors that are building the models behind the LLM-based software that will increasingly displace traditional packaged applications. The technology has the added appeal that it's integrated into these companies' coding platforms. Claude Code Security and Aardvark are already integrated, in preview form, into the Claude Code and OpenAI Codex tools. While CodeMender is still a research project, it's clear that at some point it could be part of Google's AI Studio development tool for Gemini, Imagen, and its other models. However useful those tools prove themselves, cybersecurity is too broad a field, and the problem is too great in scope and too profound in its root causes, for code-scanning tools to make AI outputs safe. Within the realm of scanning source code, analyzing issues, and patching or redesigning, the problem is larger than a single piece of source code. Modern software is known in the field as an "artifact," a composition of numerous files from many sources. A given program includes libraries, frameworks, and other elements that must all perform reliably together. In a recent blog post, JFrog's CTO and co-founder, Yoav Landman, explained that, "Code is no longer the final product. It is an intermediate step. The real output -- the thing that gets shipped, deployed, and executed -- is a binary artifact: A container image. A package. A library. A compiled release." Within the broader realm of technology, scanning and fixing code is a small portion of what cybersecurity firms, such as Palo Alto, Zscaler, and Check Point, do, or what Dynatrace, Splunk, and Datadog do in observability. Firewalls exist at a more basic level than the application code itself, securing the perimeter of a computer network. Their role is to keep out bad actors before they can get near vulnerable code. So-called endpoint security tools similarly ensure that compromised host computers do not become launch pads for attack. Meanwhile, a "Secure Access Service Edge" tool is cloud-based software that identifies and authenticates users on a network so that only the right parties interact with programs. None of those issues is resolved by having less buggy source code. Tools such as "Security Information and Event Management" (SIEM) sit above the network and the apps.
These tools tell a security professional what is happening across a computer fleet in real time. While it is nice to fix code before it ships, SIEM does things that scanning code will never do. The tool shows things as they develop that demand urgent attention because they're already causing issues. If the code is buggy, it can wait, and probably should wait. When something potentially catastrophic is happening across an entire computer network, time is of the essence. The companies selling SIEM, such as Palo Alto and Zscaler, are employing AI to speed up the work that security professionals do. However, software won't replace the "throat to choke" when things are going wrong. Security vendors exist because they have people who pick up the phone in the middle of the night and work against the clock to find and fix issues that are larger than a single piece of bad code. Anthropic and OpenAI are not generally known for picking up the phone, although Google's Cloud unit can offer its own security operations as an additional hand. On a more profound level, recent research has shown that the frontier of AI, agentic systems, is itself plagued with potentially catastrophic engineering and design faults. Researchers at MIT last week explained that numerous commercially shipping AI agent systems lack such basic features as published security audits or a means to shut down rogue agents. Researchers led by Northeastern University recently revealed the results of extensive red-team efforts where multiple AI agents interoperate, mostly without a person in the loop. They found "chaos" ensued: bots trying to shut down other bots; bots that "shared" malicious code with one another to expand the "threat surface" of cyber risk; and bots that mutually reinforced bad security practices. One way to deal with that chaos is to build new AI training data sets gathered in the wild. Software and services firm Innodata is one vendor helping the giants of AI to do that. "The adversaries are extremely creative, and they're coming up with things which the models that have been trained in lab environments have never seen before," Jack Abuhoff, Innodata's CEO, told ZDNET. "What do you do about that? You need high-quality, semantically diverse, scalable adversarial attacks with which to stress-test the agents." Because AI and agents have their own faults, one stock analyst at Barclays Bank who covers the cybersecurity vendors, Saket Kalia, mused recently, "If the code developer is offering the code security tool, is that like the fox guarding the hen house?" AI will inevitably be used to help fix code. The biggest contribution that Claude Code Security, Aardvark, and CodeMender can offer is not to magically solve cybersecurity, but to reduce the incredible number of avoidable software failures. In an article in the November issue of the scholarly journal IEEE Spectrum, titled "Trillions spent and big software projects are still failing," long-time software chronicler Robert N. Charette pointed out that $5.6 trillion is spent annually on IT, but "software success rates have not markedly improved in the past two decades." Even for AI, it's a grand challenge. As Charette wrote, "there are hard limits on what AI can bring to the table" to solve software engineering.
"As software practitioners know, IT projects suffer from enough management hallucinations and delusions without AI adding to them."
[8]
Automating More Security Decisions Key To Keeping Up With AI Attacks: Experts
While AI and agentic capabilities are transforming how cyber defense is done, it's widely recognized that the same is happening on the attacker side. What is less appreciated, according to some security experts, is the fact that defenders may need to accept a level of automation that previously would've been unthinkable. Many security decisions may need to be automated in a way that many organizations have thus far been uncomfortable with -- due to the risk of business disruption, experts told CRN. "When you take action and you take down the CEO's email, you're worried about getting fired," said Paul Nguyen, co-founder and co-CEO of identity security startup Permiso. AI-powered attacks, however, are changing the trade-offs and may force security teams to adopt a new calculus around automation in cyber defense. If attackers are increasingly operating at breakneck "machine speed," defensive decision-making simply can't remain at the pace of human thinking, according to experts. "Autonomous attacks don't change what attackers want -- they change how fast they get there," said Morgan Adamski, a principal and U.S. leader in the cyber, data and technology risk business at PricewaterhouseCoopers. This means a substantially greater volume of cyberattacks -- as well as attacks that move much faster than in the past -- is just about inevitable going forward, experts said. The bottom line is that "there's no way human-powered response is going to keep up with machine-powered attacks," said Dov Yoran, co-founder and CEO of Command Zero, a startup offering an LLM-powered cyber investigation platform. New findings from CrowdStrike have revealed a massive acceleration in "breakout time," the time it takes for an attacker to move from one compromised host to another. The cybersecurity giant's recently released 2026 Global Threat Report found that the average breakout time for cybercriminals dropped to 29 minutes in 2025 -- meaning the attacks were 65 percent faster. Additionally, the fastest breakout time in 2025 was just 27 seconds, according to CrowdStrike's report. "That means defenders are facing an unbelievable amount of pressure," said Adam Meyers, senior vice president for counter adversary operations at CrowdStrike, during a recent briefing with media. "They have to deal with potential breach every 30 seconds. And so that is extensive work from their perspective." While security teams have perennially struggled with alert fatigue -- an overload of alerts from tools, many of which end up being false positives -- the acceleration of AI-driven attacks is certain to exacerbate the problem, according to security experts. While security teams are "drowning in alerts," AI is especially well-equipped for handling much of the "data drudgery" that takes up security analysts' time, Command Zero's Yoran said. The Security Operations Center (SOC) is without a doubt one of the first places to deploy AI and agentic capabilities for automating more security decision-making, according to BlackLake Security's Kurt Wagner.
With the help of "agentic SOC" tools coming onto the market, "you're able to augment your SOC and automate a lot of the Level 1 and Level 2 work that's usually done by analysts," said Wagner, director of sales at Austin, Texas-based BlackLake, No. 311 on CRN's Solution Provider 500 for 2025.

Going forward, many security teams are likely to have to confront a broader set of challenges in the age of AI-intensified attacks, including cultural issues, experts said. "From a cultural standpoint, I think we also have to move to giving security a little more power -- to be able to say, 'Something really bad is happening. We have to be able to mitigate this now,'" said Permiso's Nguyen.

At the very least, it will become entirely necessary to automate security responses to previously known threats, according to Gonen Fink, executive vice president of products for Cortex and cloud at Palo Alto Networks. "I think there's still customers that are hesitant to use this [technology] to make decisions on unknown threats," Fink said. However, with known threats, "you could go to a place which will be a completely autonomous, automated process -- and leave the humans to look at [threats] that are completely new," he said.

Around the tech industry, many engineering teams are embracing AI to maximize their productivity, said Ian Ahl, CTO at Permiso. However, "I feel like on the defense side, we're hesitant to embrace some of the new technology ourselves to fight back in this," Ahl said. That may not be optional for much longer, though, according to security experts. In addition to enabling threat actors to accelerate and broaden their attacks, the rise of LLM-powered coding tools and "vibe coding" has meant a significant influx of new software -- and new vulnerabilities.

"The massive amount of volume coming in terms of software projects is quite overwhelming," said Peter Girnus, senior threat researcher at Trend Micro's Zero Day Initiative. "I think the industry really has to figure out how to add that security piece between how these agents work -- between the models and the tool chains and the various components of the AI ecosystem."

Responding to critical zero-day vulnerabilities with automated deployment of new patches may also be an area that organizations will have to consider more seriously than in the past, according to experts. Identity and access security issues, meanwhile, are another area that will need remediation faster than a standard ticketing system allows, Nguyen said. Ultimately, "if the adversary is automating your attack, you have to be able to also automate the response," he said. "We have to change our risk appetite for the security team to be able to take mitigation action faster."
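Fink's split between known and unknown threats amounts to a routing rule in the SOC pipeline: alerts matching established threat intelligence trigger an automated playbook, while anything novel is escalated to a person. A minimal sketch of that policy, with hypothetical signature names and a toy threat-intelligence set, not any vendor's actual API:

```python
# Hypothetical sketch: route alerts matching known threat intel to automated
# playbooks; escalate anything unrecognized to a human analyst queue.
KNOWN_THREATS = {
    "emotet_c2_beacon": "isolate_host",
    "mimikatz_execution": "kill_process_and_rotate_creds",
}

def route_alert(alert: dict) -> str:
    playbook = KNOWN_THREATS.get(alert["signature"])
    if playbook is not None:
        # Known threat: fully autonomous, machine-speed response.
        return f"AUTO: run playbook '{playbook}' on {alert['host']}"
    # Unknown threat: a human brings context no playbook encodes.
    return f"HUMAN: escalate '{alert['signature']}' on {alert['host']} to Tier 2"

print(route_alert({"signature": "mimikatz_execution", "host": "fin-ws-07"}))
print(route_alert({"signature": "never_seen_behavior", "host": "db-01"}))
```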
[9]
AI innovation itself introduces new attack vectors: Philippa Cogswell of Palo Alto Networks
Philippa Cogswell, Managing Partner, JAPAC, Unit 42, Palo Alto Networks

AI has reduced the cost of launching sophisticated cyberattacks. Coordinating an attack, which earlier took weeks, can now be automated and executed in a matter of hours. This means that even attackers with limited skillsets and resources can use AI to iterate and launch attacks at scale. It explains the recent increase in the volume and velocity of cyberattacks. According to Palo Alto Networks' Global Incident Response Report 2026, the time taken by the fastest 25% of intrusions to reach the data exfiltration stage dropped to 72 minutes in 2025, down from 285 minutes in 2024.

In an interaction with CXOtoday, Philippa Cogswell, Managing Partner, JAPAC, Unit 42, Palo Alto Networks, elaborates on the risks that enterprises face not just from AI-driven attacks but also from their own AI implementations. Cogswell also talks about the critical role AI can play in cybersecurity by doing much of the heavy lifting, while keeping humans in the loop to provide the essential organizational context. Edited excerpts:

Q. AI has become a force multiplier for threat actors. But your report also shows that 90% of breaches were caused by preventable exposure and not attacker sophistication. Does that mean the AI-powered threat is a future concern and not an immediate one?

It is an immediate concern based on what we have seen in the last few years. We saw a lot of threat actors experimenting with AI, but now we are actually seeing it more routinely used in a lot of cases that we are investigating. Last week, I was talking to my team about a case here in India that they were investigating, and they could tell by the soft and subtle language used during the ransomware negotiation that an LLM was being used to help guide the negotiation from the threat actor's side. It's at the point where we are seeing it more frequently than we have before. We're not necessarily aware of seeing it from an end-to-end perspective, but there are cases where we're seeing discrete use around things like reconnaissance or some sort of infrastructure setup. In the negotiation, it really is enabling them in a way it hasn't done before.

Q. Is AI being used to develop sophisticated malware too?

We are definitely seeing it being used for malicious scripting purposes, but we are still not at the stage of being convinced that it is being used to develop sophisticated malware. I think there is a big difference between those two things: enabling some of the scripting, templating, and those sorts of things, as opposed to sophisticated malware development.

Q. Are Indian firms also seeing these new threats?

We are seeing it everywhere. One of the things I would say is that even if we don't see exact trends happening in a specific country immediately, it is essential for any organization to be aware of global trends, because that particular trend, tactic, or threat actor can be on your doorstep tomorrow. A good example of this is a threat group that we have been tracking for over two to three years now. It's called Muddled Libra, and initially, they were very heavily US-orientated in terms of their targeting. But over time, as they developed and evolved, they changed the industry sectors and geographies they were targeting. Last year, they were targeting the retail sector, and now they have flipped to the aviation sector. In recent months, we saw them targeting Marks & Spencer in the UK, and shortly after Qantas in Australia.
There is a lot to learn from this global perspective. The question is how you use those trends to determine whether there is a material shift in the threat landscape that requires you to rethink your defensive strategy.

Q. Enterprises too are integrating AI everywhere. What challenges can they face because of AI?

AI innovation itself introduces new attack vectors. It really depends on the type of technology you are adopting and your implementation strategy. Since there are different types of AI, the risk is also influenced by the specific datasets you are feeding into the system. In terms of the attack vector, we have seen recent cases where threat actors inside an environment specifically target AI assistants. Most organizations today leverage AI assistants to automate common helpdesk queries and similar tasks. We investigated a case not too long ago where an insider used an AI assistant to gather information. They used it to find internal guides to understand the environment, and through this, they were actually able to launch a DDoS attack. To be honest, the individual probably wouldn't have had the knowledge to do so if they hadn't been able to ask the assistant those questions and receive such timely, actionable feedback. It's critical for companies to consider how they are monitoring these systems. While we are getting great use out of them, we must also keep in mind the security risks that come with them.

We also have a lot of challenges coming our way because of data integrity inside organizations. If you think about the use of AI and your reliance on it, you need to know that the data used to train models has integrity. What happens if you can't rely on that anymore? That could potentially become a new avenue for extortion.

Q. How can AI help in securing a digital landscape that is becoming increasingly complex?

I think it's really about how AI can handle very large datasets, because any enterprise environment is incredibly complex nowadays. Organizations have adopted several digital technologies, ranging from cloud, 5G, and IoT to AI. With this adoption and the level of connectivity we have now compared to five or ten years ago, we need to rethink how we can actually secure these environments. AI is really effective when we are dealing with these vast amounts of data. I remember being a security analyst many years ago, and it always felt like looking for a needle in a haystack. It was a challenging and daunting task. Palo Alto Networks talks a lot about platformization, where we want to cover these disparate environments, bring that information together, and let the technology do the front-end lifting for humans. The system then passes off only the specific incidents that require a person's attention. Effectively, we are talking about stitching all of those sources together. Because these environments are so complex, we need a better view so we can follow an identity all the way through the environment. We have seen all too often that an attacker moves across multiple different attack surfaces, and unless we are able to bring all of those things together, we are simply going to miss it.

Q. The cost of launching sophisticated attacks has reduced significantly because of AI. How can AI be used to flip this economic advantage and make it expensive for bad actors to carry out machine-speed attacks?

The good news is that in the same way a threat actor can leverage AI to carry out an attack, we as defenders have been using it as well.
The reality for us is that to respond to the speed and scale that AI introduces, we need to be able to leverage AI to identify threats effectively. Many companies only realize there has been an incident when they are ransomed, or by the time a third party notifies them that their data has been exfiltrated. This is an opportunity to identify things much earlier, at the point of compromise. We need more behavioral analytics that can identify activity, as opposed to endlessly chasing signatures, a particular hash, or a particular IOC (indicator of compromise). We need to stop thinking that way; we won't be able to secure environments that way. Instead, we should be leveraging technologies that give us a machine-time response. Ultimately, that is the core objective of adopting AI from a defensive security standpoint.

Q. Can human-in-the-loop become a liability if decisions are waiting to be approved by humans?

I am still a firm believer that we can use AI in that sort of correlation to help with those vast sources. I am also very hopeful that we can use automation and AI to take away some of the heavy lifting that people are doing at the moment. But I still think there is always a place for humans, particularly in security operations, where you have to think about the context of what you are seeing. That context can be: as an organization, what am I trying to protect? What is most sensitive to me? Who are my key partners? An AI engine doesn't necessarily always have these things at its disposal. So, I think there is always going to be a place for a human in the loop to bring that context and understanding.

Q. With attacks becoming more sophisticated because of AI, has the budget on cybersecurity also increased to keep pace? How do you define and justify ROI on AI investments in cybersecurity?

I think any organization needs to look at what they are actually protecting: what is the most sensitive information they are holding, what gives them competitive advantage, what drives revenue, and what is their risk appetite. Every organization is distinctly different. The controls they have in place and the budget allocated are all fundamentally different based on what they do. When you view that context in terms of threats, you begin to understand the threat actors. You can understand how they might impact the critical assets that are most important to your organization and your revenue. For many organizations, this means productivity outages: being unable to operate, manufacture products, or pay staff. Then there are potential replacement costs if equipment is damaged as a result of the incident. Then there are response costs in the form of lawyers and PR firms to manage the fallout. We have seen cases where customers move to a competitor because they couldn't get the service they needed or because they lost trust. I think that is where you can start to understand what your cybersecurity investments should be.
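Cogswell's contrast between chasing individual hashes or IOCs and flagging anomalous behavior can be illustrated with a toy per-user baseline. This is a deliberately simplified sketch; the user names and figures are invented, and real behavior analytics would use far richer features than daily upload volume:

```python
import statistics

# Toy per-user baseline: megabytes uploaded per day over a trailing window.
history = {"alice": [120, 95, 140, 110, 130]}  # hypothetical data

def is_anomalous(user: str, todays_mb: float, z_threshold: float = 3.0) -> bool:
    """Flag behavior that deviates sharply from the user's own baseline,
    rather than matching traffic against a list of known-bad IOCs."""
    baseline = history[user]
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline) or 1.0  # guard against a zero stdev
    return abs(todays_mb - mean) / stdev > z_threshold

print(is_anomalous("alice", 125))   # False: within the user's normal range
print(is_anomalous("alice", 4000))  # True: looks like bulk exfiltration
```

The point of the design is that the second call fires even though no signature, hash, or IOC for the activity exists yet.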
[10]
'Cybersecurity response is moving slower than AI adoption, creating more issues'
The rapid pace of AI adoption has magnified the structural weaknesses within the cybersecurity industry. Tool sprawl and fragmented security layers have created a huge vulnerability gap and are hindering the ability to deal with emerging threats triggered by rapid AI adoption. According to Gartner, 40% of enterprise applications will be using task-specific AI agents in 2026, which makes visibility more critical than ever. In an interaction with CXOtoday, Binod Singh, Founder and CEO of Cross Identity, an identity and cybersecurity company, talks in detail about some of these challenges and how they can be addressed. Singh also dwells on the evolving role of Zero Trust architecture in the age of agentic AI, and how and where his company is using AI to negate AI-driven risks. Edited excerpts:

Q. How do you see the Zero Trust architecture evolve in response to the growing use of GenAI and agentic AI?

Zero Trust architecture has seen shifts based on evolving threats and technological advancements, but the most significant shift is happening because of AI. The traditional user-to-app security model is being replaced by a machine-to-machine (M2M) and agent-to-agent (A2A) reality.

The first shift is the rise of non-human identity management. In the past, Zero Trust was essentially focused on verifying human employees. Today, autonomous agents outnumber humans in many enterprise environments. Organizations are moving away from static service accounts or shared application programming interface (API) keys. Agents are now assigned cryptographic identities; every thought or action an agent takes must be signed and authenticated.

The second major shift is micro-segmentation at the prompt level. Traditional micro-segmentation is used to isolate networks or workloads. With GenAI, the attack surface is often the data layer itself, not the network or workload layer. If an agent has access to a vector database, it might accidentally leak sensitive information through a prompt response.

The third is just-in-time permissions. Agentic AI can move at machine speed, performing hundreds of tasks in a second. Enterprises cannot afford permanent permissions or standing privileges, because you don't know which privilege has to be changed at what point in time. If an agent with standing privileges is compromised, it can lead to massive lateral movement.

The fourth shift is continuous behavioral monitoring, because the behavior of the agent itself can be unpredictable. Security is moving from allow/deny lists to continuous risk scoring.

Q. Given the distributed nature of modern AI pipelines (cloud services, APIs, third-party models), what are the most critical control points for implementing a Zero Trust architecture?

To understand this, let's look at the nature of the distributed AI pipeline. Here, the perimeter doesn't just disappear; it fragments into hundreds of micro-boundaries. This is what differentiates AI pipelines from traditional ones. Implementing Zero Trust now requires moving security from the network edge to the logical touchpoints where data and logic intersect. Since enterprises frequently use a mix of OpenAI, Anthropic, and internal models, a centralized gateway becomes the primary policy enforcement point, serving as the single entry and exit point for all model traffic. The Zero Trust controls we are working on should also perform request-response sanitization: scrubbing PII from outgoing prompts and scanning incoming model responses for jailbreak attempts and toxic content before they reach your internal systems.
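A gateway of the kind Singh describes inspects traffic in both directions. The sketch below is a minimal illustration, not Cross Identity's implementation: the two regex patterns and the jailbreak markers are placeholders, and a production gateway would use trained PII classifiers and response screening rather than keyword matching:

```python
import re

# Hypothetical patterns; real gateways use proper PII detectors.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}
JAILBREAK_MARKERS = ("ignore previous instructions", "developer mode")

def scrub_outgoing(prompt: str) -> str:
    """Redact PII before the prompt leaves for an external model."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt

def screen_incoming(response: str) -> str:
    """Quarantine responses that carry jailbreak content toward internal systems."""
    lowered = response.lower()
    if any(marker in lowered for marker in JAILBREAK_MARKERS):
        raise ValueError("response quarantined: possible jailbreak content")
    return response

print(scrub_outgoing("Summarize the ticket from jane.doe@example.com, SSN 123-45-6789"))
print(screen_incoming("The ticket concerns a password reset request."))
```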
The second is the machine-to-machine (M2M) identity provider. In a distributed pipeline, the user is often a Python script in a container or a third-party API. The Zero Trust action we can take is to use Workload Identity Federation, which helps us move away from static API keys in favor of secure, short-lived OpenID Connect (OIDC) tokens.

Q. Are API-based integrations becoming a liability because of AI?

It is a well-known fact that APIs are among the most common points of failure when it comes to security breaches. Security vulnerabilities don't happen because the applications are fractured; the applications are pretty solid. The vulnerability happens at the seams where two applications are integrated. And that is leading to what we call API bloat, which is leading to vendor vulnerabilities. The scary reality is that the response from our cybersecurity technologies is moving much slower than the AI technology itself, which is creating more and more issues.

Q. Has the vendor and solutions sprawl worsened the fragmentation problem and made managing security stacks difficult?

If you look at cybersecurity, it is split into five different layers: data security, network security, application security, device security, and identity security. Each of these layers is currently fragmented. Not only that, within a particular layer you have more fragments. For instance, in network security there are at least seven to eight areas, and each one has its own tools, which don't talk to each other. The same is true for the other layers as well. That is why cybersecurity is so fragmented, and that is the fundamental reason why we are not able to handle cybersecurity threats. The solution is to somehow make the whole of cybersecurity run as one single machine, where many of the issues caused by segmentation vanish. Identity security is the only one that can become that glue, because it is a common point whether you are talking at the network layer, the data layer, or any of the other layers. Unfortunately, the identity layer itself is fragmented into nine different components. When a unifying layer is so fragmented, how can it become a unifier for the other layers? That is the fundamental question. Unless you do that unification, there is no way you can do bigger things. That is where cybersecurity as an infrastructure comes into play. To achieve this, we need a unified cybersecurity infrastructure, which eliminates integration taxes and API dependencies.

Q. How and where are you using GenAI and agentic AI in your security solutions?

We are trying to use these technologies across three primary layers. The first one is conversational governance. Instead of complex query builders, security professionals use natural language to ask things like "show me all agents that have not rotated their keys in the last 30 days" or "generate a least-privilege policy for our new HR chatbot." The second area is autonomous remediation. If one agent is being tricked by another agent, an autonomous security agent can instantly revoke the session and quarantine the identity without waiting for a human to log into a console. That is going to be one of the major uses of agentic AI for us. The third area is smart user lifecycle management: the AI agent handles the grunt work of identity provisioning, identifying zombie accounts and performing automated access reviews by analyzing actual usage data rather than static job titles.
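The usage-driven review Singh describes reduces, at its simplest, to a last-activity scan over authentication logs. A hypothetical sketch, with invented account names and a 90-day review window chosen purely for illustration:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical last-seen timestamps drawn from authentication logs.
last_activity = {
    "svc-payroll-export": datetime(2025, 3, 1, tzinfo=timezone.utc),
    "alice": datetime.now(timezone.utc) - timedelta(days=2),
}

def find_zombie_accounts(activity: dict, max_idle_days: int = 90) -> list[str]:
    """Flag identities with no authenticated activity inside the review window,
    regardless of what their job title or role assignment says."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_idle_days)
    return sorted(acct for acct, seen in activity.items() if seen < cutoff)

print(find_zombie_accounts(last_activity))  # flags the long-idle service account
```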
These are some of the areas where we are using AI, and it is making things far more effective.
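Singh's earlier point about Workload Identity Federation, replacing static API keys with short-lived credentials, can also be sketched. The HMAC-signed token below is purely illustrative and hand-rolled for self-containment; real deployments would use OIDC tokens minted by a workload identity provider, and every name here is hypothetical:

```python
import base64, hashlib, hmac, json, time

SECRET = b"demo-only-secret"  # in practice, held by the identity provider

def mint_token(workload: str, ttl_seconds: int = 300) -> str:
    """Issue a short-lived credential instead of a never-expiring API key."""
    claims = {"sub": workload, "exp": int(time.time()) + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return f"{body.decode()}.{sig}"

def verify_token(token: str) -> dict:
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise PermissionError("bad signature")
    claims = json.loads(base64.urlsafe_b64decode(body))
    if claims["exp"] < time.time():
        raise PermissionError("token expired: re-attest the workload")
    return claims

tok = mint_token("payments-ingest-job")
print(verify_token(tok)["sub"])  # valid for five minutes, then useless if stolen
```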
An open-source AI security testing platform called CyberStrikeAI was used to breach over 600 Fortinet FortiGate devices across 55 countries in a sophisticated AI-powered attack. Meanwhile, enterprise AI agents are emerging as potential insider threats, with rogue agents capable of accessing credentials, modifying databases, and initiating unauthorized communications on behalf of companies.
An unknown threat actor recently leveraged an open-source security tool called CyberStrikeAI to execute AI-powered attacks targeting Fortinet FortiGate appliances, compromising over 600 devices across 55 countries [2]. The campaign, which utilized generative AI services like Anthropic Claude and DeepSeek, represents a concerning evolution in how threat actors are deploying offensive AI security tools to automate and scale their operations [2].
Team Cymru researchers traced the attacks to IP address 212.11.64[.]250, which was observed running CyberStrikeAI on port 8080 and communicating with targeted FortiGate devices [3]. The open-source security tool, built in Go and integrating over 100 security tools, enables automation from conversational commands to vulnerability discovery, attack-chain analysis, and result visualization [2]. Between January 20 and February 26, 2026, researchers identified 21 unique IP addresses running CyberStrikeAI, with servers primarily hosted in China, Singapore, and Hong Kong [3].

While external AI threats escalate, enterprise AI agents are creating new AI security risks from within organizations. These autonomous agents, designed to handle tasks like procurement, communications, and database management, can become the ultimate AI insider threat when they malfunction or are compromised [4]. Unlike external attackers who must breach defenses, rogue AI agents already possess credentials and access to spend money, modify files, and initiate communications on behalf of companies [4].

Recent incidents illustrate these AI security risks. In 2022, an Air Canada chatbot promised a discount that wasn't available, leading to a lawsuit the company lost [4]. In 2025, an AI hiring bot exposed personal information from millions of McDonald's job applicants, with the AI company reportedly using the password "123456" [4]. Security researchers also demonstrated that a prompt-injection attack could expose Salesforce's CRM platform to potential data theft [4].

Vulnerabilities in widely used platforms underscore the scope of the problem. The ServiceNow AI Platform contained a flaw allowing unauthenticated users to impersonate authenticated users and drive privileged agentic workflows [4]. Amazon Q's VS Code extension suffered a GitHub token error enabling malicious code injection directly into repositories [4]. OpenAI's Codex CLI coding agent was found vulnerable to attacks where harmful instructions embedded in project files could trigger malicious commands on developers' machines [4].

Google's Threat Intelligence Group documented a significant shift in how adversaries exploit AI capabilities. While threat actors initially used Gemini for basic productivity tasks like research and troubleshooting code, they now deploy AI-enabled malware in active operations [5]. This marks a new operational phase involving tools that dynamically alter behavior mid-execution [5].
Anthropic detected a professional influence-as-a-service operation using Claude not just for content generation, but to decide when social media bot accounts would comment, like, or re-share posts from authentic users [5]. The company also observed credential stuffing operations, recruitment fraud campaigns, and novice actors using AI to enhance their technical capabilities for malware generation beyond their skill level [5].

Deepfake technology represents another escalating concern. ByteDance's Seedance 2.0 launch demonstrated video generation capabilities so convincing that distinguishing deepfakes from authentic content becomes nearly impossible [5]. Voice cloning now requires as little as three seconds of audio to replicate someone's voice and conversational tone [5].
Business leaders emphasize that effective AI security governance requires cross-functional collaboration. Barry Panayi, group chief data officer at Howden, noted that cybersecurity knowledge must extend beyond IT specialists, with professionals across all roles understanding AI security risks [1]. The multifaceted nature of AI cybersecurity demands new roles and responsibilities, with teams sharing knowledge to create more powerful mitigation strategies [1].
Nick Pearson, CIO at Ricoh Europe, stressed that managing cybersecurity in an age of AI requires returning to fundamentals: secure by design, established standards, and teams that analyze and balance capabilities [1]. Rather than creating separate frameworks for AI, organizations should integrate AI into existing data governance structures that address issues like data leakage [1].

Martin Hardy, cyber portfolio and architecture director at Royal Mail, highlighted the importance of AI governance forums that don't stop AI usage but ensure appropriate oversight [1]. Understanding where data resides and what data feeds AI solutions is key to success, as is recognizing that AI serves as an aid rather than a complete answer [1].

John-David Lovelock, chief forecaster at Gartner, noted that organizations cannot yet benefit from measurable, definable, and certifiable AI safety standards [1]. End-user security requirements remain unmet by many AI providers, creating a gap between expectations and reality [1].

The CyberStrikeAI developer, who goes by the alias Ed1s0nZ, has published several tools demonstrating interest in exploitation and jailbreaking AI models [2]. The developer's GitHub activities indicate interactions with organizations supporting potentially Chinese government state-sponsored cyber operations, including Chinese private sector firms with known ties to the Ministry of State Security [2]. References to receiving a CNNVD 2024 Vulnerability Reward Program award were later scrubbed from the developer's profile [3].

As adversaries increasingly embrace AI-native orchestration engines, defenders must prepare for environments where tools like CyberStrikeAI significantly lower the barrier to entry for complex network exploitation [3]. The combination of automation, generative AI capabilities, and integrated security tools enables even low-skilled operators to execute sophisticated attacks [3].