2 Sources
2 Sources
[1]
Friend or foe? AI: The new cybersecurity threat and solutions
Cyber-attacks have more than doubled worldwide in just four years, from 818 per organization in 2021 to almost 2,000 per organization last year, according to the World Economic Forum (WEF). It's a staggering statistic. And small businesses are particularly exposed, now seven times more likely to report insufficient cyber-resilience than they were in 2022. Whether we like it or not, artificial intelligence (AI) tools have had a big role to play here, not just with the increasing volume of attacks but also the sophistication. Risks are now emerging at every layer of the AI stack, from prompt injection and data leakage to AI-powered bot scraping and deepfakes. As a recent industry report reveals, attackers are now using large language models (LLMs) to craft convincing phishing campaigns, write polymorphic malware, and automate social-engineering at scale. The result is a threat environment that learns, adapts, and scales faster than human analysts can respond. What lies beneath the layers? AI systems are built in layers, and each one brings its own weak spots. At the environment layer, which provides computing, networking and storage, the risks resemble those in traditional IT but the scale and complexity of AI workloads make attacks harder to detect. The model layer is where manipulation starts. Prompt injection, non-compliant content generation and data exfiltration are now among the top threats, as highlighted in the OWASP 2025 Top 10 for LLM Applications. The context layer, home to retrieval-augmented generation (RAG) databases and memory stores, has become a prime target for data theft. Meanwhile, at the tools and application layers, over-privileged APIs and compromised AI agents can give attackers the keys to entire workflows. In other words, the attack surface is expanding in every direction, and with it, the need for smarter defenses. The answer isn't to abandon AI but to use AI to secure AI. So a comprehensive security framework needs to span the full AI lifecycle, protecting three essential layers: model infrastructure, the model itself, and AI applications. When security is embedded into business workflows rather than bolted on afterward, organizations gain efficient, low-latency protection without sacrificing convenience or performance. Security teams are already deploying intelligent guardrails that scan prompts for malicious intent, detect anomalous API behavior and watermark generated content for traceability. The latest generation of AI-driven security operations applies multi-agent models to analyze billions of daily events, flag emerging risks in real time and automate first-response actions. According to PwC's Digital Trust Insights 2026 survey, AI now tops the list of investment priorities for Chief Information Security Officers (CISOs) worldwide, a sign that enterprises are finally treating cyber resilience as a learning system, not a static checklist. Threats that lurk in the shadows Yet even as enterprises strengthen their defenses, a new and largely self-inflicted risk is taking shape inside their own networks. It's called shadow AI. In most organizations, employees are using generative tools to summarize reports, write code or analyze customers, often without official approval or data-governance controls. According to one report from Netskope, around 90 percent of enterprises now use GenAI applications, and more than 70 per cent of those tools fall under shadow IT. Every unmonitored prompt or unvetted plug-in becomes a potential leak of sensitive data. Internal analysis across the industry suggests that nearly 45 percent of AI-related network traffic contains sensitive information, from intellectual property to customer records. In parallel, AI-powered bots are multiplying at speed. Within six months, bot traffic linked to data scraping and automated requests has quadrupled. While AI promises smarter, faster operations, it's also consuming ever-greater volumes of confidential data, creating more to defend and more to lose. A safety-belt for AI Governments and regulators are beginning to recognize the scale of the challenge. Many AI governance rules all point to a future where organizations will be expected to demonstrate not only compliance, but continuous visibility over their AI systems. Security postures will need to account for model training, data provenance, and the behavior of autonomous agents, not just network traffic or access logs. For many, that means embedding security directly into the development pipeline, adopting zero-trust architectures, and treating AI models as living assets that require constant monitoring. Looking ahead, the battle lines are already being redrawn. The next phase of cybersecurity will depend on a dual engine - one that protects AI systems while also using AI to detect and neutralize threats. As machine-learning models evolve, so too must the defenses that surround them. Static rules and manual responses can't keep pace with attackers who automate creativity and exploit speed. What's needed is an ecosystem that learns as fast as it defends. That shift is already underway. Multi-agent security platforms now coordinate detection, triage and recovery across billions of daily events. Lightweight, domain-specific models filter out the noise, while larger reasoning models identify previously unseen attack patterns. It's an intelligence pipeline that mirrors the adversaries, only this one's built for defense. The application of intelligence The future of digital security will hinge on collaboration between human insight and machine intuition. In practical terms, that means re-training the workforce, as much as re-architecting the infrastructure. Analysts who can interpret AI outputs, data scientists who understand risk, and policymakers who build trust through transparency are very much needed. The long game is about confidence, not just resilience. Confidence that the systems powering modern life are learning to protect themselves. Because ultimately, AI isn't the villain of this story. The same algorithms that make attacks more potent can also make protection more precise. The question for business leaders everywhere is whether they'll invest fast enough to let intelligence, not inertia, define the next chapter of cybersecurity. We've featured the best endpoint protection software. This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
[2]
AI vs AI: Defense Without Humans in the Loop | PYMNTS.com
By completing this form, you agree to receive marketing communications from PYMNTS and to the sharing of your information with our sponsor, if applicable, in accordance with our Privacy Policy and Terms and Conditions. Seconds later, another system steps in. A defensive AI spots the abnormal pattern, tightens controls and pauses a set of transactions before any money moves or data leaves the company. By the time a human analyst reviews the dashboard, the episode is already over. This is the new operational reality facing enterprise security teams. The most consequential decisions inside corporate networks are increasingly made not by analysts in a security operations center, but by competing artificial intelligence systems acting autonomously. Offensive AI agents probe APIs, manipulate retrieval layers and adapt continuously to countermeasures. Defensive agents triage alerts, isolate workflows and remediate vulnerabilities without waiting for human approval. What once required coordinated attackers and days of reconnaissance now unfolds in automated cycles, often before anyone realizes a conflict has begun. The World Economic Forum reported that 87% of organizations believe AI-related vulnerabilities are increasing risk across their environments. The threat landscape has shifted from AI as a tool to AI as an operation embedded throughout the attack lifecycle. Gartner projects that 17% of cyberattacks will employ generative AI by 2027, signaling that AI-driven techniques are moving from experimentation to mainstream threat capability. The result is compounding scale and variability. Artificial intelligence systems can generate unique attack instances while pursuing the same objective, weakening signature-based detection models that rely on pattern repetition. When each payload or prompt sequence is slightly different, static defenses struggle to keep pace. The attack surface is also expanding beyond traditional endpoints. Microsoft researchers have highlighted how AI integrations themselves can become entry points, particularly through indirect prompt injection. In these scenarios, malicious instructions are embedded in content that enterprise AI systems later ingest, redirecting agent behavior without breaching hardened infrastructure. In response, enterprises and investors are shifting toward autonomous remediation. Bain Capital Ventures and Greylock led a $42 million Series A in Cogent Security, betting that AI agents can compress the gap between vulnerability detection and resolution. The scale of the backlog illustrates the urgency. More than 48,000 new common vulnerabilities and exposures were reported in 2025, per TechTarget, a 162% increase from five years earlier, with attackers often probing new disclosures within minutes. Cogent's model reflects a broader architectural change. Rather than replacing existing tools, it aggregates signals from scanners, asset inventories and cloud security platforms, then uses AI to prioritize and trigger remediation workflows automatically through ticketing and patching systems. "Security teams are drowning in coordination work, chasing down system owners, writing tickets, proving fixes happened," Cogent CEO Vineet Edupuganti told Fortune. The company says customers are resolving their most serious vulnerabilities 97% faster using autonomous workflows. In optimal scenarios, defensive agents remove the need for human intervention on a specific class of vulnerability. In others, they compress triage and coordination, so engineers focus on higher-order judgment. The common thread is speed. Human-speed remediation is no longer sufficient when AI-driven attackers operate in continuous loops. Data quality remains a constraint. Behavioral detection and anomaly classification depend on high-fidelity telemetry and clean baselines. Defensive systems trained on incomplete or noisy data risk generating excessive false positives or missing novel attack paths entirely. At the same time, attackers are increasingly deploying fraudulent AI assistants designed to impersonate legitimate tools and harvest sensitive user information. As PYMNTS reported, these malicious assistants can quietly collect credentials and financial data by exploiting user trust in AI interfaces, reinforcing the need for enterprises to secure not just their networks, but the AI agents themselves.
Share
Share
Copy Link
Cyberattacks have surged to nearly 2,000 per organization annually, driven by AI-powered tools that craft sophisticated phishing campaigns and polymorphic malware. As offensive AI agents probe systems autonomously, defensive AI systems respond without human intervention, creating a new battlefield where machine learning models clash in milliseconds before analysts even see the threat.
The cybersecurity threat environment has reached a critical inflection point. According to the World Economic Forum
1
, cyberattacks have more than doubled worldwide in just four years, escalating from 818 per organization in 2021 to almost 2,000 per organization last year. Small businesses face particularly severe exposure, now seven times more likely to report insufficient cyber-resilience than they were in 2022. AI has emerged as both the catalyst for this surge and the potential solution, fundamentally reshaping how organizations defend their digital assets.
Source: PYMNTS
Attackers are now using Large Language Models (LLMs) to craft convincing phishing campaigns, write polymorphic malware, and automate social-engineering at scale
1
. The result is a threat landscape that learns, adapts, and scales faster than human analysts can respond. Gartner projects that 17% of cyberattacks will employ generative AI by 2027, signaling that AI-powered attacks are moving from experimentation to mainstream threat capability2
.Risks are now emerging at every layer of the AI stack, from prompt injection and data leakage to AI-powered bot scraping and deepfakes
1
. AI systems are built in layers, and each one brings its own vulnerabilities. At the model layer, prompt injection, non-compliant content generation and data exfiltration rank among the top threats, as highlighted in the OWASP 2025 Top 10 for LLM Applications1
.
Source: TechRadar
The context layer, home to retrieval-augmented generation databases and memory stores, has become a prime target for data theft. At the tools and application layers, over-privileged APIs and compromised AI agents can give attackers access to entire workflows. Microsoft researchers have highlighted how AI integrations themselves can become entry points, particularly through indirect prompt injection where malicious instructions are embedded in content that enterprise AI systems later ingest
2
.The most consequential decisions inside corporate networks are increasingly made not by analysts in security operations centers, but by competing artificial intelligence systems acting autonomously
2
. When an offensive AI probes APIs and manipulates retrieval layers, a defensive AI spots the abnormal pattern, tightens controls and pauses transactions before any money moves or data leaves the company. By the time a human analyst reviews the dashboard, the episode is already over.According to PwC's Digital Trust Insights 2026 survey, AI now tops the list of investment priorities for Chief Information Security Officers worldwide, a sign that enterprises are finally treating cyber resilience as a learning system
1
. Security teams are deploying intelligent guardrails that scan prompts for malicious intent, detect anomalous API behavior and watermark generated content for traceability. The latest generation of AI-driven defense applies multi-agent models to analyze billions of daily events, flag emerging risks in real time and automate first-response actions.Even as enterprises strengthen their defenses, shadow AI has emerged as a largely self-inflicted risk taking shape inside corporate networks
1
. In most organizations, employees are using generative tools to summarize reports, write code or analyze customers, often without official approval or data-governance controls. According to Netskope, around 90 percent of enterprises now use generative AI applications, and more than 70 percent of those tools fall under shadow IT1
.Internal analysis suggests that nearly 45 percent of AI in cybersecurity traffic contains sensitive information, from intellectual property to customer records
1
. Within six months, bot traffic linked to data scraping and automated requests has quadrupled. Attackers are increasingly deploying fraudulent AI assistants designed to impersonate legitimate tools and harvest sensitive user information2
.Related Stories
The scale of vulnerability detection demands autonomous solutions. More than 48,000 new common vulnerabilities and exposures were reported in 2025, a 162% increase from five years earlier, with attackers often probing new disclosures within minutes
2
. Bain Capital Ventures and Greylock led a $42 million Series A in Cogent Security, betting that AI agents can compress the gap between vulnerability detection and resolution2
.Cogent Security's model aggregates signals from scanners, asset inventories and cloud security platforms, then uses AI to prioritize and trigger remediation workflows automatically. "Security teams are drowning in coordination work, chasing down system owners, writing tickets, proving fixes happened," Cogent CEO Vineet Edupuganti told Fortune
2
. The company says customers are resolving their most serious vulnerabilities 97% faster using autonomous workflows.Governments and regulators are beginning to recognize the scale of the challenge, with many AI governance rules pointing to a future where organizations will be expected to demonstrate not only compliance, but continuous visibility over their AI systems
1
. Security postures will need to account for model training, data provenance, and the behavior of autonomous agents, not just network traffic or access logs.The World Economic Forum reported that 87% of organizations believe AI-related vulnerabilities are increasing risk across their environments
2
. The next phase of cybersecurity will depend on a dual engine that protects AI systems while also using AI to detect and neutralize threats. As machine-learning models evolve, so too must the AI-powered security measures that surround them. Static rules and manual responses can't keep pace with threats that operate in continuous loops, adapting faster than human analysts can respond.Summarized by
Navi
1
Policy and Regulation

2
Policy and Regulation

3
Business and Economy
