7 Sources
[1]
Weaponized AI is making hackers faster, more aggressive, and more successful
New research from CrowdStrike confirms that hackers are exploiting AI to deliver more aggressive attacks in less time, with the technology also giving lesser-skilled hackers access to more advanced capabilities. Beyond this, they're also exploiting the same AI systems that enterprises use: according to CrowdStrike, hackers are targeting the tools used to build AI agents, allowing them to gain access, steal credentials, and deploy malware. CrowdStrike is most worried about agentic AI systems, suggesting that they've now become a "core part of the enterprise attack surface." The security company says it observed "multiple" hackers exploiting vulnerabilities in the tools used to build AI agents, which marks a major shift from past patterns. Until now, humans have almost always been the primary entry point into a company, but CrowdStrike now warns that "autonomous workflows and non-human identities [are] the next frontier of adversary exploitation." "We're seeing threat actors use GenAI to scale social engineering, accelerate operations, and lower the barrier to entry for hands-on-keyboard intrusions," Head of Counter Adversary Operations Adam Meyers explained. Funklocker and SparkCat are two examples of GenAI-built malware in the wild, while the DPRK-nexus group Famous Chollima has also been observed using generative AI to automate all phases of its insider attack program. Scattered Spider, a group believed to consist of UK and US nationals, even managed to deploy ransomware within 24 hours of accessing systems. "Adversaries are treating these agents like infrastructure, attacking them the same way they target SaaS platforms, cloud consoles, and privileged accounts," Meyers added. Still, even though technologies like AI are playing an increasing role in speeding up attacks, CrowdStrike found that four in five (81%) interactive intrusions were malware-free, relying on human hands on keyboards to stay undetected.
[2]
Black Hat 2025: Why your AI tools are becoming the next insider threat
Cloud intrusions increased by 136% in the past six months. North Korean operatives infiltrated 320 companies using AI-generated identities. Scattered Spider now deploys ransomware in under 24 hours. However, at Black Hat 2025, the security industry demonstrated that it finally has an answer that works: agentic AI, delivering measurable results, not promises. CrowdStrike's recent identification of 28 North Korean operatives embedded as remote IT workers, part of a broader campaign affecting 320 companies, demonstrates how agentic AI is evolving from concept to practical threat detection. While nearly every vendor at Black Hat 2025 had performance metrics available, either from beta programs in progress or full-production agentic AI deployments, the strongest theme was operational readiness over hype or theoretical claims. CISOs VentureBeat spoke with at Black Hat reported the ability to process significantly more alerts with current staffing levels, with investigation times improving substantially, though specific gains depend on implementation maturity and the complexity of the use case. What's notable is the transition from aspirational roadmaps to real-world outcomes. VentureBeat is also starting to see security teams achieve practical, real efficiency gains that translate to the metrics boards ask about: reducing mean time to investigate (MTTI), improving threat detection rates and better resource utilization. Black Hat 2025 marked an inflection point where the conversation shifted from AI's potential to its measured impact on security operations.
The agentic AI arms race shifts from promises to production
The conversation at Black Hat 2025 was dominated by agentic AI, with many sessions dedicated to how attackers have compromised, or can easily compromise, agents. VentureBeat observed over 100 announcements promoting new agentic AI applications, platforms or services. Vendors are producing use cases and results. That's a welcome change from the many promises made in previous years. There's an urgency to close hype gaps and deliver results. CrowdStrike's Adam Meyers, head of counter adversary operations, articulated what's driving this urgency in an interview with VentureBeat: "Agentic AI really becomes the platform that allows SOC operators to build those automations, whether they're using MCP servers to get access to APIs. We're starting to see more and more organizations leveraging our agentic AI to help them integrate with the Falcon and CrowdStrike systems." VentureBeat believes the scale of the threat demands this response. "When they're moving at that speed, you can't wait," Meyers emphasized, referencing how some adversaries now deploy ransomware in under 24 hours. "You need to have human threat hunters in the loop that are making you know, as soon as the adversary gets access, or as soon as the adversary pops up, they're there, and they're doing hand-to-hand combat with those adversaries." "Last year, we looked at 60 billion hunting leads that result in about 13 million investigations, 27,000 customer escalations and 4,000 emails that we started sending to customers," Meyers revealed, emphasizing the scale at which these systems now operate.
Microsoft Security unveiled significant enhancements to its Security Copilot, introducing autonomous investigation capabilities that can correlate threats across Microsoft Defender, Sentinel and third-party security tools without human intervention. Palo Alto Networks demonstrated Cortex XSOAR's new agentic capabilities, showing how the platform can now autonomously triage alerts, conduct investigations and even execute remediation actions within defined guardrails. Cisco made one of Black Hat's most significant announcements, releasing Foundation-sec-8B-Instruct, the first conversational AI model built exclusively for cybersecurity. This eight-billion-parameter model outperforms much larger general-purpose models, including GPT-4o-mini, on security tasks while running on a single GPU. What sets this release apart is its fully open-source architecture. Foundation-sec-8B-Instruct ships with completely open weights under a permissive license, enabling security teams to deploy it on-premises, in air-gapped environments or at the edge without vendor lock-in. The model is freely available on Hugging Face, accompanied by the Foundation AI Cookbook featuring deployment guides and implementation templates. "Foundation-sec-8B-Instruct is live, open, and ready to defend. Download it, prompt it and help shape the future of AI-powered cybersecurity," states Yaron Singer, VP of AI and Security at Foundation, emphasizing the collaborative potential of this open-source approach. SentinelOne took a different approach, emphasizing Purple AI's ability not just to investigate but to "think ahead," predicting adversary moves based on behavioral patterns and proactively adjusting defenses. CrowdStrike's threat intelligence reveals how adversaries like FAMOUS CHOLLIMA are weaponizing gen AI at every stage of insider threat operations, from creating synthetic identities to managing multiple simultaneous employment positions.
Source: CrowdStrike 2025 Threat Hunting Report
How the North Korean threat changed everything fast
FAMOUS CHOLLIMA operatives infiltrated over 320 companies in the past year. That's a 220% year-over-year increase, representing a fundamental shift in enterprise security threats. "They're using AI through the entire process," Meyers told VentureBeat during an interview. "They're using generative AI to create LinkedIn profiles, to create resumes and then they go into the interview, and they're using deep fake technology to change their appearance. They're using AI to answer questions during the interview process. They're using AI, once they get hired, to build the code and do the work that they're supposed to do." The infrastructure supporting these operations is sophisticated. One Arizona-based facilitator maintained 90 laptops to enable remote access. Operations have expanded beyond the U.S. to France, Canada and Japan as adversaries diversify their targeting. CrowdStrike's July data reveals the scope: 33 FAMOUS CHOLLIMA encounters, with 28 confirmed as malicious insiders who had successfully obtained employment. These are AI-enhanced operators working within organizations, using legitimate credentials, rather than relying on traditional malware attacks that security tools can detect.
Why the human element remains vital
Despite the technological advances, a consistent theme across all vendor presentations was that agentic AI augments rather than replaces human analysts. "Agentic AI, as good as it is, is not going to replace the humans that are in the loop.
You need human threat hunters out there that are able to use their insight and their know-how and their intellect to come up with creative ways to try to find these adversaries," Meyers emphasized. Every major vendor echoed this human-machine collaboration model. Splunk's announcement of Mission Control emphasized how its agentic AI serves as a "force multiplier" for analysts, handling routine tasks while escalating complex decisions to humans. Even the most ardent advocates of automation acknowledged that human oversight remains essential for high-stakes decisions and creative problem-solving.
Competition shifts from features to results
Despite fierce competition in the race to deliver agentic AI solutions for the SOC, Black Hat 2025 ironically showed a more unified approach to cybersecurity than any previous event. Every major vendor emphasized three critical components: reasoning engines that can understand context and make nuanced decisions, action frameworks that enable autonomous response within defined boundaries, and learning systems that continuously improve based on outcomes. Google Cloud Security's Chronicle SOAR exemplified this shift, introducing an agentic mode that automatically investigates alerts by querying multiple data sources, correlating findings and presenting analysts with complete investigation packages. Even traditionally conservative vendors have embraced the transformation, with IBM and others introducing autonomous investigation capabilities to their existing installations. The convergence was apparent: the industry has moved beyond competing on AI presence to competing on operational excellence. The cybersecurity industry is witnessing adversaries leverage GenAI across three primary attack vectors, forcing defenders to adopt equally sophisticated AI-powered defenses.
Source: CrowdStrike 2025 Threat Hunting Report
Many are predicting that AI will become the next insider threat
Looking forward, Black Hat 2025 also highlighted emerging challenges. Meyers delivered perhaps the most sobering prediction of the conference: "AI is going to be the next insider threat. Organizations trust those AIs implicitly. They are using it to do all of these tasks, and the more comfortable they become, the less they're going to check the output." This concern sparked discussions about standardization and governance. The Cloud Security Alliance announced a working group focused on agentic AI security standards, while several vendors committed to collaborative efforts around AI agent interoperability. CrowdStrike's expansion of Falcon Shield to include governance for OpenAI GPT-based agents, combined with Cisco's AI supply chain security initiative with Hugging Face, signals the industry's recognition that securing AI agents themselves is becoming as important as using them for security. The velocity of change is accelerating. "Adversaries are moving incredibly fast," Meyers warned. "Scattered Spider hit retail back in April, they were hitting insurance companies in May, they were hitting aviation in June and July." The ability to iterate and adapt at this speed means organizations can't afford to wait for perfect solutions.
Bottom Line
This year's Black Hat confirmed what many cybersecurity professionals saw coming. AI-driven attacks now threaten their organizations across a widening array of surfaces, many of them unexpected. Human resources and hiring became the threat surface no one saw coming. FAMOUS CHOLLIMA operatives are penetrating every possible U.S. and Western technology company they can, grabbing immediate cash to fuel North Korea's weapons programs while stealing invaluable intellectual property. This creates an entirely new dimension to attacks. Organizations and the security leaders guiding them would do well to remember what hangs in the balance of getting this right: your businesses' core IP, national security, and the trust customers have in the organizations they do business with.
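For teams that want to evaluate the open-weight Cisco model described above, a minimal sketch of loading it from Hugging Face with the standard transformers library might look like the following. The repository id shown (fdtn-ai/Foundation-Sec-8B-Instruct) and the prompt format are assumptions to verify against the model card, and the example prompt is invented.

```python
# Minimal sketch: querying an open-weight security LLM from Hugging Face.
# Assumption: the repo id matches the published model card; verify before use.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "fdtn-ai/Foundation-Sec-8B-Instruct"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype="auto",  # use the dtype stored in the checkpoint
    device_map="auto",   # needs the accelerate package; maps layers onto available GPUs
)

# A security-flavored prompt, e.g. triaging a suspicious process event.
messages = [
    {"role": "user",
     "content": "Classify this event and suggest likely MITRE ATT&CK techniques: "
                "powershell.exe -enc <base64 payload> spawned by winword.exe"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Because the weights are open, the same code runs unchanged on-premises or in an air-gapped environment once the model files are mirrored locally.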
[3]
Cloud breaches and identity hacks explode in CrowdStrike's latest threat report - SiliconANGLE
A new report out today from CrowdStrike Holdings Inc. has revealed a dramatic escalation in adversary sophistication, with cloud-focused attacks, identity-driven intrusions and generative artificial intelligence adoption driving a major shift in the cybersecurity threat landscape. The findings come from the CrowdStrike 2025 Threat Hunting report, based on a year of data through June 30 from the company's OverWatch managed threat hunting operations, threat intelligence team and telemetry across the CrowdStrike Falcon platform. The report is being released to coincide with the annual Black Hat USA 2025 conference this week in Las Vegas. Headlining the report was a finding that interactive intrusions rose 27% year-over-year between July 2024 and June 2025, with a highly surprising 81% of attacks found to be malware-free. CrowdStrike said the shift away from leading with malware signals a move toward stealthier techniques such as credential abuse, lateral movement and defense evasion. Formal adversaries, such as e-crime groups and advanced persistent threat groups, were found to have accounted for 73% of all interactive intrusions. Groups such as Scattered Spider and Curly Spider are running high-volume campaigns across multiple sectors. Cloud environments remained a popular target, with CrowdStrike observing a 136% increase in cloud intrusions in the first half of 2025 alone, compared with all of 2024. The observed threat groups were found to demonstrate advanced tactics such as exploiting misconfigurations, abusing instance metadata services and using cloud control planes for lateral movement and persistent access. One group, Genesis Panda, was found to be using cloud infrastructure to host payloads and exfiltrate data, highlighting the growing sophistication of allegedly state-aligned attackers. The government and telecommunications sectors were also popular targets. The report detailed a 185% spike in government-targeted attacks, largely driven by Russia-linked groups such as Primitive Bear and a 130% jump in telecommunications intrusions. The sectors were found to remain high-value targets due to their access to sensitive data, infrastructure and potential downstream impact. The report also, and not surprisingly, highlights the increasingly strategic use of generative AI by adversaries. The North Korea-linked hacking group Famous Chollima emerged as the most generative AI-proficient actor, conducting more than 320 insider threat operations in the past year. Operatives from the group reportedly used AI tools to craft compelling resumes, generate real-time deepfakes for video interviews and automate technical work across multiple jobs. Scattered Spider, which made headlines in 2024 when one of its key members was arrested in Spain, returned in 2025 with voice phishing and help desk social engineering that bypasses multifactor authentication protections to gain initial access. In one case highlighted in the report, Scattered Spider operatives moved from account compromise to ransomware deployment in just 24 hours, 32% faster than their average in 2024. The group's ability to compromise privileged accounts and pivot across software-as-a-service platforms, identity systems and cloud infrastructure is said in the report to reflect a growing trend of adversaries exploiting cross-domain blind spots.
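The instance metadata abuse described above has a well-known AWS mitigation: requiring the session-token variant of the metadata service (IMDSv2), so that a simple server-side request forgery can no longer mint temporary credentials. A minimal sketch with boto3 follows; the instance id is a placeholder and credentials are assumed to come from the default AWS configuration.

```python
# Minimal sketch: enforce IMDSv2 on an EC2 instance so metadata requests
# must carry a session token, blunting credential theft via SSRF.
import boto3

ec2 = boto3.client("ec2")
ec2.modify_instance_metadata_options(
    InstanceId="i-0123456789abcdef0",  # placeholder instance id
    HttpTokens="required",             # reject tokenless (IMDSv1) requests
    HttpPutResponseHopLimit=1,         # keep tokens from being relayed off-host
    HttpEndpoint="enabled",
)
```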
The report makes a number of recommendations, including advising organizations to implement phishing-resistant MFA, isolate privileged accounts and strengthen help desk protocols to guard against social engineering. Organizations are also advised to implement continuous monitoring, if they haven't done so already, to detect anomalous behavior such as unusual login times, privilege escalations and atypical data access.
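As a toy illustration of that continuous-monitoring advice, the sketch below flags a login whose hour-of-day deviates sharply from a user's baseline. The event schema and the 3-sigma threshold are invented for illustration; production deployments would rely on a SIEM or UEBA product rather than hand-rolled rules.

```python
# Toy sketch of the "unusual login time" check recommended in the report.
# Field names and thresholds are illustrative, not any vendor's schema.
from datetime import datetime
from statistics import mean, stdev

def login_hours(events):
    """Hour-of-day of each successful login event."""
    return [datetime.fromisoformat(e["timestamp"]).hour
            for e in events if e.get("action") == "login_success"]

def is_anomalous_login(history_hours, new_hour, sigmas=3.0):
    """Flag a login whose hour deviates strongly from the user's baseline.
    Ignores midnight wraparound for brevity."""
    if len(history_hours) < 10:  # not enough baseline to judge
        return False
    mu, sd = mean(history_hours), stdev(history_hours)
    return sd > 0 and abs(new_hour - mu) > sigmas * sd

# Example: a user who normally logs in between 8am and 10am appears at 3am.
baseline = [{"action": "login_success",
             "timestamp": f"2025-06-{d:02d}T{8 + d % 3:02d}:15:00"}
            for d in range(1, 21)]
print(is_anomalous_login(login_hours(baseline), new_hour=3))  # True
```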
[4]
Bad code, malicious models and rogue agents: Cybersecurity researchers scramble to prevent AI exploits - SiliconANGLE
When it comes to dealing with artificial intelligence, the cybersecurity industry has officially moved into overdrive. Vulnerabilities in coding tools, malicious injections into models used by some of the largest companies in the world and agents that move across critical infrastructure without security protection have created a whole new threat landscape seemingly overnight. "Who's feeling like they really understand what's going on?" asked Jeff Moss, president of DEF CON Communications Inc. and founder of the Black Hat conference, during his opening keynote remarks on Wednesday. "Nobody. It's because we have a lot of change occurring at the same time. We don't fully know what AI will disrupt yet." While the full scope of the change has yet to become apparent, this year's Black Hat USA gathering in Las Vegas provided plenty of evidence that AI is fueling a whole new class of vulnerabilities. A starting point identified by security researchers is in the code itself, which is increasingly written by autonomous AI agents. "These systems are mimics, they're incredible mimics," cognitive scientist and AI company founder Gary Marcus said during a panel discussion at the conference. "Lots of bad code is going to be written because these systems don't understand secure code." One problem identified by the cybersecurity community is that shortcuts using AI coding tools are being developed without thinking through the security consequences. Researchers from Nvidia Corp. presented findings that an auto-run mode on the AI-powered code editor Cursor allowed agents to run command files on a user's machine without explicit permission. When Nvidia presented this potential vulnerability to Anysphere Inc.'s Cursor in May, the vibe coding company responded by offering users an ability to disable the auto-run feature, according to Becca Lynch, offensive security researcher at Nvidia, who spoke at the conference on Wednesday. Vulnerabilities in the interfaces that support AI, such as coding tools, represent a growing area of concern in the security world. Part of this issue can be found in the sheer number of application programming interface endpoints that are being generated to run AI. Companies with generative AI have at least five times more API endpoints, according to Chuck Herrin, field chief information security officer at F5 Inc. "We're blowing up that attack surface because a world of AI is a world of APIs," said Herrin, who spoke at Black Hat's AI Summit on Tuesday. "There's no securing AI without securing the interfaces that support it." Securing those interfaces may be more difficult than originally imagined. Running AI involves a reliance on vector databases, training frameworks and inference servers, such as those provided by Nvidia. The Nvidia Container Toolkit enables use of the chipmaker's GPUs within Docker containers, including those hosting inference servers. Security researchers from Wiz Inc. presented recent findings of a Nvidia Container Toolkit vulnerability that posed a major threat to managed AI cloud services. Wiz found that the vulnerability allowed attackers to potentially access or manipulate customer data and proprietary models within 37% of cloud environments. Nvidia issued an advisory in July and provided a fix in its latest update. "Any provider of cloud services was vulnerable to our attack," said Hillai Ben Sasson, senior security researcher at Wiz.
"AI security is first and foremost infrastructure security." The expanding use of AI is being driven by adoption of large language models, an area of particular interest to the security community. The sheer volume of model downloads has attracted attention, with Meta Platforms Inc. reporting that its open AI model family, Llama, reached 1 billion downloads in March. Yet despite the popularity of LLMs, security controls for them have not kept pace. "The $300 billion we spend on information security does not protect AI models," Malcolm Harkins, chief security and trust officer at HiddenLayer Inc., said in an interview with SiliconANGLE. "The models are exploitable because there is no mitigation against vulnerability." This threat of exploitation has cast a spotlight on popular repositories where models are stored and downloaded. At last year's Black Hat gathering, researchers presented evidence they had breached three of the largest AI model repositories. This has become an issue of greater concern as enterprises continue to implement AI agents, which rely on LLMs to perform key tasks. "The LLM that drives and controls your agents can potentially be controlled by attackers," Nvidia's Lynch said this week. "LLMs are uniquely vulnerable to adversarial manipulation." Though major repositories have responded to breach vulnerabilities identified and shared by security researchers, there has been little evidence that the model repository platforms are interested in vetting their inventories for malicious code. It's not because the problem is a technological challenge, according to Chris Sestito, co-founder and CEO of HiddenLayer. "I believe you need to embrace the technology that exists," Sestito told SiliconANGLE. "I don't think the lift is that big." If model integrity fails to be protected, this will likely have repercussions for the future of AI agents as well. Agentic AI is booming, yet the lack of security controls around the autonomous software is also beginning to generate concern. Last month, cybersecurity company Coalfire Inc. released a report which documented its success in hacking agentic AI applications. Using adversarial prompts and working with partner standards such as those from the National Institute of Standards and Technology or NIST, the company was able to demonstrate new risks in compromise and data leakage. "There was a success rate of 100%," Apostol Vassilev, research team supervisor at NIST, said during the AI Summit. "Agents are touching the same cyber infrastructure that we've been trying to protect for decades. Make sure you are exposing this technology only to assets and data you are willing to live without." Despite the concerns around agentic AI vulnerability, the security industry is also looking to adopt agents to bolster protection. An example of this can be found at Simbian Inc. which provides fully autonomous AI security operations center agents using toolchains and memory graphs to ingest signals, synthesize insight and make decisions in real time for threat containment. Implementing agents for security has been a challenging problem, as Simbian co-founder and CEO Ambuj Kumar readily admitted. He told SiliconANGLE that his motivation was a need to protect critical infrastructure and keep essential services such as medical care safe. "The agents we are building are inside your organization," Kumar said. "They know where the gold coins are and they secure them." 
Another approach being taken within the cybersecurity industry to safeguard agents is to bake attestation into the autonomous software through certificate chains at the silicon level. Anjuna Security Inc. is pursuing this solution through an approach known as "confidential computing." The concept is to process data through a Trusted Execution Environment, a secure area within the processor where code can be executed safely. This is the path forward for agentic AI, according to Ayal Yogev, co-founder and CEO of Anjuna. His company now has three of the world's top 10 banks in its customer set, joining five next-generation payments firms and the U.S. Navy as clients. "It becomes an identity problem," said Yogev, who spoke with SiliconANGLE in advance of the Black Hat gathering. "If an agent is doing something for me, I need to make sure they don't have permissions beyond what the user has. Confidential computing is the future of computing." For the near term, the future of computing is heavily dependent on the AI juggernaut, and this dynamic is forcing the cybersecurity community to speed up the research process to identify vulnerabilities and pressure platform owners to fix them. During much of the Black Hat conference this week, numerous security practitioners noted that even though the technology may be spinning off new solutions almost daily, the security problems have been seen before. This will involve a measure of discipline and control, a message that notable industry figures such as Chris Inglis, the country's first National Cyber Director and former deputy director of the National Security Agency, have been reinforcing for at least the past two years. In a conversation with SiliconANGLE, the former U.S. Air Force officer and command pilot noted that today's cars are nothing more than controllable computers on wheels. "I do have the ability to tell that car what to do," Inglis said. "We need to fly this airplane." Can the cybersecurity industry regain a measure of control as AI hurtles through the skies? As seen in the sessions and side conversations at Black Hat this week, the security community is trying hard, but there remains a nagging concern that AI itself may prove to be ultimately ungovernable. During the AI Summit on Tuesday, F5's Herrin was asked what the one thing was that should never be done in AI security. "Trust it," Herrin replied.
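Yogev's point that an agent must never hold permissions beyond those of the user it acts for can be made concrete with a small guard around tool calls. The sketch below uses invented names and models the policy in plain Python; it illustrates the pattern only, not Anjuna's confidential-computing product, which enforces agent identity in hardware.

```python
# Illustrative sketch (invented names): cap an AI agent's tool calls at the
# permissions of the human it acts for, denying anything outside that set.
# Real systems would also attest the agent's identity, e.g. in a Trusted
# Execution Environment, before trusting this check.

USER_PERMISSIONS = {
    "alice": {"tickets:read", "tickets:comment"},
    "bob":   {"tickets:read", "tickets:comment", "tickets:close"},
}

class PermissionDenied(Exception):
    pass

class ScopedAgent:
    """Proxy that forwards a tool call only if the delegating user holds the permission."""
    def __init__(self, acting_for: str):
        self.scope = USER_PERMISSIONS.get(acting_for, set())

    def call_tool(self, permission: str, tool, *args, **kwargs):
        if permission not in self.scope:
            raise PermissionDenied(f"agent lacks delegated permission: {permission}")
        return tool(*args, **kwargs)

def close_ticket(ticket_id: str) -> str:
    return f"ticket {ticket_id} closed"

agent = ScopedAgent(acting_for="alice")
try:
    agent.call_tool("tickets:close", close_ticket, "TCK-42")
except PermissionDenied as err:
    print(err)  # alice cannot close tickets, so neither can her agent
```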
[5]
North Korean Hackers Are Using AI to Get Jobs at U.S. Companies and Steal Data
Cyberattacks are getting faster, stealthier, and more sophisticated -- in part because cybercriminals are using generative AI. "We see more threat actors using generative AI as part of their tool chest, and some of those threat actors are using it more effectively than others," says Adam Meyers, head of counter adversary operations at CrowdStrike. The cybersecurity tech company released its 2025 Threat Hunting report on Monday. It detailed, among other findings, that adversaries are weaponizing GenAI to accelerate and scale attacks -- and North Korea has emerged as "the most GenAI-proficient adversary." Within the past 12 months alone, CrowdStrike investigated more than 320 incidents in which operators associated with North Korea fraudulently obtained remote jobs at various companies. That represents a jump of about 220 percent year-over-year. The report suggests operatives used GenAI tools "at every stage of the hiring and employment process" to automate their actions in the job search through the interview process, and eventually to maintain employment. "They use it to create resumes and to create LinkedIn personas that look like attractive candidates that you would want to hire. They use generative AI to answer questions during interviews, and they use deep fake technology as well during those interviews to hide who they are," Meyers says. "Once they get hired, they use that to write code in order to allow them to hold 10, 15, 20 or more jobs at a time." In late July, Arizona woman Christina Chapman was sentenced to eight years in prison for her role in assisting North Korean workers in securing jobs at more than 300 U.S. companies; that generated an estimated $17 million in "illicit revenue," according to the Department of Justice. In late 2023, some 90 laptops were seized from her home. North Korean fraudsters, however, aren't the only threat facing businesses, academic institutions and government agencies. "We're seeing more adversary activity every single day," Meyers says. "There's more and more threat actors engaging in this, and it's not just criminals or hacktivists. We're also seeing more nation states." Although North Korea's attacks may be among the most attention-grabbing, Meyers says "China is probably the number-one threat out there for any Western organization." In the past year, CrowdStrike noted a 40 percent jump in cloud intrusions that it attributed to China-related adversaries. Cloud intrusions overall jumped about 136 percent in the first half of 2025, versus all of the previous year, according to the report. Although the tech industry is the most targeted industry overall, Chinese adversaries substantially ramped up attacks on the telecom sector within the past year, according to the report. "The telecommunications sector is a high-value target for nation-state adversaries, providing access to subscriber and organizational data that supports their intelligence collection and counterintelligence efforts," the report states. As technology becomes more sophisticated, it may seem overwhelming for organizations trying to keep attackers at bay. Meyers counseled individuals on security teams to make use of those very same tools that bad actors are using to fight back. "Generative AI was being used by these threat actors, but it could also be used by the good guys to have more effective defenses," he says. "We have that capability in some of [CrowdStrike's] products, but you can use generative AI to kind of scale up those capabilities within the security team." 
He also recommended organizations be proactive, rather than reactive to threats. "If you wait for bad stuff to show itself, it's going to be too late," he says. "That's probably one of the biggest takeaways, is that you need to have threat hunting." Just over a year ago, a CrowdStrike update precipitated what has since been called one of history's biggest IT failures. A buggy security update caused Windows devices to crash, affecting a broad swathe of companies in banking, health care and aviation, among others. Delta Air Lines was notably affected and is suing CrowdStrike, alleging the outage caused as many as 7,000 flight cancellations and as much as $550 million in lost revenue and other expenses, Reuters reported.
[6]
CrowdStrike Report Warns of GenAI Weaponization and Rising Attacks on AI Agents in 2025
Based on frontline intelligence from CrowdStrike's elite threat hunters and intelligence analysts tracking more than 265 named adversaries, the report reveals:
Adversaries Weaponize AI at Scale: DPRK-nexus adversary FAMOUS CHOLLIMA used GenAI to automate every phase of its insider attack program, from building fake resumes and conducting deepfake interviews to completing technical tasks under false identities. AI-powered adversary tradecraft is transforming traditional insider threats into scalable, persistent operations. Russia-nexus adversary EMBER BEAR used GenAI to amplify pro-Russia narratives, and Iran-nexus adversary CHARMING KITTEN deployed LLM-crafted phishing lures targeting U.S. and EU entities.
[7]
2025 CrowdStrike Threat Hunting Report: Adversaries Weaponize and Target AI at Scale
DPRK-nexus adversaries infiltrate 320+ companies using GenAI-accelerated attacks; threat actors exploit AI agents, exposing autonomous systems as the next enterprise attack surface
Black Hat USA 2025--CrowdStrike (NASDAQ: CRWD) today released the 2025 Threat Hunting Report, highlighting a new phase in modern cyberattacks: adversaries are weaponizing GenAI to scale operations and accelerate attacks - and increasingly targeting the autonomous AI agents reshaping enterprise operations. The report reveals how threat actors are targeting tools used to build AI agents - gaining access, stealing credentials, and deploying malware - a clear sign that autonomous systems and machine identities have become a core part of the enterprise attack surface.
CrowdStrike Threat Hunting Report Highlights
Based on frontline intelligence from CrowdStrike's elite threat hunters and intelligence analysts tracking more than 265 named adversaries, the report details how adversaries are weaponizing and targeting AI at scale. "The AI era has redefined how businesses operate, and how adversaries attack. We're seeing threat actors use GenAI to scale social engineering, accelerate operations, and lower the barrier to entry for hands-on-keyboard intrusions," said Adam Meyers, head of counter adversary operations at CrowdStrike. "At the same time, adversaries are targeting the very AI systems organizations are deploying. Every AI agent is a superhuman identity: autonomous, fast, and deeply integrated, making them high-value targets. Adversaries are treating these agents like infrastructure, attacking them the same way they target SaaS platforms, cloud consoles, and privileged accounts. Securing the AI that powers business is where the cyber battleground is evolving."
About CrowdStrike: CrowdStrike (NASDAQ: CRWD), a global cybersecurity leader, has redefined modern security with the world's most advanced cloud-native platform for protecting critical areas of enterprise risk - endpoints and cloud workloads, identity and data. Powered by the CrowdStrike Security Cloud and world-class AI, the CrowdStrike Falcon® platform leverages real-time indicators of attack, threat intelligence, evolving adversary tradecraft and enriched telemetry from across the enterprise to deliver hyper-accurate detections, automated protection and remediation, elite threat hunting and prioritized observability of vulnerabilities. Purpose-built in the cloud with a single lightweight-agent architecture, the Falcon platform delivers rapid and scalable deployment, superior protection and performance, reduced complexity and immediate time-to-value.
CrowdStrike's 2025 Threat Hunting report reveals a significant increase in AI-powered cyberattacks, with North Korean hackers emerging as the most proficient in leveraging generative AI for malicious activities.
The cybersecurity landscape is undergoing a dramatic transformation as artificial intelligence (AI) becomes increasingly weaponized by threat actors. CrowdStrike's 2025 Threat Hunting report, released at Black Hat USA 2025, reveals a significant escalation in adversary sophistication, with AI-powered attacks driving a major shift in the threat landscape [1][3].
Source: TechRadar
North Korean hackers have emerged as the most proficient users of generative AI for malicious purposes. The group known as FAMOUS CHOLLIMA has conducted over 320 insider threat operations in the past year, representing a 220% year-over-year increase [2][5]. These operatives are leveraging AI tools throughout their attack lifecycle, from creating synthetic identities and resumes to generating real-time deepfakes for video interviews and automating technical work across multiple fraudulent job positions [2][5].
The integration of AI into cyberattacks has led to faster, more aggressive, and more successful intrusions. Key findings from the CrowdStrike report include:
Threat groups like Scattered Spider have demonstrated the ability to move from account compromise to ransomware deployment in just 24 hours, a 32% improvement from their 2024 average [2][3].
CrowdStrike's Adam Meyers, head of counter adversary operations, expressed particular concern about agentic AI systems, which have become a "core part of the enterprise attack surface" [1]. These autonomous AI agents are being targeted by hackers who exploit vulnerabilities in the tools used to build them, potentially gaining access to credentials and deploying malware [1][4].
Source: SiliconANGLE
The cybersecurity industry is scrambling to address these new AI-driven threats. At Black Hat 2025, several companies unveiled enhanced AI-powered security solutions, including autonomous investigation capabilities in Microsoft's Security Copilot, agentic triage and remediation in Palo Alto Networks' Cortex XSOAR, Cisco's open-source Foundation-sec-8B-Instruct model, and predictive defenses in SentinelOne's Purple AI [2].
To mitigate these evolving threats, organizations are advised to implement phishing-resistant multi-factor authentication, isolate privileged accounts, strengthen help desk protocols, and deploy continuous monitoring to detect anomalous behavior [3][5].
Source: DIGITAL TERMINAL
As AI continues to reshape the cyber threat landscape, both attackers and defenders are leveraging its capabilities. While AI presents new challenges, it also offers opportunities for enhanced defense mechanisms. Cybersecurity professionals are urged to embrace AI technologies to scale up their defensive capabilities and adopt proactive threat hunting approaches [5].
The rapid evolution of AI-powered threats underscores the need for ongoing vigilance, adaptation, and collaboration within the cybersecurity community to stay ahead of increasingly sophisticated adversaries.