2 Sources
[1]
78% of CISOs see AI attacks already
AI attacks are keeping most practitioners up at night, says Darktrace, and with good reason.

Sponsored feature

From the written word through to gunpowder and email, whenever an enabling technology comes along, you can be sure someone will be ready to use it for evil. Most tech is dual-use, and AI is no exception. On one side are people using it to find powerful new medicines. On the other, automatically generated phishing emails.

The same is true in enterprise security. Cyber criminals are using AI to produce faster, more sophisticated attacks. And because cybersecurity is a zero-sum game where only one side wins, security leaders must respond with equally adaptive, AI-augmented defenses to stay ahead of the risks.

In its State of AI Cybersecurity 2025 report, AI cybersecurity vendor Darktrace asked 1,500 cybersecurity IT professionals around the world how worried they are about AI risk. A full 74 percent see it posing a challenge to their organizations already, and around nine in ten practitioners expect that impact to persist in the medium to long term. Generative AI is fanning the flames, particularly in social engineering attacks: in 2023, as ChatGPT gained traction, novel social engineering attacks targeting users of Darktrace's AI-based email protection system grew 135 percent.

It isn't clear exactly how CISOs know that the tide of AI attacks is rising. AI algorithms don't announce themselves, after all. But there has been a lot of media attention on the issue. Stories of attackers using jail-broken or fine-tuned LLMs to craft social engineering attacks are rife, and some attack toolkits now come with their own chat assistants. The use of AI-powered malware, along with lateral movement tactics driven by these algorithms, is also reportedly on the rise.

Looking for AI attacks is a little like searching for black holes: you can't see them directly, but you can infer their existence from their effect on the surrounding environment. "You might face increasing sophistication in phishing attempts or in attacks that are targeting you, or in the types of malware that you're reading about or seeing yourself," says Hanah-Marie Darley, director of security and AI strategy at Darktrace. "Quite often, it will be very difficult, apart from that increase in sophistication, to say with certainty whether AI was involved."

What we do know is that intelligence agencies are worried enough to warn about AI-driven attacks. At the RSA conference this year, the FBI said that China is using AI to hone its attack chains.

While AI-powered attackers are shooting to score, many security pros are still lacing up their boots. This year, 45 percent of survey participants said they don't feel prepared for what's coming. While that's down from 60 percent last year, it's still not great, and only 17 percent feel very prepared.

Cybersecurity skills were a point of some contention in the Darktrace report. It found that the biggest barrier to preparing for a cybersecurity AI-mageddon was a lack of personnel: there just aren't enough people to manage the torrent of alerts produced by the average organization's cybersecurity tooling. Over seven in ten of the organizations surveyed reported that they have at least one unfilled cybersecurity position.

But don't worry. We can just throw more wet-behind-the-ears graduates at the security operations center (SOC) to solve the problem. Right? Well, there's the rub; according to the data, companies aren't even trying.
Hiring more staff was the survey base's lowest priority for the next 12 months, at just 11 percent. Darley also believes that cybersecurity roles tend to chew through a lot of people because they're so intense. "If you're not in an incident, you're looking for one," she says. "In psychology terms, we would call that a state of polycrisis. So you're in back-to-back crises, which means that you're almost always in a stress state." Companies might be finding it so difficult to hire and retain the right people that they've thrown in the towel.

Regardless of why businesses aren't investing in staff, this failure to cross the skills chasm leaves a gap. It's one that they believe AI-powered cybersecurity solutions can fill. A full 95 percent of respondents believe AI can improve the speed and efficiency of their cyber defenses, and 88 percent are already seeing significant time savings from AI solutions of one type or another.

This doesn't mean that companies don't have reservations about AI. The kind of data that AI solutions analyze is sensitive, which is why 82 percent of respondents were intent on AI solutions that do not require external data sharing. That reflects increasing concerns about model training leaks, AI governance, and compliance with regulations like GDPR and the EU AI Act.

Organizations might know what they want, but this doesn't mean that they understand it entirely. The Darktrace research found that only 42 percent of respondents know exactly what types of AI are used in their cybersecurity stack. To some extent, it's understandable that they just want AI to produce the result without knowing all about it. After all, you don't necessarily need to know how a car engine works to get to the office. However, you do want the right engine for the job. A V8 isn't the right choice for a commute through London's crowded streets, for example. In the same way, understanding the different AI types and the tasks they suit helps ensure you use them in the right way for defense. Perhaps that's why just 73 percent feel confident in their team's ability to use AI-powered tools effectively.

Unfortunately, many respondents to the survey overestimate generative AI's role in cybersecurity, possibly because they conflate its transformer-based LLMs with more classic types of AI. Almost two thirds believe their cybersecurity tools use only or mostly generative AI, though this probably isn't true. It's an understandable confusion, because both use neural networks. However, the underlying mechanics and the capabilities of these approaches differ.

Organizations might not always understand that they don't want generative AI, but they definitely know what results they're looking for from AI. They're fed up with tools that only react to cybersecurity threats after the fact, with 88 percent stating that AI helps them to adopt a more preventative defense stance. Another common ask is to replace point solutions with integrated cybersecurity platforms; 89 percent prefer the latter. A lack of interoperability often leaves point solutions pieced together with chewing gum and sticky tape. SOC staff might exchange data between them manually or frantically throw together scripts to try and automate things. That's not the scenario you want as an incident response team battling a fast-moving attack.

The lack of awareness around the precise mechanics of AI security technologies and the drive to integrate security solutions have something important in common: the need for simplicity.
Businesses don't need to know exactly how something works, and they don't need to see how point products bolt together either. They really just want something that protects them as simply and effectively as possible. "The best AI solutions are really understanding the problem that you're trying to solve and then choosing the right technique," says Darley. "That doesn't always mean adding more complexity." She describes the ideal solution as multi-layered, using a range of techniques and AI models to counter a series of discrete threats. That combination offers ubiquitous protection.

This is Darktrace's unique selling proposition. The Darktrace ActiveAI Security Platform uses a mixture of supervised, unsupervised, and statistical machine learning models, integrated into its Self-Learning AI engine. The engine detects potential threats while also looking for weaknesses in cybersecurity controls before attackers can exploit them. For example, a recently introduced firewall rule analysis feature helps seal loopholes to stave off intruders.

The Darktrace platform correlates and investigates security incidents across multiple environments and applications, ranging from cloud computing instances through to email systems, networks, endpoints, and operational systems. The various AI models enable it to excel at novel threat detection in ways that more traditional solutions can't, by spotting not just telltale signatures or known suspicious behaviors but also deviations from baseline norms. The latter could indicate legitimate threats that haven't yet been seen in the wild.

The multi-layered AI system also enables Darktrace to react to these threats with a level of automation set by the user. Those who want complete control can rely on Darktrace's Cyber AI Analyst capability to triage alerts and focus only on the meaningful ones while providing valuable context for human analysts. Those who want a more hands-off approach can switch on autonomous security functions that enable things like automatic quarantining.

The Darktrace report paints a picture of security professionals in a game of blind man's bluff: they are blindfolded, unable to see exactly which attacks are AI-powered or where they are coming from, but painfully aware that attackers are lurking just out of view, waiting to strike. As the threat actors become more adept at using this technology, defenders must move quickly to match their pace and harden their defenses.
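Darktrace does not publish the internals of its models, but the baseline-deviation idea described above can be illustrated with a minimal, hypothetical sketch: fit an unsupervised model on features drawn from normal activity, then flag new observations that fall far outside the learned norm. The feature names, numbers, and model choice below are invented for the example.

```python
# Minimal sketch of baseline-deviation anomaly detection; not Darktrace's
# actual models. Fit on "normal" activity, then score new observations.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-host features: [bytes_out_mb, distinct_dest_ports, failed_logins]
baseline = rng.normal(loc=[50.0, 5.0, 1.0], scale=[10.0, 2.0, 1.0], size=(500, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# One observation that resembles the baseline, one that deviates sharply
new_obs = np.array([
    [55.0, 6.0, 0.0],     # looks like normal activity
    [900.0, 60.0, 25.0],  # exfil-sized transfer, port sweep, login failures
])
scores = model.decision_function(new_obs)  # lower score = more anomalous
flags = model.predict(new_obs)             # -1 marks an outlier

for obs, score, flag in zip(new_obs, scores, flags):
    label = "ANOMALY" if flag == -1 else "normal"
    print(f"{obs} -> score={score:.3f} {label}")
```

The point is the shape of the approach rather than the specific model: anything that scores how far new behavior sits from a learned baseline can surface threats that no signature yet describes.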
[2]
Overcoming the adoption fear: have you put your trust in the machine?
The relationship between cybersecurity and machine learning (ML) began with an ambitious yet simple idea: harness everything algorithms have to offer and use it to identify patterns in vast datasets. Prior to this, traditional threat detection relied heavily on signature-based techniques, effectively digital fingerprints of known threats. These methods, while helpful against familiar malware, struggled to keep pace with the increasingly sophisticated tactics of cybercriminals and with zero-day attacks. This created a gap, which led to a wave of interest in using ML to identify anomalies, recognize patterns indicative of malicious behavior, and essentially predict attacks before they could fully wreak havoc.

Some of the earliest successful applications of ML in the space included anomaly-based intrusion detection systems (IDS) and spam detection. These early iterations relied heavily on supervised learning, where historical data, both malicious and benign, was fed to algorithms to help them differentiate between the two. Over time, ML-powered applications grew to incorporate unsupervised learning and even reinforcement learning to adapt to the changing nature of the threats they faced.

Recently, the conversation has shifted to the introduction of large language models (LLMs) like GPT-4. These models excel at summarizing reports, synthesizing large volumes of information, and generating natural language content. In the cybersecurity industry, they've been used to generate executive summaries and parse threat intelligence feeds, both of which require handling vast amounts of data and presenting it in an easy-to-understand form.

In line with this, we've seen the concept of a "copilot for security" surface: a tool intended to assist security analysts the way a coding copilot helps a developer. The AI-powered copilot would act as a virtual Security Operations Center (SOC) analyst. Ideally, it would not just handle vast amounts of data and present it in a comprehensible way but also sift through alerts, contextualize incidents, and even propose follow-up actions.

However, the ambition has fallen short. Whilst they show promise in specific workflows, LLMs have yet to deliver an indispensable and transformative use case for SOC teams. Undoubtedly, cybersecurity is intrinsically contextual and complex. Analysts piece together fragmented information, understand the broader implications of a threat, and make decisions that require a nuanced understanding of their organization, all under immense pressure. These copilots can neither replace the expertise of a seasoned analyst nor effectively address the glaring pain points that analysts face, because they lack the situational awareness and deep understanding needed to make critical decisions. Rather than serving as a dependable virtual analyst, these tools have often become a "solution looking for a problem," adding yet another layer of technology that analysts need to understand and manage without delivering equal value.

As it stands, current implementations of AI are struggling to get into their groove. But if businesses are going to properly support their SOC analysts, how do we bridge this gap? The answer could lie in the development of agentic AI: systems capable of taking proactive, independent actions, helping to combine automation and autonomy. Its introduction will help transform AI from a passive, handy assistant to a crucial member of the SOC team.
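As a concrete illustration of that early supervised approach, the sketch below labels a handful of known benign and malicious messages and trains a classifier to separate them. The messages, labels, and model choice are invented for the example and stand in for the much larger historical datasets the article describes.

```python
# Toy illustration of supervised learning for spam/phishing detection.
# The training data here is made up; real systems use large labeled corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_messages = [
    "Your invoice is attached, please review",            # benign
    "Team lunch moved to 1pm on Friday",                   # benign
    "URGENT: verify your password at this link right now", # phishing
    "You have won a prize, click here to claim",           # phishing
]
train_labels = [0, 0, 1, 1]  # 0 = benign, 1 = malicious

# Turn text into TF-IDF features, then learn a decision boundary over them
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_messages, train_labels)

print(clf.predict(["Please verify your account password here immediately"]))
```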
By potentially allowing AI-driven entities to actively defend systems, engage in threat hunting, and adjust to novel threats without the constant need for human direction, agentic AI offers a promising step forward for defensive cybersecurity. For example, instead of waiting for an analyst to issue commands, agentic AI could act on its own: isolating a compromised endpoint, rerouting network traffic, or even engaging in deception techniques to mislead attackers.

Despite this potential, organizations have often been slow to adopt autonomous security technology that can act on its own. And this uncertainty may be well founded. Nobody wants to stop a senior executive from using their laptop based on a false alert, or cause an outage in production. However, with the relationship between ML and cybersecurity set to continue developing, businesses mustn't be deterred. Attackers don't have this barrier to overcome. Without a second thought, they will use AI to disrupt, steal from, and extort their selected targets.

This year, organizations will likely face the bleakest threat landscape to date, driven by the malicious use of AI. Consequently, the only way for businesses to combat this will be to join the AI arms race, using agentic AI to back up overwhelmed SOC teams. This can be accomplished through autonomous, proactive actions, which can enable organizations to actively defend systems, engage in threat hunting, and adapt to unique threats without requiring human intervention.
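What such an autonomous action might look like is easiest to see in a sketch. The following is purely conceptual; the function names, thresholds, and alert fields are hypothetical. It shows a policy that only acts on its own when confidence is high and the blast radius is small, deferring to a human otherwise, which mirrors the false-positive concern raised above.

```python
# Conceptual sketch of an autonomous containment policy, not a real product's API.
# All names, thresholds, and the Alert shape are hypothetical.
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    score: float          # 0.0-1.0 anomaly score from an upstream detector
    asset_critical: bool  # e.g. production server or executive laptop

def isolate_endpoint(host: str) -> None:
    print(f"[action] isolating {host} from the network")

def request_human_approval(alert: Alert) -> None:
    print(f"[queue] {alert.host} sent for analyst review (score={alert.score:.2f})")

def respond(alert: Alert) -> None:
    # Act autonomously only on high-confidence alerts against low-impact assets
    if alert.score >= 0.9 and not alert.asset_critical:
        isolate_endpoint(alert.host)
    else:
        request_human_approval(alert)

respond(Alert(host="laptop-0231", score=0.95, asset_critical=False))
respond(Alert(host="prod-db-01", score=0.97, asset_critical=True))
```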
As AI-driven cyber attacks increase, cybersecurity professionals are turning to advanced AI solutions, including agentic AI, to combat threats and support overwhelmed security teams.
In the rapidly evolving landscape of cybersecurity, artificial intelligence (AI) has emerged as a double-edged sword. While it offers powerful tools for defense, it also enables more sophisticated attacks. According to Darktrace's State of AI Cybersecurity 2025 report, 78% of Chief Information Security Officers (CISOs) are already witnessing AI-driven attacks, with 74% of cybersecurity professionals considering AI a current challenge to their organizations [1].
The rise of generative AI has particularly fueled concerns, with novel social engineering attacks targeting users of AI-based email protection systems growing by 135% in 2023 [1]. This surge in AI-powered threats has caught the attention of intelligence agencies, with the FBI warning about China's use of AI to refine its attack strategies [1].
Despite the increasing threat, many organizations find themselves ill-prepared to face AI-driven attacks. The Darktrace report reveals that 45% of survey participants don't feel ready for what's coming, while only 17% feel very prepared [1]. This lack of preparedness is largely attributed to a shortage of cybersecurity personnel, with over 70% of surveyed organizations reporting at least one unfilled cybersecurity position [1].
Interestingly, hiring more staff is not a priority for most organizations, with only 11% planning to do so in the next 12 months [1]. Instead, companies are turning to AI-powered cybersecurity solutions to bridge this gap. A staggering 95% of respondents believe AI can improve the speed and efficiency of their cyber defenses, with 88% already experiencing significant time savings from AI solutions [1].
As traditional AI solutions struggle to fully address the complex needs of Security Operations Center (SOC) teams, a new concept is emerging: agentic AI. This advanced form of AI aims to combine automation and autonomy, transforming AI from a passive assistant to an active member of the SOC team [2].
Agentic AI has the potential to take proactive, independent actions in defending systems, engaging in threat hunting, and adapting to novel threats without constant human direction. For instance, it could autonomously isolate compromised endpoints, reroute network traffic, or employ deception techniques to mislead attackers [2].
Despite the potential benefits, organizations have been slow to adopt autonomous security technology due to concerns about false positives and potential disruptions [2]. However, as the threat landscape continues to evolve, with attackers readily embracing AI for malicious purposes, businesses are urged to overcome these adoption fears.
Experts argue that joining the "AI arms race" in cybersecurity is crucial for organizations to effectively combat the increasing sophistication of AI-driven threats [2]. By leveraging agentic AI and autonomous proactive actions, businesses can empower their overwhelmed SOC teams and adapt to unique threats without constant human intervention.
As the relationship between machine learning and cybersecurity continues to develop, the industry is likely to see further advancements in AI-powered defense systems. While current implementations of AI, including large language models (LLMs), have shown promise in specific workflows like summarizing reports and parsing threat intelligence feeds, they have yet to deliver a truly transformative solution for SOC teams [2].
The future of AI in cybersecurity lies in developing more contextually aware and autonomous systems that can effectively address the complex, nuanced challenges faced by security analysts. As organizations navigate this evolving landscape, striking a balance between leveraging AI's potential and maintaining human expertise will be crucial in building robust, adaptive cybersecurity defenses.
Summarized by Navi