2 Sources
2 Sources
[1]
Deepfaked calls hit 44% of businesses in last year: Gartner
A survey of cybersecurity bosses has shown that 62 percent reported attacks on their staff using AI over the last year, either by the use of prompt injection attacks or faking out their systems using phony audio or video generated by AI. The most common attack vector is deepfake audio calls against staff, with 44 percent of businesses reporting at least one instance of this happening, six percent of which resulted in business interruption, financial loss, or intellectual property loss. Those loss rates drop to two percent when an audio screening service is used. For video deepfakes, the figure was slightly lower, 36 percent, but still five percent of those also caused a serious problem. The problem is that deepfake audio is getting too convincing and cheap, Chester Wisniewski, global field CISO of security biz Sophos, told The Register. "With audio you can kind of generate these calls in real time at this point," he said. "If it was your spouse they could tell, but if it's just a co-worker you talk to occasionally, you pretty much can do that in real time now without much pause, which is a real challenge." He believes the audio deepfake figures could be underestimating the problem, but said the results were higher than he expected for video. Doing a real-time video fake of a specific individual is incredibly expensive, he said, running into millions of dollars in costs. However, Sophos has seen cases in which a scammer briefly runs a CEO or CFO video deepfake on a WhatsApp call before claiming connectivity issues, deleting the video feed, and moving to text communication to carry on a social-engineering attack. More common are generic video fakes used to conceal a person's identity, not steal it. North Korea, for example, is earning millions by farming out its staff to Western companies using AI fakery, and they can be very convincing, even for professionals. The other type of AI-generated attack on the rise is the prompt-injection attack, where attackers embed malicious instructions into content that an AI system processes, tricking it into revealing sensitive information or misusing connected tools, potentially leading to code execution if integrations allow it. According to the Gartner survey, 32 percent of respondents said they'd had prompt injection attacks against their applications. We've already seen Google's Gemini chatbot being used to target individual users' email and even "smart" home systems. Anthropic's Claude has also had prompt injection problems, and ChatGPT has been shown by researchers to be tricked into solving CAPTCHAs that are supposed to spot a machine, rather than a human operator, or into behavior that could be abused to generate denial-of-service-style traffic against websites. ®
[2]
Watch out - even small businesses are now facing threats from deepfake attacks
Prompt injection is also giving criminals access to sensitive company information Gartner says even small businesses are facing a spike in cybercrime, and AI could be to blame - more than three-fifths (62%) of organizations reporting AI-driven attacks in the past year. The firm's study found three in five (62%, again) experienced deepfake attacks, with 44% experiencing deepfake audio attacks, making this the most common attack vector compared with video deepfakes (36%). Prompt-injection attacks against AI tools (32%) and attacks on enterprise generative AI application infrastructure (29%) were also noted, showing how AI isn't just being used to strengthen crime, but it's also serving as a useful vulnerability for many criminals. "As adoption accelerates, attacks leveraging GenAI for phishing, deepfakes and social engineering have become mainstream, while other threats - such as attacks on GenAI application infrastructure and prompt-based manipulations - are emerging and gaining traction," Gartner VP Analyst Akif Khan explained. The report details how rapid AI development has seen deepfakes go from complex to instant, with audio deepfakes now being generated in real time to make them highly convincing and personalized. Although real-time, person-specific deepfakes remain very expensive, only time stands between limited use and widespread use. On the field, cybersecurity firms and analysts are seeing deepfakes being used as an initial attack vector, before attackers revert to simpler and cheaper methods. For example, scammers sometimes fake a CEO on a call before switching to text-only social engineering methods. When it comes to exploiting companies' AI systems, attackers are frequently observed tricking systems into revealing sensitive information or abusing integrations to execute code by giving malicious prompts. Looking ahead, companies of all sizes - not just multinational enterprises - are being advised to up their game, with the zero-trust approach emerging as a firm favorite to block out unwarranted activity.
Share
Share
Copy Link
A Gartner survey reveals that 62% of organizations faced AI-driven attacks in the past year, with deepfake audio emerging as the top threat. Small businesses are increasingly targeted, and prompt injection attacks are on the rise, highlighting the need for enhanced cybersecurity measures across all sectors.
In a startling revelation, a recent Gartner survey has uncovered that 62% of organizations reported AI-driven attacks in the past year, with deepfake audio emerging as the most prevalent threat vector
1
2
. The study found that 44% of businesses experienced at least one instance of deepfake audio calls, while 36% encountered video deepfakes1
2
.Chester Wisniewski, global field CISO of Sophos, explained the growing concern: "With audio, you can kind of generate these calls in real time at this point. If it was your spouse they could tell, but if it's just a co-worker you talk to occasionally, you pretty much can do that in real time now without much pause, which is a real challenge."
1
The impact of these attacks is significant, with 6% of audio deepfake incidents resulting in business interruption, financial loss, or intellectual property theft
1
. However, the use of audio screening services can reduce this loss rate to 2%, highlighting the importance of implementing protective measures1
.While less common than audio deepfakes, video-based attacks are still a significant threat. Sophos has observed instances where scammers briefly use CEO or CFO video deepfakes on messaging platforms before claiming connectivity issues and switching to text communication to continue their social engineering attacks
1
.Beyond deepfakes, prompt injection attacks are emerging as a serious concern. The Gartner survey revealed that 32% of respondents experienced prompt injection attacks against their applications
1
2
. These attacks involve embedding malicious instructions into content processed by AI systems, potentially leading to the disclosure of sensitive information or misuse of connected tools1
.Related Stories
Contrary to popular belief, AI-driven attacks are not limited to large corporations. Gartner's study emphasizes that even small businesses are now facing threats from deepfake attacks and other AI-powered cybercrime tactics
2
. This shift in the threat landscape underscores the need for organizations of all sizes to enhance their cybersecurity measures.As AI technology continues to advance rapidly, the sophistication and accessibility of deepfake tools are expected to increase. While person-specific, real-time video deepfakes remain expensive to produce, experts anticipate that this barrier will diminish over time
2
. The cybersecurity community is observing a trend where attackers use deepfakes as an initial vector before transitioning to simpler, cost-effective methods2
.In light of these emerging threats, cybersecurity experts recommend that companies of all sizes strengthen their defenses. The zero-trust approach is gaining popularity as an effective strategy to block unauthorized activity
2
. Additionally, implementing audio screening services and staying informed about the latest AI-driven attack methods can help organizations mitigate risks and protect their assets in an increasingly complex threat landscape.Summarized by
Navi
[1]