Google warns North Korean hackers use AI-generated deepfakes to steal $2.02 billion in crypto

Reviewed by Nidhi Govil


Google's Mandiant security team has exposed a sophisticated North Korean malware campaign that uses AI-generated deepfakes and spoofed Zoom calls to target cryptocurrency companies. The campaign is attributed to UNC1069, also known as CryptoCore, and comes as North Korean hackers stole $2.02 billion in crypto in 2024, a 51% increase from the previous year. These AI-enabled social engineering attacks exploit digital trust through fake video meetings with deepfaked CEOs, marking a dangerous evolution in cybercrime.

Google Exposes Sophisticated AI-Powered Attacks on Crypto Industry

Google's security team at Mandiant has issued an urgent warning about North Korean malware campaigns that leverage AI-generated deepfakes to target cryptocurrency companies and DeFi platforms [1]. The threat actor, identified as UNC1069 or CryptoCore, has evolved its tactics to include AI-enabled social engineering that exploits trust in routine digital interactions [1]. According to the Monday report, these attacks represent a significant shift from mass phishing campaigns to highly tailored operations targeting cryptocurrency companies, venture capital firms, and their executives [1].

Source: Decrypt

Record-Breaking Cryptocurrency Theft Through Fewer, Smarter Attacks

The scale of North Korean malware operations has reached alarming levels. Blockchain analytics firm Chainalysis reported that North Korean hackers stole $2.02 billion in cryptocurrency in 2024, representing a 51% increase from the previous year [1]. The total amount stolen by DPRK-linked actors now stands at roughly $6.75 billion, even as the number of attacks has declined [1]. The pattern shows North Korea extracting larger sums through fewer, more targeted incidents that bypass traditional cybersecurity defenses.
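As a quick sanity check on those figures, the reported 51% year-over-year jump implies a prior-year total of roughly $1.34 billion. The short Python calculation below makes the arithmetic explicit, under the assumption (not stated outright in the cited excerpts) that the 51% increase is measured against the previous year's total.

```python
# Back-of-the-envelope check of the figures quoted above.
# Assumption: the 51% increase is measured against the prior-year total,
# which is implied but not stated explicitly in the excerpts cited here.

stolen_2024_busd = 2.02   # USD billions stolen in 2024 (Chainalysis, per the report)
yoy_increase = 0.51       # 51% year-over-year increase
cumulative_busd = 6.75    # rough cumulative total attributed to DPRK-linked actors

implied_prior_year = stolen_2024_busd / (1 + yoy_increase)
share_of_total = stolen_2024_busd / cumulative_busd

print(f"Implied prior-year total: ${implied_prior_year:.2f}B")              # ~$1.34B
print(f"2024 share of cumulative DPRK-linked theft: {share_of_total:.0%}")  # ~30%
```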

How Deepfaked CEOs and Spoofed Zoom Calls Enable Malware Infection

Mandiant's investigation into a recent fintech company intrusion revealed the sophisticated mechanics of these AI-powered attacks [1]. The attack begins when victims are contacted on Telegram by what appears to be a known cryptocurrency executive whose account has already been compromised [1]. After building rapport, the attacker sends a Calendly link directing victims to spoofed Zoom calls hosted on the group's own infrastructure [1]. During these fake video meetings, victims encounter deepfake video of well-known crypto CEOs [1][2].
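One detail worth underlining in that chain is the meeting link itself: the spoofed calls are hosted on the group's own infrastructure rather than on Zoom's. As a purely illustrative sketch, not something described in Mandiant's report, the Python snippet below shows how an invite URL could be screened against an allowlist of expected meeting domains before anyone joins; the domain list and function name are assumptions made for the example.

```python
from urllib.parse import urlparse

# Illustrative allowlist of hosts a legitimate invite would be expected to use.
# In practice an organization would maintain and enforce its own list.
EXPECTED_MEETING_DOMAINS = {"zoom.us", "calendly.com"}

def is_expected_meeting_host(invite_url: str) -> bool:
    """Return True only if the invite's host is (a subdomain of) an expected domain."""
    host = (urlparse(invite_url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in EXPECTED_MEETING_DOMAINS)

# A lookalike host on attacker-controlled infrastructure fails the check.
print(is_expected_meeting_host("https://us02web.zoom.us/j/1234567890"))            # True
print(is_expected_meeting_host("https://zoom-us-meeting.support-call.io/j/1234"))  # False
```

A check like this cannot stop a compromised contact from reaching out, but it flags the moment a routine conversation moves off legitimate infrastructure.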

Once the meeting begins, attackers claim audio problems and instruct victims to run a malicious troubleshooting program using the ClickFix technique [1][2]. This triggers malware infection, with forensic analysis identifying seven distinct malware families designed to harvest user credentials, browser data, and session tokens for financial theft and future impersonation [1][2].

Source: PC Gamer

Digital Trust Exploitation Becomes the New Attack Vector

Fraser Edwards, co-founder and CEO of decentralized identity firm cheqd, explained that these attacks target professionals whose jobs depend on remote meetings and rapid coordination [1]. "The effectiveness of this approach comes from how little has to look unusual. The sender is familiar. The meeting format is routine. There is no malware attachment or obvious exploit. Trust is leveraged before any technical defence has a chance to intervene," Edwards said [1]. This digital trust exploitation represents a fundamental shift in cybercrime tactics, where social engineering bypasses traditional security measures.

AI Tools Enable Scalable Impersonation Attacks

Google's report reveals that UNC1069, active since 2018, has been using Gemini to develop code for cryptocurrency theft and to craft fraudulent instructions that impersonate software updates in order to extract user credentials [2]. The AI tool was also employed "to develop tooling, conduct operational research, and assist during the reconnaissance stages" [2]. Gemini is not the only AI tool being weaponized: cybersecurity company Kaspersky claims the hacking group BlueNoroff is using GPT-4o to enhance the images it uses to convince targets [2].

Edwards warned that AI is now being used beyond live calls to draft messages, correct tone of voice, and mirror normal communication patterns, making routine messages harder to question [1]. He added that the risk will escalate as AI agents are introduced into everyday communication: "Agents can send messages, schedule calls, and act on behalf of users at machine speed. If those systems are abused or compromised, deepfake audio or video can be deployed automatically, turning impersonation from a manual effort into a scalable process" [1].

What Targeted Cryptocurrency Companies Should Watch For

Mandiant observed that these attacks serve a dual purpose: enabling immediate cryptocurrency theft while fueling future social engineering campaigns by leveraging victims' identity and data [2]. This creates a compounding threat where each successful breach enables more convincing subsequent attacks. Edwards emphasized that expecting users to spot deepfakes is "unrealistic," stating: "The answer is not asking users to pay closer attention, but building systems that protect them by default. That means improving how authenticity is signalled and verified, so users can quickly understand whether content is real, synthetic, or unverified without relying on instinct, familiarity, or manual investigation" [1]. As AI-powered attacks grow more sophisticated, the crypto industry faces an urgent need to implement verification systems that can counter scalable impersonation attacks before digital trust becomes an insurmountable vulnerability in DeFi and broader cybersecurity infrastructure.
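What "verified by default" can look like in practice varies, but the underlying primitive is usually a cryptographic signature that software checks automatically instead of asking a person to judge authenticity by eye. The sketch below is a generic illustration using Ed25519 signatures from the widely used `cryptography` Python package; it is not a description of cheqd's products or of any system named in the report, and key distribution is deliberately glossed over.

```python
# Generic illustration: machine-checkable authenticity for a message, so a
# client can label content "verified" or "unverified" automatically.
# Simplified sketch only, not a production identity system.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The sender's key pair; in a real deployment the public key would be
# published and bound to the sender's identity (e.g. via a directory or DID).
sender_key = Ed25519PrivateKey.generate()
sender_public = sender_key.public_key()

message = b"Joining the 3pm call? https://us02web.zoom.us/j/1234567890"
signature = sender_key.sign(message)

def label(msg: bytes, sig: bytes) -> str:
    """Return a trust label the client UI could surface to the user."""
    try:
        sender_public.verify(sig, msg)
        return "verified"
    except InvalidSignature:
        return "unverified"

print(label(message, signature))                            # verified
print(label(b"Run this quick audio fix: ...", signature))   # unverified (tampered or forged)
```

The hard part in any real deployment is binding that public key to a person or organization and distributing it safely, which is the layer that decentralized identity systems of the kind Edwards describes aim to standardize.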
