2 Sources
[1]
AI-powered crypto hacks drain $600M from DeFi as North Korea exploits surge
The two hacks came a little over two weeks apart. On 1 April, attackers drained roughly $285 million from Drift Protocol, a Solana-based derivatives exchange, after spending months posing as a quantitative trading firm to trick employees into authorising malicious transactions. On 18 April, a separate group exploited a single-verifier flaw in Kelp DAO's cross-chain bridge and extracted approximately $292 million in wrapped ether. Between them, the heists netted almost $600 million, and, according to blockchain forensics firm TRM Labs, accounted for 76% of all crypto hack losses in 2026 so far. Both attacks are widely attributed to North Korea-linked groups, according to Bloomberg.

What most alarmed cybersecurity researchers, however, was not the scale but the method. TRM investigator Nick Carlsen, a former FBI analyst who specialises in North Korean crypto crime, said the sophistication of the April heists makes it highly likely the attackers used artificial intelligence to select targets and design exploits. "This is all stuff North Korea never used to do," he said.

The Drift hack was devastating for the platform itself. The attackers manufactured a fictitious token, built an inflated trading record to make it appear legitimate, and used it as collateral to drain real assets in roughly 12 minutes. Drift's total value locked collapsed from $550 million to under $300 million within an hour. The exchange shut down and is now planning to relaunch after securing a roughly $148 million rescue package led by stablecoin issuer Tether. A smaller DeFi project called Carrot, which had routed user funds through Drift-integrated vaults, announced on 30 April that it was shuttering entirely.

The Kelp DAO hack was worse in a different way. Rather than selling the stolen funds immediately, the attackers deposited roughly $200 million of the proceeds as collateral on Aave, the largest decentralised lending protocol.
That triggered a crisis of confidence: depositors, fearing the collateral backing Aave might be worthless, pulled roughly $9 billion from the platform in two days. Total value locked across all DeFi lending protocols dropped by more than $13 billion in 48 hours. Aave ended up needing a rescue of its own.

The episode illustrated a structural vulnerability that distinguishes decentralised finance from traditional banking. Transactions over blockchains cannot be reversed. There is no central authority to freeze suspicious transfers before they settle. And the interconnected nature of DeFi protocols, where one platform's collateral is another's liability, means a single exploit can cascade through an ecosystem of roughly $130 billion in locked assets.

Determining whether hackers used AI is not an exact science. Investigators draw conclusions based on the sophistication of an attack, the methods employed, and the speed with which targets were identified. More than half a dozen cybersecurity researchers interviewed by Bloomberg said the abrupt rise in DeFi exploits (April saw a record 28 to 30 incidents, almost doubling the previous high) is itself a clear indicator that attackers are deploying widely available AI models.

"With AI, the cost of vulnerability detection is trending to zero," said Aneirin Flynn, chief executive of security audit firm Failsafe. The time it takes for hackers to identify a weakness in a blockchain protocol has been compressed from months to days or even hours, he said.

Anthropic's own research supports the premise. In December, the company published a study showing that more than half of blockchain exploits carried out in 2025 "could have been done autonomously" using AI agents. What the researchers called "potential exploit revenue" had been doubling every 1.3 months, and the average cost of scanning a smart contract for vulnerabilities had fallen to $1.22.
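To put the study's growth figure in perspective, a quantity that doubles every 1.3 months compounds to roughly a 600-fold increase over a year. A quick back-of-envelope check (the arithmetic is illustrative only, not part of the study):

```python
# Back-of-envelope compounding: doubling every 1.3 months means
# 12 / 1.3 ≈ 9.2 doublings per year, i.e. a growth factor of 2 ** (12 / 1.3).
doubling_period_months = 1.3
doublings_per_year = 12 / doubling_period_months
annual_factor = 2 ** doublings_per_year
print(f"{annual_factor:.0f}x per year")  # on the order of 600x
```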
A separate test by engineers at a16z, the largest crypto venture capital firm, found that an AI trained on past DeFi hacks "always found the vulnerability" in a given protocol, though it could not yet fully design a profitable exploit without human assistance.

Hanging over the industry is Anthropic's Mythos, the AI model the company has withheld from wide release because of its cybersecurity capabilities. In testing, Mythos autonomously discovered thousands of previously unknown zero-day vulnerabilities across every major operating system and web browser, including a flaw in OpenBSD that had gone undetected for 27 years. Anthropic chose to limit access to a handful of major technology companies and banks through what it calls Project Glasswing, rather than releasing the model publicly.

There is no evidence that the April hackers had access to Mythos. But the model's existence underscores a broader anxiety: if existing, publicly available AI tools are already capable of accelerating crypto heists to this degree, what happens when more powerful models, whether Mythos or its successors, inevitably leak or are replicated? In November, Anthropic disclosed that attackers had manipulated its Claude model to target roughly 30 entities including technology companies, financial institutions, and government agencies, succeeding in a small number of cases. In April, reports emerged that unauthorised users had gained access to the restricted Mythos model itself.

The urgency to respond is mounting. Failsafe's Flynn said several clients are installing software that continuously scans devices connected to a network and alerts managers to suspicious patterns. Yuan Han Li, a partner at crypto venture firm Blockchain Capital, has called for circuit breakers that would pause or limit transactions beyond a certain threshold. Jupiter, a Solana-based trading venue, is rolling out a similar mechanism more widely.
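The circuit-breaker idea can be sketched in a few lines: cap total outflow within a rolling time window, and halt everything once the cap is breached. The class, threshold, and window below are hypothetical, not Jupiter's or any protocol's actual mechanism:

```python
# Minimal sketch of a transaction circuit breaker: withdrawals are summed
# over a rolling window, and any withdrawal that would push the total past
# the limit trips the breaker, blocking all further outflows until reset.
import time
from collections import deque

class CircuitBreaker:
    def __init__(self, limit, window_seconds):
        self.limit = limit            # max outflow allowed per window
        self.window = window_seconds  # rolling window length in seconds
        self.events = deque()         # (timestamp, amount) pairs
        self.tripped = False

    def _prune(self, now):
        # Drop withdrawals that have aged out of the rolling window.
        while self.events and now - self.events[0][0] > self.window:
            self.events.popleft()

    def allow(self, amount, now=None):
        """Return True if the withdrawal may proceed, False if blocked."""
        now = time.monotonic() if now is None else now
        if self.tripped:
            return False
        self._prune(now)
        outflow = sum(a for _, a in self.events) + amount
        if outflow > self.limit:
            self.tripped = True       # halt until operators intervene
            return False
        self.events.append((now, amount))
        return True

breaker = CircuitBreaker(limit=1_000_000, window_seconds=3600)
print(breaker.allow(400_000, now=0.0))   # True: within the hourly limit
print(breaker.allow(500_000, now=10.0))  # True: cumulative 900k, still under
print(breaker.allow(200_000, now=20.0))  # False: would exceed 1M, trips
print(breaker.allow(1, now=30.0))        # False: breaker stays tripped
```

Once tripped, the breaker rejects everything until someone resets it; a real deployment would add per-account limits and an explicit operator-controlled reset path.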
Aave is expanding its risk framework for collateral to include cybersecurity factors, according to its chief legal and policy officer, Linda Jeng.

But TRM's Carlsen argues that purely defensive measures are ultimately insufficient against state-backed attackers armed with AI. "You don't win this kind of campaign playing defense," he said. The only viable response, in his view, is to turn the hackers' own methods against them and pursue the stolen funds aggressively. "They need to be hacked."

The crypto industry has lost billions to exploits over the past several years, and North Korea's share of global hack losses has risen from below 10% in 2020 to 76% through April 2026, according to TRM Labs. The Drift and Kelp DAO heists suggest the threat is not plateauing. It is accelerating, and the defenders are still catching up.
[2]
AI Cyber Threats Shake Crypto Industry | PYMNTS.com
Following the hacks, which netted the attackers a total of almost $600 million, Drift shut down and plans to relaunch after receiving stablecoins from Tether; a decentralized finance (DeFi) project called Carrot that had exposure to Drift shut down permanently; and lending protocol Aave, which was used to launder proceeds from one of the hacks, needed a rescue after investors pulled $9 billion, according to the report.

What has alarmed the industry most about these hacks is that they were likely supported by AI, the report said. While that cannot be proven, cybersecurity experts said in the report that the attacks had become so much more sophisticated, so quickly, that the hackers behind them were probably helped by AI. Beyond that, there is the looming presence of Anthropic's Mythos AI model, which the company has kept in limited release due to the cybersecurity risks it poses, as well as the likelihood that hackers will obtain other powerful AI models. Cybersecurity experts said in the report that AI can help hackers find weaknesses in a blockchain protocol in days or hours, rather than months, and can give anyone the skills of an elite hacker.

Crypto firms' responses to the threat of AI include adding software that scans devices connected to a network to detect potential threats; installing circuit breakers that pause or limit transactions above a certain threshold; and, for DeFi lenders, expanding the risk framework for collateral to include cybersecurity factors, per the report.
In an update Drift provided in April while the attack on its crypto exchange was underway, the company said: "This was a highly sophisticated operation that appears to have involved multi-week preparation and staged execution, including the use of double nonce accounts to pre-sign transactions that delayed execution."

April reporting on the Kelp DAO hack noted that it highlighted the risks of interconnected systems in DeFi: the failure of one piece can threaten the entire structure.
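The single-verifier weakness exploited in the bridge hack can be illustrated in a few lines. This is a hypothetical sketch, not Kelp DAO's actual contract logic: with a quorum of one, a single compromised signing key is enough to release funds, whereas a majority quorum forces an attacker to compromise several independent verifiers.

```python
# Illustrative bridge release check: approve a withdrawal only if enough
# distinct, known verifiers have signed off. All names are hypothetical.
def can_release(approvals: set, verifier_set: set, quorum: int) -> bool:
    """Return True if at least `quorum` recognised verifiers approved."""
    valid = approvals & verifier_set  # ignore signatures from unknown keys
    return len(valid) >= quorum

verifiers = {"v1", "v2", "v3", "v4", "v5"}

# Single-verifier design: quorum of 1, so one stolen key drains the bridge.
print(can_release({"v3"}, verifiers, quorum=1))              # True

# Majority quorum: the same single compromised key is rejected.
print(can_release({"v3"}, verifiers, quorum=3))              # False
print(can_release({"v1", "v2", "v3"}, verifiers, quorum=3))  # True
```

Requiring a quorum of independent verifiers is the standard hardening for cross-chain bridges precisely because it turns a one-key compromise into a multi-party one.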