Microsoft Unmasks Cybercriminals Behind AI Deepfake Network in Landmark Lawsuit

Curated by THEOUTPOST

On Fri, 28 Feb, 8:03 AM UTC

7 Sources

Microsoft has identified four key members of a global cybercrime network who allegedly bypassed AI safety measures to create and distribute harmful content, including celebrity deepfakes.

Microsoft Identifies Key Players in AI Abuse Scheme

In a significant development in the fight against AI misuse, Microsoft has amended a lawsuit to name four individuals allegedly involved in a global cybercrime network known as Storm-2139. The tech giant accuses these developers of bypassing safety guardrails on its AI tools to generate harmful content, including celebrity deepfakes [1].

The named defendants are:

  • Arian Yadegarnia aka "Fiz" from Iran
  • Alan Krysiak aka "Drago" from the United Kingdom
  • Ricky Yuen aka "cg-dot" from Hong Kong
  • Phát Phùng Tấn aka "Asakuri" from Vietnam

The Storm-2139 Network Structure

Microsoft's investigation revealed that Storm-2139 operates with a three-tiered structure [2]:

  1. Creators: Developed tools to bypass AI safeguards
  2. Providers: Modified and distributed these tools
  3. Users: Employed the tools to generate violating content

The network allegedly exploited exposed customer credentials to access Microsoft's generative AI services, then altered their capabilities to create and sell access for generating illicit content [3].

Legal Action and Its Impact

Microsoft's lawsuit, filed in December 2024 in the Eastern District of Virginia, initially listed the defendants as "John Does." The recent amendment naming specific individuals marks a significant escalation in the company's efforts to combat AI misuse [4].

The legal action has already had notable effects:

  1. Disruption of operations: A court order allowed Microsoft to seize a key website, hampering the network's activities.
  2. Internal conflict: The legal pressure reportedly caused Storm-2139 members to turn on each other.
  3. Deterrence: Microsoft aims to discourage others from similar activities through this high-profile case.

Broader Implications for AI Safety

This case highlights the ongoing challenges in ensuring the responsible use of AI technologies. While companies like Microsoft implement safety measures, determined actors can still find ways to circumvent these protections [5].

The incident has reignited debates within the AI community about the best approaches to AI safety:

  1. Closed vs. Open-source models: Some argue that keeping models closed can prevent abuse, while others believe open-source development is crucial for innovation and security.
  2. Legal frameworks: The case underscores the need for robust legal mechanisms to address AI-related crimes, with some countries already implementing laws against deepfake creation and distribution.

Ongoing Investigations and Future Actions

Microsoft has indicated that its investigations are ongoing, with at least two additional suspects located in the United States. The company is preparing criminal referrals to both U.S. and foreign law enforcement agencies, signaling a multi-pronged approach to combating AI abuse [3].

As the legal proceedings unfold, this case is likely to set important precedents for how tech companies and legal systems address the misuse of AI technologies in an increasingly AI-driven world.

Continue Reading
Microsoft Takes Legal Action Against Cybercriminals Exploiting Azure AI Services

Microsoft has filed a lawsuit against a group of cybercriminals who developed tools to bypass AI safety measures and generate harmful content using Azure OpenAI services.

7 Sources

AI Image Generator's Exposed Database Reveals Disturbing Misuse and Ethical Concerns

A South Korean AI image generation company's exposed database reveals the creation of explicit and illegal content, raising serious concerns about AI misuse and the need for stricter regulations.

2 Sources

Microsoft Launches Tool to Combat Deepfake Porn in Bing Search Results

Microsoft introduces a new tool to help victims remove non-consensual intimate images, including AI-generated deepfakes, from Bing search results. This initiative aims to protect individuals from online exploitation and harassment.

2 Sources

Microsoft Unveils New AI Features to Enhance Trust, Security, and Privacy

Microsoft introduces new AI features aimed at addressing hallucinations, improving security, and enhancing privacy in AI systems, with the goal of making AI applications more trustworthy and reliable.

2 Sources

Microsoft's AI Red Team Reveals Critical Insights on Generative AI Security Challenges

Microsoft's AI Red Team, after probing over 100 generative AI products, highlights the amplification of existing security risks and the emergence of new challenges in AI systems. The team emphasizes the ongoing nature of AI security work and the crucial role of human expertise in addressing these evolving threats.

4 Sources

© 2025 TheOutpost.AI All rights reserved