Curated by THEOUTPOST
On Fri, 28 Feb, 8:03 AM UTC
7 Sources
[1]
Microsoft Is Suing People Who Did Bad Things With Its AI
Microsoft just modified a lawsuit to name four multinational developers who allegedly bypassed safety guardrails and abused Microsoft's AI tools to generate deepfaked celebrity porn and other harmful content. The tech giant announced the update in a blog post yesterday, saying that all four developers are members of Storm-2139, a cybercrime network.

Being alleged cybercriminals, the named defendants go by nicknames that sound straight out of an early-2000s hacker flick: there's Arian Yadegarnia aka "Fiz" of Iran; Alan Krysiak aka "Drago" of the United Kingdom; Ricky Yuen aka "cg-dot" of Hong Kong; and Phát Phùng Tấn aka "Asakuri" of Vietnam.

In the post, Microsoft breaks the individuals making up Storm-2139 into three tiers: "creators, providers, and users," who together comprise a dark marketplace hinging on the jailbreaking and modification of Microsoft's AI tools to create unlawful or destructive material. "Creators developed the illicit tools that enabled the abuse of AI-generated services," reads the post, adding that the "providers then modified and supplied these tools to end users often with varying tiers of service and payment." "Finally," it continues, "users then used these tools to generate violating synthetic content, often centered around celebrities and sexual imagery."

The civil suit was initially filed in December, albeit with all specific defendants listed simply as "John Doe." Now, though, in light of new evidence revealed in Microsoft's investigation into Storm-2139, it's choosing to unmask some of the alleged bad actors embroiled in litigation -- others are still unnamed per ongoing investigations, according to the tech giant, though it says that at least two are American -- citing future deterrence as motivation for doing so.
"We are pursuing this legal action now against identified defendants," Microsoft declared in the post, "to stop their conduct, to continue to dismantle their illicit operation, and to deter others intent on weaponizing our AI technology."

It's a fascinating show of force by the behemoth that is Microsoft, which understandably doesn't want bad actors abusing its generative AI tools to create obviously terrible content, like nonconsensual fake porn of real people. After all, as far as deterrents go, finding yourself in the legal crosshairs of one of the world's wealthiest and most powerful organizations is pretty high up there.

To that end, according to Microsoft, the legal pressure has already worked to divide Storm-2139. The "seizure" of the group's website and "subsequent unsealing of the legal filings in January generated an immediate reaction from actors, in some cases causing group members to turn on and point fingers at one another," the company said.

That said, as Gizmodo notes, the decision by Microsoft to throw its heavy legal weight against alleged abusers of its tech also lands in a bit of a gray area in the ongoing debate over AI safety and how companies should seek to limit AI misuse. Some companies, like Meta, have chosen to make their frontier AI models open-source -- a more decentralized approach to AI development, though one that some experts argue could allow bad actors to quietly harness advanced AI technology outside public view or oversight. (The AI industry currently pretty much regulates itself, so the concept of "oversight" should generally be taken with a grain of salt, though companies like Meta, Microsoft, and Google do still have to answer to the court of public opinion.) Microsoft, for its part, has embraced more of a mixed approach, building some models in public and keeping others closed off from public view.
Regardless of the tech giant's vast resources and stated commitments to safe and responsible AI, though, criminals have still allegedly found ways to crack through its guardrails and profit from ill use. And as Microsoft, like others, continues down its all-in-on-AI road, it can't exactly count on litigation alone to quell harmful exploitation of its AI tools -- especially in such a deregulated environment, where the law itself is still catching up to the complexities of AI harm and abuse. "While Microsoft and others have established systems designed to prevent misuse of generative AI," writes Axios' Ina Fried, "those protections only work when the technological and legal systems can effectively enforce them."
[2]
Microsoft Names Developers It Sued for Abusing Its AI Tools
Microsoft amended a lawsuit filed last year to name the four defendants whom it alleges misused its AI models to create celebrity deepfakes.

Microsoft is trying to show its commitment to AI safety by amending a lawsuit filed last year to unmask the four developers it alleges evaded guardrails on its AI tools in order to generate celebrity deepfakes. The company filed the lawsuit back in December, and a court order allowing Microsoft to seize a website associated with the operation helped it identify the individuals. The four developers are reportedly part of a global cybercrime network called Storm-2139: Arian Yadegarnia aka "Fiz" of Iran; Alan Krysiak aka "Drago" of the United Kingdom; Ricky Yuen aka "cg-dot" of Hong Kong; and Phát Phùng Tấn aka "Asakuri" of Vietnam. Microsoft says there are others it has identified as involved in the scheme, but does not want to name them yet so as not to interfere with an ongoing investigation.

The group, according to Microsoft, compromised accounts with access to its generative AI tools and managed to "jailbreak" them in order to create whatever types of images they desired. The group then sold access to others, who used it to create deepfake nudes of celebrities, among other abuses. After filing the lawsuit and seizing the group's website, Microsoft said the defendants went into panic mode. "The seizure of this website and subsequent unsealing of the legal filings in January generated an immediate reaction from actors, in some cases causing group members to turn on and point fingers at one another," it said on its blog.

Celebrities, including Taylor Swift, have been frequent targets of deepfake pornography, which takes a real person's face and convincingly superimposes it on a nude body. Back in January 2024, Microsoft had to update its text-to-image models after fake images of Swift appeared across the web.
Generative AI makes it incredibly easy to create the images with little technical ability, which has already led to an epidemic of deepfake scandals at high schools across the U.S. Recent stories from victims of deepfakes illustrate that creating the images is not victimless simply because it occurs digitally; it translates into real-world harm, making targets feel anxious, afraid, and violated, knowing someone out there is obsessed enough with them to do it.

There has been an ongoing debate in the AI community regarding the topic of safety and whether the concerns are real or rather intended to help major players like OpenAI gain influence and sell their products by over-hyping the true power of generative artificial intelligence. One camp has argued that keeping AI models closed-source can help prevent the worst abuses by limiting users' ability to turn off safety controls; those in the open-source camp believe making models free to modify and improve upon is necessary to accelerate the sector, and that it is possible to address abuse without hindering innovation.

Either way, it all feels like somewhat of a distraction from the more immediate threat, which is that AI has been filling the web with inaccurate information and slop content. While a lot of fears about AI feel overblown and hypothetical in nature, and it seems unlikely that generative AI is anywhere near good enough to take on agency of its own, AI's misuse to create deepfakes is real. Legal means are one way in which those abuses can be addressed today.

There have already been a slew of arrests across the U.S. of individuals who have used AI to generate deepfakes of minors, and the NO FAKES Act introduced in Congress last year would make it a crime to generate images based on someone's likeness. The United Kingdom already penalizes the distribution of deepfake porn, and soon it will also be a crime to even produce it.
Australia recently criminalized the creation and sharing of non-consensual deepfakes.
[3]
Microsoft names cybercriminals behind AI deepfake network
Microsoft has named multiple threat actors who are part of a cybercrime gang accused of developing malicious tools capable of bypassing generative AI guardrails to generate celebrity deepfakes and other illicit content. An updated complaint identifies the individuals as Arian Yadegarnia from Iran (aka 'Fiz'), Alan Krysiak of the United Kingdom (aka 'Drago'), Ricky Yuen from Hong Kong, China (aka 'cg-dot'), and Phát Phùng Tấn of Vietnam (aka 'Asakuri'). As the company explained today, these threat actors are key members of a global cybercrime gang that it tracks as Storm-2139.

"Members of Storm-2139 exploited exposed customer credentials scraped from public sources to unlawfully access accounts with certain generative AI services," said Steven Masada, Assistant General Counsel at Microsoft's Digital Crimes Unit. "They then altered the capabilities of these services and resold access to other malicious actors, providing detailed instructions on how to generate harmful and illicit content, including non-consensual intimate images of celebrities and other sexually explicit content."

Microsoft found during the investigation that the Storm-2139 crime network is organized into three categories: creators, providers, and users. Creators developed the tools that facilitated the misuse of AI-generated services, while providers adapted and distributed these illicit tools to end users, who employed them to generate content that violated Microsoft's Acceptable Use Policy and Code of Conduct, frequently centered on sexual imagery and celebrities.

Today's update follows the company's lawsuit filed in the Eastern District of Virginia in December 2024 to collect more information on the cybercrime ring's operations. A temporary restraining order and preliminary injunction issued after the initial filing allowed Microsoft to disrupt the group's ability to use its services illegally by seizing a key website that was part of the criminal ring's infrastructure.
Microsoft added that the seizure caused Storm-2139 members to turn on each other and speculate about who the "John Does" in the filings were. Microsoft's legal team also received multiple emails, including from several suspected members of Storm-2139 who blamed others in the operation for the malicious activity. "We are pursuing this legal action now against identified defendants to stop their conduct, to continue to dismantle their illicit operation, and to deter others intent on weaponizing our AI technology," Masada added today. "While we have identified two actors located in the United States -- specifically, in Illinois and Florida -- those identities remain undisclosed to avoid interfering with potential criminal investigations. Microsoft is preparing criminal referrals to United States and foreign law enforcement representatives."
[4]
Microsoft names alleged 'Azure Abuse Enterprise' operators
Crew helped lowlifes generate X-rated celeb deepfakes using Redmond's OpenAI-powered cloud - claim

Microsoft has named four of the ten people it is suing for allegedly snatching Azure cloud credentials and developing tools to bypass safety guardrails in its generative AI services - ultimately to generate deepfake smut videos of celebrities and others. Redmond filed a civil lawsuit in Virginia in December 2024 against the so-called "Azure Abuse Enterprise" crew. At the time, none of the accused were named.

It is alleged the gang used API keys accidentally leaked from "multiple" Microsoft customers to improperly access the IT giant's Azure OpenAI service. The crew then allegedly resold access to this cloud service to other miscreants, and offered detailed instructions and tools to help their clients use Redmond's generative AI to produce the aforementioned harmful and sexually explicit material.

Upon filing the US federal-level lawsuit, Microsoft also obtained a court order allowing it to seize web domains used by the operation. The software giant said the seizures would help it "gather crucial evidence about the individuals behind these operations, to decipher how these services are monetized, and to disrupt additional technical infrastructure we find."

That effort appears to have worked, as Microsoft on Thursday this week filed an amended legal complaint [PDF] that names four of the ten accused: Arian Yadegarnia aka "Fiz" of Iran; Alan Krysiak aka "Drago" of the United Kingdom; Ricky Yuen aka "cg-dot" of Hong Kong; and Phát Phùng Tấn aka "Asakuri" of Vietnam. Yadegarnia's identity, according to court filings [PDF], was at least partially disclosed in a January 11 4chan post in which an anonymous user discussed the real name of "Fiz."
While the Windows giant has only named four of the alleged crooks, it claims to have identified more of them, including two located in the United States. "Those identities remain undisclosed to avoid interfering with potential criminal investigations," wrote Steven Masada, assistant general counsel for Microsoft's Digital Crimes Unit. However, Microsoft's court filings state a suspect who lives in Illinois goes by the moniker "Khanon" and created software for running a reverse proxy service used to operate the Azure Abuse Enterprise. "Microsoft is preparing criminal referrals to United States and foreign law enforcement representatives," Masada added.

The four named defendants are allegedly part of a gang that Microsoft otherwise tracks as Storm-2139. The organization is made up of three types of individuals: Creators, who develop illicit AI generation tools; providers, who modify and supply the tools to end users; and then the end users, who use the software to generate content that violated Microsoft's policies, much of it centered around celebrities and sexual images. The other yet-to-be-named criminals live in the US, UK, Austria, Turkey, and Russia. The lawsuit also alleges additional end users reside in Argentina, Paraguay, and Denmark, and "appear to have used the Azure Abuse Enterprise's technology and services to generate content that is not specifically in violation of Microsoft's terms of use." In other words: They knowingly gained unauthorized access to Microsoft's AI tools and used these services without paying for them, but didn't use them to create harmful content, it is claimed.

While monitoring 4chan and other communications platforms used by Storm-2139 helped Microsoft finger some of the suspected crooks, the company also saw members of the notorious site post personal information about some of Microsoft's attorneys, it is claimed.
That doxxing effort may have backfired, as Masada wrote that after Microsoft lawyers' details were published online, they "received a variety of emails, including several from suspected members of Storm-2139 attempting to cast blame on other members of the operation." The Windows giant is seeking court orders banning the misuse of its services, damages, and more. ®
[5]
Microsoft links AI celebrity deepfake scheme to hackers
Why it matters: While Microsoft and others have established systems designed to prevent misuse of generative AI, those protections only work when the technological and legal systems can effectively enforce them. Driving the news: Microsoft named four developers it says are part of the Storm-2139 global cybercrime network: Arian Yadegarnia aka "Fiz" of Iran; Alan Krysiak aka "Drago" of the United Kingdom; Ricky Yuen aka "cg-dot" of Hong Kong and Phát Phùng Tấn aka "Asakuri" of Vietnam. What they're saying: "We are pursuing this legal action now against identified defendants to stop their conduct, to continue to dismantle their illicit operation, and to deter others intent on weaponizing our AI technology," Microsoft said in a blog post on Thursday. The intrigue: Microsoft said the four named defendants aren't the only participants in the scheme it has identified. Between the lines: Microsoft said a court order allowing the company to seize a "website instrumental to the criminal operation," helped both disrupt the scheme and uncover its participants. Yes, but: Microsoft said it also led to the "doxxing" of its lawyers, including the posting of names, personal information, and in some instances, their photographs.
[6]
Microsoft Exposes LLMjacking Cybercriminals Behind Azure AI Abuse Scheme
Microsoft on Thursday unmasked four of the individuals that it said were behind an Azure Abuse Enterprise scheme that involves leveraging unauthorized access to generative artificial intelligence (GenAI) services in order to produce offensive and harmful content. The campaign, called LLMjacking, has targeted various AI offerings, including Microsoft's Azure OpenAI Service. The tech giant is tracking the cybercrime network as Storm-2139. The individuals named are Arian Yadegarnia aka "Fiz" of Iran; Alan Krysiak aka "Drago" of the United Kingdom; Ricky Yuen aka "cg-dot" of Hong Kong; and Phát Phùng Tấn aka "Asakuri" of Vietnam.

"Members of Storm-2139 exploited exposed customer credentials scraped from public sources to unlawfully access accounts with certain generative AI services," Steven Masada, assistant general counsel for Microsoft's Digital Crimes Unit (DCU), said. "They then altered the capabilities of these services and resold access to other malicious actors, providing detailed instructions on how to generate harmful and illicit content, including non-consensual intimate images of celebrities and other sexually explicit content." The malicious activity is explicitly carried out with an intent to bypass the safety guardrails of generative AI systems, Redmond added.

The amended complaint comes a little over a month after Microsoft said it's pursuing legal action against the threat actors for engaging in systematic API key theft from several customers, including several U.S. companies, and then monetizing that access to other actors. It also obtained a court order to seize a website ("aitism[.]net") that is believed to have been a crucial part of the group's criminal operation.

Storm-2139 consists of three broad categories of people: Creators, who developed the illicit tools that enable the abuse of AI services; Providers, who modify and supply these tools to customers at various price points; and end users, who utilize them to generate synthetic content that violates Microsoft's Acceptable Use Policy and Code of Conduct.
Microsoft said it also identified two more actors located in the United States, who are based in the states of Illinois and Florida. Their identities have been withheld to avoid interfering with potential criminal investigations. The other unnamed co-conspirators, providers, and end users are located in the United States, the United Kingdom, Austria, Turkey, and Russia, with additional end users reportedly in Argentina, Paraguay, and Denmark. "Going after malicious actors requires persistence and ongoing vigilance," Masada said. "By unmasking these individuals and shining a light on their malicious activities, Microsoft aims to set a precedent in the fight against AI technology misuse."
[7]
Microsoft Outs Hackers Behind Tools to Bypass Generative AI Guardrails
Microsoft Corp. said it has identified US and overseas-based criminal hackers who bypassed guardrails on generative artificial intelligence tools -- including the company's Azure OpenAI services -- to generate harmful content, including non-consensual intimate images of celebrities and other sexually explicit content. The hackers scraped customer logins from public sources and used them to access generative AI services, including Azure OpenAI, the Microsoft cloud product that lets customers use OpenAI's models, according to the company. The hackers then changed the capabilities of the AI products and sold access to other malicious groups, providing them with instructions on how to create harmful content.

The hackers identified by Microsoft are based in Iran, the UK, Hong Kong, and Vietnam. They are allegedly part of a global cybercrime network that Microsoft calls Storm-2139. Two other members are located in Florida and Illinois, but Microsoft said it isn't naming them to avoid derailing criminal investigations. The software maker said it's preparing criminal referrals to US and foreign law enforcement.

The action comes as the increasing popularity of generative AI tools fosters concerns about their misuse to generate faked illicit images of public figures and regular individuals, as well as to create child sexual abuse material. Companies like Microsoft and OpenAI ban such behavior and take technological steps to block it, but malicious groups can still try to gain unauthorized access.
Microsoft has identified four key members of a global cybercrime network who allegedly bypassed AI safety measures to create and distribute harmful content, including celebrity deepfakes.
In a significant development in the fight against AI misuse, Microsoft has amended a lawsuit to name four individuals allegedly involved in a global cybercrime network known as Storm-2139. The tech giant accuses these developers of bypassing safety guardrails on its AI tools to generate harmful content, including celebrity deepfakes [1].
The named defendants are: Arian Yadegarnia aka "Fiz" of Iran; Alan Krysiak aka "Drago" of the United Kingdom; Ricky Yuen aka "cg-dot" of Hong Kong; and Phát Phùng Tấn aka "Asakuri" of Vietnam.
Microsoft's investigation revealed that Storm-2139 operates with a three-tiered structure of creators, who developed the illicit tools; providers, who modified and supplied them to end users; and users, who generated the violating synthetic content [2].
The network allegedly exploited exposed customer credentials to access Microsoft's generative AI services, then altered their capabilities to create and sell access for generating illicit content [3].
Microsoft's lawsuit, filed in December 2024 in the Eastern District of Virginia, initially listed the defendants as "John Does." The recent amendment naming specific individuals marks a significant escalation in the company's efforts to combat AI misuse [4].
The legal action has already had notable effects: the court-ordered seizure of a website central to the operation disrupted the group's activities, and the unsealing of the legal filings in January reportedly caused members to turn on and blame one another.
This case highlights the ongoing challenges in ensuring the responsible use of AI technologies. While companies like Microsoft implement safety measures, determined actors can still find ways to circumvent these protections [5].
The incident has reignited debates within the AI community about the best approaches to AI safety: one camp argues that keeping models closed-source limits users' ability to disable safety controls, while open-source advocates hold that freely modifiable models accelerate the field and that abuse can be addressed without hindering innovation.
Microsoft has indicated that its investigations are ongoing, with at least two additional suspects located in the United States. The company is preparing criminal referrals to both U.S. and foreign law enforcement agencies, signaling a multi-pronged approach to combating AI abuse [3].
As the legal proceedings unfold, this case is likely to set important precedents for how tech companies and legal systems address the misuse of AI technologies in an increasingly AI-driven world.