Curated by THEOUTPOST
On Sat, 11 Jan, 8:03 AM UTC
7 Sources
[1]
Microsoft sues cybercriminal operation that developed tools to bypass AI safety guardrails - SiliconANGLE
Microsoft Corp.'s Digital Crimes Unit has taken legal action to disrupt a cybercriminal operation that developed tools specifically designed to bypass the safety guardrails of generative artificial intelligence services. The complaint, filed in the Eastern District of Virginia in December, claims that the unnamed cybercriminals violated U.S. law and the Acceptable Use Policy and Code of Conduct for Microsoft services.

The complaint alleges that "Does 1-10 Operating an Azure Abuse Network" breached laws including the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, and the Racketeer Influenced and Corrupt Organizations Act, and committed trespass to chattels and tortious interference under Virginia state law. Microsoft alleges that the defendants used stolen customer credentials and custom software to bypass security measures and, in doing so, generated harmful content through Microsoft's platform.

The defendants are also alleged to have used tools such as de3u and a reverse proxy service to manipulate Microsoft's generative AI systems. De3u is a client-side tool designed to facilitate the generation of AI-created images using DALL-E 3, an image-generating AI model developed by OpenAI that is also accessible through Microsoft. Having gained access to Microsoft AI services, the defendants are then alleged to have resold that access to other malicious actors with detailed instructions on how to use these custom tools to generate harmful and illicit content.

"Every day, individuals leverage generative AI tools to enhance their creative expression and productivity," Steven Masada, assistant general counsel of Microsoft's Digital Crimes Unit, said in a blog post. "Unfortunately, and as we have seen with the emergence of other technologies, the benefits of these tools attract bad actors who seek to exploit and abuse technology and innovation for malicious purposes."

Exactly what the attackers were using the DALL-E 3 bypass to create is not entirely clear - "harmful and illicit content" could range from AI-generated abuse material through to something Microsoft simply doesn't like. Also unclear is why the attackers would go to such effort to bypass safety guardrails on DALL-E 3 when open-source and other readily available tools produce superior images. But we do know how they went about it.

"Unlike in other API attacks, where an attacker often targets business-critical data and running systems, in this situation we have the attackers setting up a shadow AI," Katie Paxton-Fear, principal security researcher at application programming interface security company Traceable AI, told SiliconANGLE via email. "This worked by providing a DALL-E-like front end, which then sent users' prompts to OpenAI via Azure."

"The attackers would then check if it had been censored to enable users to bypass the safety checks in the DALL-E front end on OpenAI's website," Paxton-Fear added. "By using legitimate OpenAI credentials for other users and businesses stolen in other attacks, they were able to go unnoticed, moving their operations between many legitimate accounts."
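The authentication mechanism at the center of the scheme is simple: every request to the Azure OpenAI Service carries the customer's API key in a header, so possession of a stolen key is enough to bill work to its owner. Below is a minimal sketch of what a legitimate image-generation call looks like; the resource name, deployment name, and key are hypothetical placeholders, and the request shape follows Azure's public REST documentation.

```python
import requests

# Hypothetical placeholders: a real call needs a customer's own resource name,
# deployment name, and API key.
RESOURCE = "contoso-openai"        # Azure OpenAI resource name (assumption)
DEPLOYMENT = "dalle3"              # DALL-E 3 deployment name (assumption)
API_KEY = "0123456789abcdef0123456789abcdef"  # illustrative key, not a real one

url = (
    f"https://{RESOURCE}.openai.azure.com/openai/deployments/"
    f"{DEPLOYMENT}/images/generations?api-version=2024-02-01"
)

# The api-key header is the only credential in the request: whoever holds the
# key can authenticate as the paying customer it was issued to.
response = requests.post(
    url,
    headers={"api-key": API_KEY, "Content-Type": "application/json"},
    json={"prompt": "a watercolor of a lighthouse", "n": 1, "size": "1024x1024"},
)
print(response.status_code, response.json())
```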
That Microsoft is taking the case to court has raised eyebrows, with cybersecurity expert Ophir Dror, co-founder of generative AI security company Lasso Security Inc., telling SiliconANGLE that "the fact that Microsoft is taking this case to court seems exceptional" and that "it's not always the case with such scenarios and might indicate a change in behavior from tech giants."
[2]
Microsoft Sues Hackers Over Misuse of Azure OpenAI Services
Microsoft has filed a lawsuit against unknown individuals for intentionally developing tools to bypass the safety guardrails of its generative AI services. In a complaint filed in a U.S. District Court in Virginia, the tech company alleged that its Azure OpenAI Service has been abused for the "unlawful generation of harmful images". For context, this AI service is Microsoft's cloud solution through which customers can access various OpenAI models.

Ten individuals under the pseudonym "DOE" have been named as defendants in the legal action. Microsoft claims that the first three formed a service to compromise the accounts of its AI service users and traffic the stolen customer data, while the remaining six were end users of the illegal technology.

The issue was first discovered in July 2024 as a pattern of systematic application programming interface (API) key theft, wherein API keys belonging to multiple Microsoft customers were stolen. However, the exact methods employed by the defendants remain unknown. API keys are unique strings of characters used to authenticate users of Microsoft's Azure services.

The defendants then created a "hacking-as-a-service" scheme using these stolen keys that could be accessed through domains like "rentry.org/de3u" and "aitism.net". Using the "de3u" software and custom-built proxy software, they created HTTP requests mimicking authentic Azure OpenAI Service API calls and sent them to the computers handling this service. While doing so, the defendants' software also circumvented the technological controls deployed by Microsoft to prevent alteration and misuse of its AI service. To explain, they altered the target endpoint associated with the customer's API key, so that the key was delivered to the de3u user's desired endpoint address rather than the endpoint address specified by the customer.

In essence, the criminals illegally scraped exposed customer credentials from public websites, trafficked this data, used the authentication information to bypass Microsoft's technological measures, and in doing so gained access to its software and computer systems. Through this unauthorised access, the defendants created harmful content in violation of Microsoft's policies.

These acts violate several U.S. laws, such as the Computer Fraud & Abuse Act 1986 and the Digital Millennium Copyright Act. Under the former, the company claimed that the defendants purposely accessed the "protected computers" (providing the Azure OpenAI service) without authorisation, thereby causing damage and loss. Concerning the latter, Microsoft's Azure APIs, the software they interact with, and the software implementing abuse and content filtering policies are subject to copyright. Through the "maliciously configured" HTTP requests comprising stolen API keys, the defendants evaded Microsoft's measures to control access to the Azure software.

As per the company's blog post, the court order has authorised it to seize a website "instrumental to the criminal operation", enabling Microsoft to gain insight into how such activities are monetized and to disrupt additional technical infrastructure. Further, the company stated that it has since revoked the cybercriminals' access and strengthened safety mitigations to prevent such activity from recurring. Microsoft has consistently emphasised its commitment to combating abusive AI-generated content and to customer data privacy protection norms for its AI solutions like Azure OpenAI and Copilot.
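The endpoint substitution described above works because, from the service's perspective, a valid key in a well-formed request looks legitimate wherever it comes from. A natural countermeasure is to bind each key to the specific resource endpoint it was provisioned for and reject mismatches; the sketch below illustrates that idea with an invented key store, and is not a description of Microsoft's actual implementation.

```python
# Hypothetical server-side check: every API key is bound to exactly one
# customer endpoint, so a key replayed against a different endpoint (as in
# the alleged endpoint-substitution scheme) is rejected. The key store and
# request shape are invented for illustration.

KEY_TO_ENDPOINT = {
    "0123456789abcdef0123456789abcdef": "contoso-openai.openai.azure.com",
}

def authorize(api_key: str, requested_endpoint: str) -> bool:
    """Accept a request only if the key exists and matches its bound endpoint."""
    bound = KEY_TO_ENDPOINT.get(api_key)
    if bound is None:
        return False                     # unknown or revoked key
    return bound == requested_endpoint   # reject keys replayed elsewhere

# The customer's own endpoint passes; a proxy's endpoint does not.
assert authorize("0123456789abcdef0123456789abcdef", "contoso-openai.openai.azure.com")
assert not authorize("0123456789abcdef0123456789abcdef", "attacker-proxy.example.net")
```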
The threat of foreign actors abusing AI tools for cybercriminal purposes has been a long-standing concern for AI service providers. Microsoft, in collaboration with OpenAI, has previously blocked attempts by various state-affiliated threat actors to exploit such services for phishing or malware development. Unauthorised access to customer API keys could also lead to data breaches, exposing sensitive customer information to cybercriminals, and may cause service disruptions that erode customer trust, inflict financial harm on companies, and damage their reputations.
[3]
Microsoft accuses group of developing tool to abuse its AI service in new lawsuit
Microsoft has taken legal action against a group the company claims intentionally developed and used tools to bypass the safety guardrails of its cloud AI products. According to a complaint filed by the company in December in the U.S. District Court for the Eastern District of Virginia, a group of 10 unnamed defendants allegedly used stolen customer credentials and custom-designed software to break into the Azure OpenAI Service, Microsoft's fully managed service powered by ChatGPT maker OpenAI's technologies.

In the complaint, Microsoft accuses the defendants -- who it refers to only as "Does," a legal pseudonym -- of violating the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, and a federal racketeering law by illicitly accessing and using Microsoft's software and servers to create "offensive" and "harmful and illicit content." Microsoft did not provide specific details about the abusive content that was generated. The company is seeking injunctive and "other equitable" relief and damages.

In the complaint, Microsoft says it discovered in July 2024 that customers with Azure OpenAI Service credentials -- specifically API keys, the unique strings of characters used to authenticate an app or user -- were being used to generate content that violates the service's acceptable use policy. Subsequently, through an investigation, Microsoft discovered that the API keys had been stolen from paying customers, according to the complaint. "The precise manner in which Defendants obtained all of the API Keys used to carry out the misconduct described in this Complaint is unknown," Microsoft's complaint reads, "but it appears that Defendants have engaged in a pattern of systematic API Key theft that enabled them to steal Microsoft API Keys from multiple Microsoft customers."

Microsoft alleges that the defendants used stolen Azure OpenAI Service API keys belonging to U.S.-based customers to create a "hacking-as-a-service" scheme. Per the complaint, to pull off this scheme, the defendants created a client-side tool called de3u, as well as software for processing and routing communications from de3u to Microsoft's systems. De3u allowed users to leverage stolen API keys to generate images using DALL-E, one of the OpenAI models available to Azure OpenAI Service customers, without having to write their own code, Microsoft alleges. De3u also attempted to prevent the Azure OpenAI Service from revising the prompts used to generate images, according to the complaint, which can happen, for instance, when a text prompt contains words that trigger Microsoft's content filtering. A repo containing de3u project code, hosted on GitHub -- a company that Microsoft owns -- is no longer accessible at press time.

"These features, combined with Defendants' unlawful programmatic API access to the Azure OpenAI service, enabled Defendants to reverse engineer means of circumventing Microsoft's content and abuse measures," the complaint reads. "Defendants knowingly and intentionally accessed the Azure OpenAI Service protected computers without authorization, and as a result of such conduct caused damage and loss."

In a blog post published Friday, Microsoft says that the court has authorized it to seize a website "instrumental" to the defendants' operation, which will allow the company to gather evidence, decipher how the defendants' alleged services are monetized, and disrupt any additional technical infrastructure it finds.
Microsoft also says that it has "put in place countermeasures," which the company didn't specify, and "added additional safety mitigations" to the Azure OpenAI Service targeting the activity it observed.
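The prompt revision de3u allegedly tried to suppress is a standard guardrail pattern: before a prompt reaches the image model, a filter checks it for trigger words and either rewrites or refuses it. The toy sketch below illustrates the general idea only; the word list and revision strategy are invented, and Microsoft's real content filters are far more sophisticated.

```python
# Toy illustration of trigger-word filtering with prompt revision, the kind of
# guardrail the complaint says de3u attempted to suppress. The blocklist and
# revision strategy here are invented for illustration.

TRIGGER_WORDS = {"gore", "weapon"}  # hypothetical blocklist

def filter_prompt(prompt: str) -> str | None:
    """Return the prompt (possibly revised), or None if it must be refused."""
    words = prompt.lower().split()
    hits = [w for w in words if w in TRIGGER_WORDS]
    if not hits:
        return prompt  # clean prompts pass through unchanged
    if len(hits) == 1:
        # Mild case: revise the prompt by dropping the flagged word.
        return " ".join(w for w in words if w not in TRIGGER_WORDS)
    return None  # refuse heavily flagged prompts outright

print(filter_prompt("a castle at sunset"))   # passes unchanged
print(filter_prompt("a castle with gore"))   # revised
print(filter_prompt("gore and a weapon"))    # refused (None)
```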
[4]
Microsoft Sues Hacking Group Exploiting Azure AI for Harmful Content Creation
Microsoft has revealed that it's pursuing legal action against a "foreign-based threat-actor group" for operating a hacking-as-a-service infrastructure to intentionally get around the safety controls of its generative artificial intelligence (AI) services and produce offensive and harmful content.

The tech giant's Digital Crimes Unit (DCU) said it has observed the threat actors "develop sophisticated software that exploited exposed customer credentials scraped from public websites," and "sought to identify and unlawfully access accounts with certain generative AI services and purposely alter the capabilities of those services." The adversaries then used these services, such as Azure OpenAI Service, and monetized the access by selling it to other malicious actors, providing them with detailed instructions on how to use these custom tools to generate harmful content. Microsoft said it discovered the activity in July 2024.

The Windows maker said it has since revoked the threat-actor group's access, implemented new countermeasures, and fortified its safeguards to prevent such activity from occurring in the future. It also said it obtained a court order to seize a website ("aitism[.]net") that was central to the group's criminal operation.

The popularity of AI tools like OpenAI's ChatGPT has also had the consequence of threat actors abusing them for malicious intents, ranging from producing prohibited content to malware development. Microsoft and OpenAI have repeatedly disclosed that nation-state groups from China, Iran, North Korea, and Russia are using their services for reconnaissance, translation, and disinformation campaigns.

Court documents show that at least three unknown individuals are behind the operation, leveraging stolen Azure API keys and customer Entra ID authentication information to breach Microsoft systems and create harmful images using DALL-E in violation of its acceptable use policy. Seven other parties are believed to have used the services and tools they provided for similar purposes. The manner in which the API keys were harvested is currently not known, but Microsoft said the defendants engaged in "systematic API key theft" from multiple customers, including several U.S. companies, some of which are located in Pennsylvania and New Jersey.

"Using stolen Microsoft API Keys that belonged to U.S.-based Microsoft customers, defendants created a hacking-as-a-service scheme - accessible via infrastructure like the 'rentry.org/de3u' and 'aitism.net' domains - specifically designed to abuse Microsoft's Azure infrastructure and software," the company said in a filing.

According to a now-removed GitHub repository, de3u has been described as a "DALL-E 3 frontend with reverse proxy support." The GitHub account in question was created on November 8, 2023. The threat actors are said to have taken steps to "cover their tracks, including by attempting to delete certain Rentry.org pages, the GitHub repository for the de3u tool, and portions of the reverse proxy infrastructure" following the seizure of "aitism[.]net."

Microsoft noted that the threat actors used de3u and a bespoke reverse proxy service, called the oai reverse proxy, to make Azure OpenAI Service API calls using the stolen API keys in order to unlawfully generate thousands of harmful images using text prompts. It's unclear what type of offensive imagery was created.
The oai reverse proxy service running on a server is designed to funnel communications from de3u user computers through a Cloudflare tunnel into the Azure OpenAI Service, and transmit the responses back to the user device.

"The de3u software allows users to issue Microsoft API calls to generate images using the DALL-E model through a simple user interface that leverages the Azure APIs to access the Azure OpenAI Service," Redmond explained. "Defendants' de3u application communicates with Azure computers using undocumented Microsoft network APIs to send requests designed to mimic legitimate Azure OpenAPI Service API requests. These requests are authenticated using stolen API keys and other authenticating information."

It's worth pointing out that the use of proxy services to illegally access LLM services was highlighted by Sysdig in May 2024 in connection with an LLMjacking attack campaign that targeted AI offerings from Anthropic, AWS Bedrock, Google Cloud Vertex AI, Microsoft Azure, Mistral, and OpenAI using stolen cloud credentials, with the attackers selling the access to other actors.

"Defendants have conducted the affairs of the Azure Abuse Enterprise through a coordinated and continuous pattern of illegal activity in order to achieve their common unlawful purposes," Microsoft said. "Defendants' pattern of illegal activity is not limited to attacks on Microsoft. Evidence Microsoft has uncovered to date indicates that the Azure Abuse Enterprise has been targeting and victimizing other AI service providers."
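Both this case and the LLMjacking campaign share a telltale signature in usage logs: a single customer's key suddenly exercised from many distinct clients, or at an abnormal volume, as resold access spreads. The sketch below shows one such heuristic over an invented log format; real detection pipelines would draw on provider-side telemetry rather than a hard-coded list.

```python
# Sketch of a usage-log heuristic for spotting stolen or resold API keys:
# flag any key exercised from more distinct client IPs than a customer
# plausibly would. The log format and threshold are invented for illustration.

from collections import defaultdict

# (api_key, client_ip) pairs as they might appear in access logs
log_entries = [
    ("key-A", "203.0.113.5"), ("key-A", "203.0.113.5"),    # normal: one caller
    ("key-B", "198.51.100.1"), ("key-B", "198.51.100.2"),
    ("key-B", "198.51.100.3"), ("key-B", "198.51.100.4"),  # suspicious spread
]

MAX_IPS_PER_KEY = 2  # threshold chosen for illustration

ips_per_key: defaultdict = defaultdict(set)
for api_key, client_ip in log_entries:
    ips_per_key[api_key].add(client_ip)

for api_key, ips in ips_per_key.items():
    if len(ips) > MAX_IPS_PER_KEY:
        print(f"ALERT: {api_key} used from {len(ips)} IPs - possible theft or resale")
```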
[5]
Microsoft claims its servers were illegally accessed to make unsafe AI content
Microsoft has accused an unnamed collective of developing tools to intentionally sidestep the safety programming in its Azure OpenAI Service, the cloud platform through which customers access models from ChatGPT maker OpenAI. In December 2024, the tech giant filed a complaint in the US District Court for the Eastern District of Virginia against 10 anonymous defendants, who it accuses of violating the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, plus federal racketeering law.

Microsoft claims its servers were accessed to aid the creation of "offensive", "harmful and illicit content". Though it gave no further details as to the nature of that content, it was clearly enough for swift action: the company had a GitHub repository pulled offline and claimed in a blog post that the court allowed it to seize a website related to the operation.

In the complaint, Microsoft stated that it first discovered in July 2024 that users were abusing the API keys used to authenticate them to the Azure OpenAI Service in order to produce illicit content. It went on to discuss an internal investigation that found the API keys in question had been stolen from legitimate customers. "The precise manner in which Defendants obtained all of the API Keys used to carry out the misconduct described in this Complaint is unknown, but it appears that Defendants have engaged in a pattern of systematic API Key theft that enabled them to steal Microsoft API Keys from multiple Microsoft customers," reads the complaint.

Microsoft claims that, with the ultimate goal of launching a hacking-as-a-service product, the defendants created de3u, a client-side tool that put these stolen API keys to use, plus additional software to allow de3u to communicate with Microsoft servers. De3u also worked to circumvent the Azure OpenAI Service's inbuilt content filters and the subsequent revision of user prompts, allowing DALL-E, for example, to generate images that OpenAI wouldn't normally permit. "These features, combined with Defendants' unlawful programmatic API access to the Azure OpenAI service, enabled Defendants to reverse engineer means of circumventing Microsoft's content and abuse measures," it wrote in the complaint.
[6]
Microsoft sues service for creating illicit content with its AI platform
Microsoft is accusing three individuals of running a "hacking-as-a-service" scheme that was designed to allow the creation of harmful and illicit content using the company's platform for AI-generated content.

The foreign-based defendants developed tools specifically designed to bypass safety guardrails Microsoft has erected to prevent the creation of harmful content through its generative AI services, said Steven Masada, the assistant general counsel for Microsoft's Digital Crimes Unit. They then compromised the legitimate accounts of paying customers. They combined those two things to create a fee-based platform people could use.

Microsoft is also suing seven individuals it says were customers of the service. All 10 defendants were named John Doe because Microsoft doesn't know their identities. "By this action, Microsoft seeks to disrupt a sophisticated scheme carried out by cybercriminals who have developed tools specifically designed to bypass the safety guardrails of generative AI services provided by Microsoft and others," lawyers wrote in a complaint filed in federal court in the Eastern District of Virginia and unsealed Friday.

The three people who ran the service allegedly compromised the accounts of legitimate Microsoft customers and sold access to the accounts through a now-shuttered site at "rentry[.]org/de3u". The service, which ran from last July to September, when Microsoft took action to shut it down, included "detailed instructions on how to use these custom tools to generate harmful and illicit content."

The service contained a proxy server that relayed traffic between its customers and the servers providing Microsoft's AI services, the suit alleged. Among other things, the proxy service used undocumented Microsoft network application programming interfaces (APIs) to communicate with the company's Azure computers. The resulting requests were designed to mimic legitimate Azure OpenAPI Service API requests and used compromised API keys to authenticate them. Microsoft attorneys included images in the complaint illustrating the network infrastructure and the user interface provided to users of the defendants' service.

Microsoft didn't say how the legitimate customer accounts were compromised, but hackers have been known to create tools that search code repositories for API keys developers inadvertently include in the apps they create. Microsoft and others have long counseled developers to remove credentials and other sensitive data from code they publish, but the advice is regularly ignored. The company also raised the possibility that the credentials were stolen by people who gained unauthorized access to the networks where they were stored.
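Scanning one's own repositories for key-shaped strings before publishing is the standard defense against this kind of credential scraping. The sketch below searches files for 32-character hex strings, one common shape for Azure service keys (an assumption about key format, not a guarantee); production tooling such as dedicated secret scanners covers far more credential patterns.

```python
# Defensive sketch: flag strings shaped like Azure service keys (assumed here
# to be 32 hex characters) in files before they are published, since the
# complaint says the defendants scraped exposed credentials from public sites.

import re
import sys
from pathlib import Path

KEY_PATTERN = re.compile(r"\b[0-9a-fA-F]{32}\b")

def scan_file(path: Path) -> None:
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        for match in KEY_PATTERN.findall(line):
            # Print only a prefix so the scanner itself never leaks full keys.
            print(f"{path}:{lineno}: possible leaked key {match[:6]}...")

if __name__ == "__main__":
    for root in sys.argv[1:]:
        for f in Path(root).rglob("*"):
            if f.is_file():
                scan_file(f)
```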
[7]
Microsoft Catches Hackers Bypassing Safeguards for AI Image Generator DALL-E
Microsoft has used a court order to seize an internet domain used to create "offensive and harmful" AI-generated images through the company's Azure OpenAI service.

According to Microsoft's complaint, which was unsealed in a Virginia court on Friday, the domain's creators used stolen login credentials for Azure OpenAI, which gave them access to the AI image generator DALL-E. Microsoft describes the domain's creators as a "foreign-based threat-actor group," which used custom software to bypass the guardrails for DALL-E. "Cybercriminals then used these services and resold access to other malicious actors with detailed instructions on how to use these custom tools to generate harmful and illicit content," the company wrote in a blog post.

Microsoft discovered the activity last July when the hackers accessed Azure OpenAI through stolen API keys, including some belonging to customers based in Pennsylvania and New Jersey. The group was fueling the AI image generation through a tool called "de3u," which was previously available on GitHub and the "rentry.org/de3u" domain before the software was taken down. It's unclear what type of offensive imagery was generated. However, the de3u tool could bypass Microsoft's AI image-generation safeguards by preventing Azure OpenAI from revising a user's text prompts if they contained certain keywords that trigger content filtering.

In response, Redmond revoked the access and filed a lawsuit last month in the Eastern District of Virginia to let it seize the "aitism.net" domain used to carry out the hacking scheme. After the seizure, Microsoft noticed the hackers "taking steps to cover their tracks, including by attempting to delete certain Rentry.org pages, the GitHub repository for the de3u tool, and portions of the reverse proxy infrastructure," the company said in a subsequent court document. Microsoft also spotted the suspected creators of the de3u tool discussing the crackdown on the 4chan forum. So it's possible the group may strike again or target other AI image generators.

In the meantime, Microsoft wrote in the blog post: "With this action, we are sending a clear message: the weaponization of our AI technology by online actors will not be tolerated." The company also says it's placed new countermeasures and safeguards to thwart further attempts at malicious use.
Microsoft has filed a lawsuit against a group of cybercriminals who developed tools to bypass AI safety measures and generate harmful content using Azure OpenAI services.
Microsoft's Digital Crimes Unit has taken legal action against a group of cybercriminals who developed sophisticated tools to bypass safety measures in the company's Azure OpenAI Service. The lawsuit, filed in December 2024 in the U.S. District Court for the Eastern District of Virginia, alleges violations of multiple laws, including the Computer Fraud and Abuse Act and the Digital Millennium Copyright Act [1][2][3].
The unnamed defendants, referred to as "Does 1-10" in the complaint, are accused of creating a "hacking-as-a-service" infrastructure that exploited Microsoft's AI services. The operation involved:
- stealing API keys and Entra ID authentication information from paying Azure OpenAI customers;
- building de3u, a client-side tool for generating DALL-E images with the stolen keys;
- running a reverse proxy service that relayed traffic to Microsoft's servers through a Cloudflare tunnel;
- reselling access to other malicious actors, along with detailed instructions for generating harmful and illicit content.
The cybercriminals used a combination of techniques to bypass Microsoft's safety guardrails:
- authenticating with stolen API keys so that requests mimicked legitimate Azure OpenAI Service API calls;
- communicating with Azure computers through undocumented Microsoft network APIs;
- altering the target endpoint associated with a customer's API key to redirect requests;
- preventing the service from revising prompts containing words that trigger content filtering.
Microsoft first detected the suspicious activity in July 2024, observing a pattern of API key abuse [2][3]. In response, the company has:
- revoked the group's access to the Azure OpenAI Service;
- obtained a court order to seize the "aitism.net" website central to the operation;
- put in place countermeasures and added further safety mitigations;
- filed suit seeking injunctive relief and damages.
This incident highlights the growing concerns surrounding AI safety and security: guardrails on hosted AI services can be circumvented when customer credentials leak, and stolen API keys expose customers to data breaches, service disruption, and financial and reputational harm. Microsoft says evidence uncovered to date indicates the same enterprise has been targeting other AI service providers as well.
The lawsuit comes amid increasing reports of AI tools being misused: Microsoft and OpenAI have repeatedly disclosed attempts by nation-state groups from China, Iran, North Korea, and Russia to use their services for reconnaissance, translation, and disinformation campaigns, and Sysdig documented an "LLMjacking" campaign in May 2024 in which stolen cloud credentials were used to access AI offerings from Anthropic, AWS Bedrock, Google Cloud Vertex AI, Microsoft Azure, Mistral, and OpenAI.
Microsoft has identified four key members of a global cybercrime network who allegedly bypassed AI safety measures to create and distribute harmful content, including celebrity deepfakes.
7 Sources
OpenAI reports multiple instances of ChatGPT being used by cybercriminals to create malware, conduct phishing attacks, and attempt to influence elections. The company has disrupted over 20 such operations in 2024.
15 Sources
Microsoft's AI Red Team, after probing over 100 generative AI products, highlights the amplification of existing security risks and the emergence of new challenges in AI systems. The team emphasizes the ongoing nature of AI security work and the crucial role of human expertise in addressing these evolving threats.
4 Sources
OpenAI has banned multiple accounts for misusing ChatGPT in surveillance and influence campaigns, highlighting the ongoing challenge of preventing AI abuse while maintaining its benefits for legitimate users.
15 Sources
Microsoft introduces innovative AI features aimed at addressing hallucinations, improving security, and enhancing privacy in AI systems. These advancements are set to revolutionize the trustworthiness and reliability of AI applications.
2 Sources