Microsoft sues service for creating illicit content with its AI platform
Ars Technica
March 30, 2025
Service used undocumented APIs and other tricks to bypass safety guardrails.
Microsoft is accusing three individuals of running a "hacking-as-a-service" scheme that was designed to allow the creation of harmful and illicit content using the company’s platform for AI-generated content.
The foreign-based defendants developed tools specifically designed to bypass safety guardrails Microsoft has erected to prevent the creation of harmful content through its generative AI services, said Steven Masada, the assistant general counsel for Microsoft’s Digital Crimes Unit. They then compromised the legitimate accounts of paying customers and combined the two to create a fee-based platform people could use.
A sophisticated scheme
Microsoft is also suing seven individuals it says were customers of the service. All 10 defendants were named John Doe because Microsoft doesn’t know their identities.
“By this action, Microsoft seeks to disrupt a sophisticated scheme carried out by cybercriminals who have developed tools specifically designed to bypass the safety guardrails of generative AI services provided by Microsoft and others,” lawyers wrote in a complaint filed in federal court in the Eastern District of Virginia and unsealed Friday.
The three people who ran the service allegedly compromised the accounts of legitimate Microsoft customers and sold access to the accounts through a now-shuttered site at “rentry[.]org/de3u”. The service, which ran from last July until September, when Microsoft took action to shut it down, included “detailed instructions on how to use these custom tools to generate harmful and illicit content.”
The service contained a proxy server that relayed traffic between its customers and the servers providing Microsoft’s AI services, the suit alleged. Among other things, the proxy service used undocumented Microsoft network application programming interfaces (APIs) to communicate with the company’s Azure computers. The resulting requests were designed to mimic legitimate Azure OpenAI Service API requests and used compromised API keys to authenticate them.
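The relay pattern the suit describes — accept a client request, strip the client’s own credentials, attach a compromised API key, and forward the request so the upstream server sees normal authenticated traffic — can be sketched in a few lines of Python. The endpoint, header names, and key below are hypothetical, not taken from the complaint:

```python
# Illustrative sketch of a credential-substituting relay. All names here
# (UPSTREAM, the "api-key" header, the key value) are assumptions for the
# example, not details from Microsoft's filing.
UPSTREAM = "https://api.example.com"  # hypothetical upstream AI endpoint

def rewrite_request(path, client_headers, api_key):
    """Build the upstream URL and headers for a relayed request."""
    # Drop any credentials the client sent; the proxy substitutes its own.
    headers = {k: v for k, v in client_headers.items()
               if k.lower() not in ("authorization", "api-key", "host")}
    headers["api-key"] = api_key  # authenticate with the substituted key
    return UPSTREAM + path, headers
```

A working relay would then forward the rewritten request with any HTTP client and stream the response back to the caller; the point is only that, to the upstream service, the traffic is indistinguishable from a legitimately authenticated request.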
Microsoft attorneys included the following images, the first illustrating the network infrastructure and the second displaying the user interface provided to users of the defendants' service:
Microsoft didn’t say how the legitimate customer accounts were compromised but said hackers have been known to create tools that search code repositories for API keys developers inadvertently included in the apps they create. Microsoft and others have long counseled developers to remove credentials and other sensitive data from code they publish, but that advice is regularly ignored. The company also raised the possibility that the credentials were stolen by people who gained unauthorized access to the networks where they were stored.
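The repository-scanning technique described above usually amounts to matching source text against known credential formats. A minimal sketch follows; real scanners ship far larger rule sets, and the patterns here are illustrative examples only:

```python
import re

# A few example credential formats. Production secret scanners maintain
# hundreds of such rules; these three are only for illustration.
KEY_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),   # OpenAI-style secret key prefix
    re.compile(r"AKIA[0-9A-Z]{16}"),      # AWS access key ID format
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
]

def find_leaked_keys(source):
    """Return every substring of `source` matching a known key pattern."""
    hits = []
    for pattern in KEY_PATTERNS:
        hits.extend(match.group(0) for match in pattern.finditer(source))
    return hits
```

The same matching runs in both directions: attackers point it at public repositories to harvest keys, while developers can run it over their own code before publishing to catch credentials they forgot to remove.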