Microsoft has taken a decisive stand against Storm-2139, a global network of hackers that used stolen credentials to bypass AI safeguards and produce harmful and abusive images of celebrities, women, and people of color. The effort marks a significant milestone in the company’s ongoing commitment to digital safety and responsible AI use.
The issue came to light in July when Phillip Misner, head of Microsoft’s AI Incident Detection and Response team, discovered stolen access codes for an AI image generator being used to create sexualized images of celebrities. The problem quickly escalated as more stolen API keys surfaced on an anonymous message board notorious for hateful content, prompting Microsoft to launch a company-wide security response and file a legal case, the first of its kind, to stop the creation of harmful AI-generated content.
Microsoft faces off against Storm-2139

Court documents reveal that the network, named Storm-2139, involved six individuals who developed tools to hack into Azure OpenAI Service and other AI platforms in a “hacking-as-a-service” scheme. Four defendants, located in Iran, England, Hong Kong, and Vietnam, were named in a civil complaint filed in the U.S. District Court for the Eastern District of Virginia. An additional ten individuals allegedly used these tools to circumvent AI safeguards and produce images violating the company’s terms of use.
Richard Boscovich, assistant general counsel for Microsoft’s Digital Crimes Unit (DCU), emphasized the company’s zero-tolerance policy on AI abuse: “We are taking down their operation and serving notice that if anyone abuses our tools, we will go after you.”
The company’s fight against harmful AI content is part of a broader digital safety mission, which includes responding to cyberthreats, disrupting criminal activity, and building secure AI systems. Courtney Gregoire, vice president and chief digital safety officer at Microsoft, highlighted the disproportionate impact of image abuse on women and girls, noting that AI has dramatically increased the scale of this problem. The company’s multi-layered approach involves listening to victims, collaborating with lawmakers and advocates, and implementing technical safeguards to mitigate harm.
Following the initial complaint in December, Microsoft seized a website used by the network and blocked its activities. The lawsuit set off internal conflicts among network members, some of whom began sharing information and sending anonymous tips that helped investigators identify defendants and gather evidence. This public legal strategy aims to deter others from abusing AI technologies.
Investigators found that the defendants created software to illicitly access image-generating AI models and used a reverse proxy service to hide their activities while saving images on a server in Virginia. The stolen credentials belonged to Azure customers who had inadvertently exposed them on public platforms. Users of these tools deliberately bypassed the tech giant’s content safety filters by manipulating prompts, replacing celebrity names with physical descriptions or using technical notations to trick AI models into ignoring safeguards.
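For developers, the simplest defense against this kind of credential theft is never embedding keys in source code. Below is a minimal sketch, assuming the standard `openai` Python SDK (v1+); the environment variable names are illustrative, not mandated by the SDK:

```python
# Minimal sketch: keep Azure OpenAI credentials out of source code.
# Assumes the `openai` Python SDK v1+; the environment variable names
# are illustrative, not required by the SDK.
import os

from openai import AzureOpenAI

# Load secrets from the environment (or a secret manager) rather than
# hardcoding them; keys committed to public repositories are exactly
# the kind of exposure the attackers exploited.
client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
)
```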

In response, Microsoft enhanced its AI safeguards and helped affected customers improve their security. A key tool in the investigation was Content Credentials, provenance metadata attached to AI-generated images to provide transparency and combat deepfakes. Microsoft helped develop this industry standard, which traces the origin of images and helps prevent misuse.
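Readers curious how Content Credentials look in practice can inspect them with c2patool, the open-source command-line tool from the Content Authenticity Initiative (https://github.com/contentauth/c2patool). The sketch below shells out to the tool and assumes it is installed and emits the manifest as JSON, its default behavior at the time of writing:

```python
# Minimal sketch: inspect Content Credentials (C2PA provenance metadata)
# embedded in an image via the open-source c2patool CLI. Assumes the
# tool is installed, on PATH, and prints the manifest store as JSON.
import json
import subprocess
import sys


def read_content_credentials(image_path: str) -> dict | None:
    """Return the image's C2PA manifest store as a dict, or None if absent."""
    result = subprocess.run(
        ["c2patool", image_path],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        return None  # no manifest embedded, or the tool reported an error
    return json.loads(result.stdout)


if __name__ == "__main__":
    manifest = read_content_credentials(sys.argv[1])
    print("Content Credentials found." if manifest else "No Content Credentials.")
```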
Sarah Bird, Microsoft’s chief product officer for Responsible AI, stressed that the company continues to innovate on guardrails and security measures to keep its AI technologies safe, secure, and reliable.
Microsoft’s commitment to combating image-based sexual abuse predates generative AI, with efforts dating back to 2015 to remove non-consensual intimate images from its platforms and Bing search results. The company has also donated its PhotoDNA technology to help victims remove abusive images online while protecting their privacy.
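PhotoDNA itself is proprietary, but the idea behind it, robust hash matching that lets platforms detect known images without storing or redistributing them, can be sketched with the openly available imagehash library. To be clear, this is an analogous open technique, not PhotoDNA’s actual algorithm, and the distance threshold is an illustrative assumption:

```python
# Sketch of hash-based image matching, analogous in spirit to PhotoDNA
# (which is proprietary; this is NOT its algorithm). Requires
# `pip install imagehash pillow`; the threshold is an assumption.
import imagehash
from PIL import Image


def matches_known_image(candidate_path: str,
                        known_hash: imagehash.ImageHash,
                        threshold: int = 5) -> bool:
    """True if the candidate is perceptually close to a known image's hash."""
    candidate_hash = imagehash.phash(Image.open(candidate_path))
    # Subtracting two hashes gives their Hamming distance; small distances
    # survive resizing, recompression, and minor edits.
    return candidate_hash - known_hash <= threshold


# A platform stores only the hashes of reported images, never the images
# themselves, which is how victims' privacy is protected during matching.
known = imagehash.phash(Image.open("reported_image.jpg"))
print(matches_known_image("uploaded_image.jpg", known))
```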
Gregoire underscored the devastating impact of such images on victims’ health, well-being, and social and economic opportunities. She called for systemic change involving technical safeguards, stronger laws, and education to address sexualized image bullying, emphasizing the company’s human-first approach to AI safety.
“We’re remembering the human at the center of technology,” Gregoire said. “We’re taking a human-first approach to mitigate harm and ensure we can use AI for good.”