Microsoft has revealed that four foreign developers and two U.S. developers unlawfully accessed its generative AI services, modified them to produce harmful content such as celebrity deepfakes, and then sold access to the tools. The tech giant filed a lawsuit in December, and the complaint was unsealed in January; it alleges that the developers used Microsoft's Azure OpenAI services to create non-consensual intimate images and other sexually explicit content.
Microsoft is tracking the developers as part of a global cybercrime network it has dubbed Storm-2139. The named individuals are Arian Yadegarnia of Iran, Alan Krysiak of the United Kingdom, Ricky Yuen of Hong Kong, and Phát Phùng Tấn of Vietnam. Two additional developers, based in Illinois and Florida, are also implicated; Microsoft is withholding their names because of ongoing criminal investigations.
Microsoft says it is preparing criminal referrals to law enforcement agencies. Storm-2139 gained access to the AI services by exploiting customer credentials scraped from public sources. Microsoft was able to disrupt the network by seizing a website connected to Storm-2139, which opened the door to a deeper investigation.
As news of the lawsuit spread, members of Storm-2139 targeted Microsoft's lawyers with doxing attempts, posting their personal information and photographs. The effort backfired: suspected members of the group began blaming one another for the operation. The six individuals named in Microsoft's blog post are among ten "John Does" listed in the original complaint.
Microsoft aims to prevent the further circulation of harmful content by excluding synthetic imagery and prompts from its filings, prioritizing the privacy and protection of individuals targeted by the malicious AI tools.