Microsoft fixes loopholes that enabled Taylor Swift deepfakes

In response to reports that individuals used its AI text-to-image tool, Designer, to create nonconsensual sexual images of celebrities, Microsoft has implemented additional safeguards to prevent misuse. The move follows revelations that AI-generated nude images of Taylor Swift circulated on Twitter, which later blocked searches for her name. The images originated from 4chan and Telegram channels where users exploited Designer.

A Microsoft spokesperson stated on Friday:

“We are investigating these reports and are taking appropriate action to address them. Our Code of Conduct prohibits the use of our tools for the creation of adult or nonconsensual intimate content.”

The company emphasized its commitment to responsible AI principles and mentioned ongoing efforts to develop content filtering, operational monitoring, and abuse detection systems.

While the investigation could not definitively link the AI-generated Swift images to Designer, Microsoft CEO Satya Nadella acknowledged the company’s responsibility to strengthen the “guardrails” on its AI tools so they cannot be used to generate harmful content.

In an interview with NBC News, Nadella expressed optimism that global societal norms around AI could converge, calling for law enforcement, tech platforms, and regulators to work together to govern AI usage effectively.

404 Media reported that Microsoft strengthened Designer’s prompt filtering in response to the misuse. Users had previously bypassed the filters by misspelling celebrity names and writing descriptions that, while not explicitly sexual, produced sexually suggestive images. According to 404 Media, these loopholes no longer work after Microsoft’s recent adjustments.
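
To illustrate why an exact-match name filter can be bypassed by misspellings, here is a minimal, purely illustrative sketch of fuzzy blocklist matching. The blocklist, threshold, and function names are hypothetical assumptions for the example; this is not Microsoft’s actual implementation of Designer’s filters.

```python
# Illustrative sketch only: a toy prompt filter that flags blocked names
# even when they are slightly misspelled. The blocklist entries, the
# similarity threshold, and contains_blocked_name() are hypothetical.
import difflib
import re

BLOCKED_NAMES = {"taylor swift"}   # hypothetical blocklist entry
SIMILARITY_THRESHOLD = 0.8         # hypothetical cut-off

def contains_blocked_name(prompt: str) -> bool:
    """Return True if any 1- to 3-word window of the prompt closely matches a blocked name."""
    words = re.findall(r"[a-z]+", prompt.lower())
    for size in (1, 2, 3):
        for i in range(len(words) - size + 1):
            candidate = " ".join(words[i:i + size])
            for name in BLOCKED_NAMES:
                ratio = difflib.SequenceMatcher(None, candidate, name).ratio()
                if ratio >= SIMILARITY_THRESHOLD:
                    return True
    return False

# An exact-match filter would miss "taylor swfit"; fuzzy matching catches it.
print(contains_blocked_name("a portrait of taylor swfit on stage"))  # True
print(contains_blocked_name("a landscape painting of mountains"))    # False
```

Real production filters presumably combine many more signals than string similarity, but the sketch shows why simply tightening exact-match rules is not enough to close misspelling loopholes.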

The Telegram channel where the AI-generated images first surfaced is reportedly still active and continues to share explicit content generated with other AI tools. Telegram has not removed the channel, underscoring how little accountability is enforced for harmful content on the platform.
