AI-generated deepfakes irritated Microsoft so much that it's calling on Congress to act

Microsoft wants Congress to enact a deepfake fraud statute to establish a specific legal framework.


Key notes

  • Microsoft urges Congress to create legal frameworks for AI-generated deepfakes and fraud.
  • The company recommends a deepfake fraud statute, clear labeling of synthetic content, and better safety controls.
  • Recently proposed legislation would let victims of explicit deepfakes sue, and notable figures like Taylor Swift have already fallen victim to such content.

As artificial intelligence (AI) develops at breakneck speed, so do AI-generated deepfakes. Microsoft, one of the biggest players in the AI race, is now calling on Congress to address these challenges.

Brad Smith, Microsoft’s Vice Chair & President, says in a recent Microsoft on the Issues blog post that the Redmond company is recommending several key actions. One of them is to enact a deepfake fraud statute to establish a specific legal framework for prosecuting AI-generated fraud and scams.

“While it’s imperative that the technology industry have a seat at the table, it must do so with humility and a bias towards action. Microsoft welcomes additional ideas from stakeholders across the digital ecosystem to address synthetic content harm,” he says.

Recently proposed legislation would allow victims of non-consensual explicit deepfakes to sue their creators, but Microsoft believes further measures, such as clear labeling of synthetic content (akin to Meta’s “Made with AI” label) and improved safety controls for AI products, are essential to protecting the public.

Earlier this year, Senators Durbin, Graham, and Hawley introduced a bill that would let victims of sexually explicit deepfakes sue their creators and distributors. It classifies these deepfakes as “digital forgeries” and sets a 10-year statute of limitations.

The rise of AI-generated deepfakes is alarming, especially this year, when many countries are holding elections. Vulnerable populations such as seniors and children are among the most likely victims: the Internet Watch Foundation, a UK-based organization that combats child sexual abuse material, has reported a surge in explicit AI deepfakes in recent months.

Famed singer Taylor Swift was also a victim of AI-generated deepfakes earlier this year, specifically on X (formerly Twitter). The platform temporarily blocked searches for Taylor Swift after explicit deepfake images of her circulated. The images were reportedly created with Microsoft’s Designer tool and shared in a Telegram group.