Instagram won't let you sneakily pass off AI-generated images as your own, labels announced




Key notes

  • Meta to label AI-generated posts on Facebook and Instagram, aiming to combat misinformation.
  • Collaboration with tech giants to establish industry standards for identifying AI-generated content.
  • Initial focus on AI-generated images, with expansion to audio/video and unmarked content planned.

Social media giant Meta announced plans to label posts created using artificial intelligence tools on Facebook, Instagram, and Threads. This initiative addresses concerns about the spread of misinformation, particularly during upcoming election cycles.

Meta is working with various tech companies, including Google, OpenAI, Microsoft, and Adobe, to establish industrywide standards for identifying AI-generated content. This involves incorporating invisible watermarks and metadata into images created using these tools, enabling Meta to detect them even if shared across different platforms.
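The mechanics haven't been published in detail, but one piece of the approach relies on provenance metadata embedded in the image file itself (for example, C2PA Content Credentials or the IPTC "DigitalSourceType" field that some AI tools write). Below is a minimal, illustrative sketch of how a platform could scan a file's embedded metadata for such markers; the field names and marker strings are assumptions, not Meta's actual detection pipeline, which also uses invisible watermarks that plain metadata inspection cannot see.

```python
# Illustrative sketch only: check an image's embedded metadata for
# provenance markers that AI tools may write (e.g. C2PA manifests or the
# IPTC DigitalSourceType value "trainedAlgorithmicMedia").
from PIL import Image

AI_MARKERS = (b"c2pa", b"trainedAlgorithmicMedia", b"DigitalSourceType")

def looks_ai_labeled(path: str) -> bool:
    with Image.open(path) as img:
        # Collect whatever raw metadata Pillow exposes for this format:
        # EXIF and XMP blobs for JPEG, text chunks for PNG, etc.
        blobs = []
        exif = img.info.get("exif")
        if exif:
            blobs.append(exif)
        xmp = img.info.get("xmp")
        if xmp:
            blobs.append(xmp if isinstance(xmp, bytes) else str(xmp).encode())
        for value in img.info.values():
            if isinstance(value, str):
                blobs.append(value.encode())
    # Flag the image if any known provenance marker appears in the metadata.
    return any(marker in blob for blob in blobs for marker in AI_MARKERS)

if __name__ == "__main__":
    print(looks_ai_labeled("photo.jpg"))
```

Metadata like this is easy to strip, which is why the industry effort also covers invisible watermarks baked into the pixels themselves.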

The labeling will target AI-generated images from external sources in the initial phase. Detection of audio and video content using similar methods is still under development. Additionally, posts lacking standardized markers might not be identified initially. Meta is actively developing solutions to address these limitations.

This initiative gains significance as several countries, including the US, India, South Africa, and Indonesia, gear up for crucial elections. The growing prevalence of deepfakes has posed a significant challenge for voters and candidates, and Meta’s approach is intended to mitigate this risk.

Meta’s decision aligns with recent recommendations from its Oversight Board, which called for clearer labeling of AI-generated content rather than outright removal. At the same time, Meta acknowledges the current system’s limitations.

Nick Clegg, Meta’s president of global affairs, said to Bloomberg:

We’ll need to have a society-wide, or certainly industrywide, debate about the other end of the telescope, which is how do you flag for users the veracity or authenticity of non-synthetic content.

Labeling AI-generated content represents a significant step forward, but future challenges loom. As the online landscape becomes increasingly saturated with AI-generated material, Clegg also anticipates the need to explore labeling legitimate content. This opens up important discussions about online authenticity and trust in the digital age.

More here.
