UK anti-child porn organization reports surge in AI porn deepfakes in recent months
The Internet Watch Foundation (IWF) has been around for almost three decades
Key notes
- IWF says that AI porn deepfakes have been produced at “a frightening rate” in recent months.
- The UK-based anti-child porn organization revealed its findings in a recently published July 2024 report.
- Apple is also facing controversy over underreporting child sexual abuse material.
The Internet Watch Foundation (IWF), a UK-based anti-child porn organization, has revealed a disturbing trend in AI-generated child sexual abuse material (AI CSAM), also known as AI porn deepfakes.
The July 2024 report, published on Monday, says that such content is being produced at “a frightening rate.” Between March and April 2024, researchers discovered nine new deepfake videos of child abuse on a dark web forum that were not present in an earlier investigation from October 2023.
The study also found over 12,000 new AI-generated images posted in a single month, with more than 3,500 deemed criminal and depicting severe abuse. This technology, particularly LoRA models, allows offenders to create highly realistic images of both fake and real victims.
“Unfortunately, UK legislation is falling behind advances in AI tech. While AI CSAM is illegal, and the IWF can take steps to have it removed, the same is not true for AI models fine-tuned on images of child sexual abuse. The tool for creating AI images of Olivia remains legal in the UK,” the report reads.
The previous October 2023 report found over 20,200 AI-generated child sexual abuse images posted to a dark web forum in one month. Of these, over 11,100 were assessed, with more than 2,500 classified as criminal pseudophotographs and roughly 400 as prohibited images.
The UK’s IWF has been operating for almost three decades. In September 2022, the IWF and Pornhub launched the “reThink” chatbot to deter users searching for child sexual abuse material; it engaged over 173,000 users in its first month and directed them to support services.
The report comes amid Apple’s recent controversy over underreporting CSAM on its platforms compared to other tech giants like Meta and Google.
The UK’s National Society for the Prevention of Cruelty to Children (NSPCC) says that Apple’s iCloud, iMessage, and FaceTime were implicated in more CSAM cases in England and Wales alone than Apple reported globally. Apple had planned to scan iCloud for such content, but abandoned the plan over privacy concerns.