The rise of technology has also paved the way for the spread of Child Sexual Abuse Material (CSAM) online. Australia is among the countries implementing tougher laws to handle such issues, and it is now demanding that tech giants like Microsoft disclose their strategies for detecting and removing CSAM. Apart from Microsoft, Reuters reported that Meta and Apple also received a letter of demand from an Australian regulator.
The demand puts Microsoft and the others in a serious situation, since Australia is now implementing new laws that strengthen its actions against online abuse and tighten its regulation of technology firms. According to the e-Safety Commissioner, Microsoft and the other companies that received the letter have 28 days to disclose their measures for addressing CSAM on their respective platforms. Failing to comply within that period could result in fines of A$555,000 ($383,000) per day.
“This activity is no longer confined to hidden corners of the dark web but is prevalent on the mainstream platforms we and our children use every day,” commissioner Julie Inman Grant told Reuters in a statement. “As more companies move towards encrypted messaging services and deploy features like livestreaming, the fear is that this horrific material will spread unchecked on these platforms.”
While there is no clarity on the form or extent of the details that must be disclosed, a Microsoft spokesperson has already said the company plans to respond within the 28-day window. Apple has yet to respond, while a Meta spokesperson said the company continues to "proactively engage with the eSafety Commissioner on these important issues."
It was not divulged what the Australian regulator plans to do with the details that will be disclosed. But if it demands tighter CSAM measures in the future, it could mean a bigger challenge for big tech firms already struggling to build effective anti-CSAM systems without violating users' privacy. It could also mean greater pressure to design systems that can precisely identify CSAM online. Just last year, Google's AI flagged an Android and Google account user as a criminal after it incorrectly labeled medical-related nude photos of his child as CSAM. Though the man's name was cleared after an investigation, the incident put the accuracy of such CSAM-detection systems into question.