Google AI puts father under child abuse investigation over toddler's nude photos taken for an online medical consultation


A father named Mark, who photographed his toddler's swollen groin for an online medical consultation, was investigated by police after Google flagged the images as child sexual abuse material (CSAM). According to a report from The New York Times, the investigation concluded that the case “did not meet the elements of a crime and that no crime occurred,” but the outcome could have been far worse, including the loss of custody of his child. What's more, Google stood by its decision to disable (and later delete) Mark's account, even after he appealed to the company with the police report as proof of his innocence.

Technology is inevitably flawed, but its mistakes shouldn't put anyone in serious jeopardy. That isn't the case with Google's machine learning classifiers and hash-matching technology, which create “a ‘hash,’ or unique digital fingerprint, for an image or a video so it can be compared with hashes of known CSAM.” According to Google, once the technology and its “trained specialist teams” detect CSAM, it reports the material to the National Center for Missing and Exploited Children (NCMEC), which can then initiate an investigation with the help of law enforcement. Nonetheless, the technology is not 100% accurate, and its mistakes can carry serious consequences for anyone.
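Google has not published the details of its system, but the general idea of hash-matching can be sketched in a few lines of Python. This is a rough illustration only: the function names, the `KNOWN_HASHES` set, and the digest inside it are hypothetical placeholders, and the sketch uses a plain cryptographic hash where real systems rely on far more robust perceptual hashing.

```python
import hashlib

# Hypothetical set of fingerprints of known material an operator might maintain.
# The digest below is an arbitrary placeholder, not a real entry.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def fingerprint(image_bytes: bytes) -> str:
    """Return a hex digest acting as the image's 'digital fingerprint'."""
    return hashlib.sha256(image_bytes).hexdigest()

def matches_known_material(image_bytes: bytes) -> bool:
    """Flag the image if its fingerprint appears in the known-hash set."""
    return fingerprint(image_bytes) in KNOWN_HASHES
```

A production system differs in an important way: a cryptographic hash like SHA-256 changes completely if an image is resized or recompressed, so hash databases of this kind typically use perceptual hashes that tolerate such edits, and, as in Google's case, they are paired with machine learning classifiers that try to catch previously unseen imagery, which is where misfires like Mark's can occur.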

In an email to The Verge, however, Google claims that its “team of child safety experts reviews flagged content for accuracy and consults with pediatricians to help ensure we’re able to identify instances where users may be seeking medical advice.” Unfortunately, although that description fits Mark's case exactly, Google still flagged the father, which led to a police investigation and cost him his Google account along with other important cloud files. On a side note, a Google spokesperson explained to the NYT that scanning only happens when the user takes an “affirmative action,” such as backing up photos to Google's cloud.

According to Google's transparency report, the company has already reported a total of 621,583 cases of CSAM to the NCMEC CyberTipline, resulting in law enforcement being alerted to 4,260 possible CSAM issues (and 270,000 accounts being disabled), including Mark's case. However, after an investigation in February 2021 found that no violations had occurred, Mark's case makes us question Google AI's ability to accurately flag child abuse or exploitation. And given that a false flag can mean serious consequences, such as losing your cloud data and account and being investigated by the authorities, being accused of a crime by Google's tool is no small inconvenience. Worse, it raises privacy concerns about what you can safely store on your phone or upload to the cloud, whether for medical or other non-offensive reasons.
