Instagram failed to act on 90% of abusive direct messages (DMs) reported by high-profile women, research has found. This shows how the platform systematically fails to protect women despite its claims that it acts on hate speech such as misogyny, on nudity or sexual activity, and on threats of violence.
The study by the Center for Countering Digital Hate (CCDH) focused on DMs, a space where online abuse is under-studied and typically unregulated. Specifically, CCDH worked with five high-profile women on a series of case studies. Together, these women have a total of 4.8 million followers on Instagram.
Of the 8,717 DMs analyzed, researchers found that 1 in 15 sent to the participants violated Instagram's rules on harassment and abuse. They recorded 125 examples of image-based sexual abuse (IBSA), and 1 in 7 voice notes sent to the women was abusive. Further, the platform allows strangers to contact women with voice calls.
The research also found that the platform failed to act on 9 out of 10 abusive DMs reported using Instagram's own tools. It likewise failed to act on 90% of accounts that used DMs to send violent threats, and it took no action on any image-based sexual abuse within 48 hours.
According to the study, the systemic problems the platform must fix include the requirement that users open "vanish mode" messages before they can report them. The study also found that the platform's "hidden words" feature does not effectively hide abuse, and that downloading evidence of abusive messages for reporting can be difficult for users.
The study reported that despite its existing safety measures, the platform systematically fails to enforce sanctions against those who violate its policies. CCDH asserted that abuse and harmful content are allowed to thrive on the platform because of its negligence and disregard for users, making Instagram safer for abusers than for their targets.
In the absence of effective tools to protect them from harmful content, women are forced to carry the burden of protecting themselves from such abuse. This includes trying to avoid provoking abusers with their content and minimizing their own visibility.
In fact, this is one reason some women who were approached declined to participate in the study. Some were hesitant to join, worried that speaking publicly about online misogynist abuse would make them the target of even more of it. Others, who use Instagram to promote their brand or commercial work, were concerned that the platform would sanction them for speaking out and end up deprioritizing their posts.
Based on its findings, CCDH recommends that the platform fix its broken systems for reporting abuse and close the routes that allow strangers to abuse women. The report argues that Instagram has both the moral imperative and the resources to address these issues. It also points to an impending legal obligation, as the United Kingdom is set to prohibit sending explicit images without the recipient's consent.
The participants, who granted researchers access to their DMs through data downloads or direct access to their accounts, include actress Amber Heard, broadcaster Rachel Riley, activist Jamie Klingler, journalist Bryony Gordon, and Burnt Roti magazine founder Sharan Dhaliwal.
Meta, which owns Instagram, disagreed with many of CCDH's conclusions. Cindy Southworth, the company's head of women's safety, said Meta agrees that harassment of women is unacceptable, which is why it does not permit gender-based hate or threats of sexual violence. She noted that the platform launched stronger protections for female public figures last year.