Microsoft's Brad Smith explains how Gab came to his notice





Earlier this year, Microsoft made headlines for its content moderation policies when it sent a message to Gab.ai warning the company about content hosted on its platform.

In an interview with The Verge, Microsoft’s Brad Smith explained the behind-the-scenes decision-making that went into sending the message.

“Literally in that case, in all candor, somebody in our Azure support area in India had received an email from somebody who is in the consulting business who had heard from another company, expressing concerns about some content on Gab.ai,” Smith said. “While we were sleeping on the West Coast of the United States, an employee in India had sort of turned out an email that went to Gab that said, ‘We’ve spotted some content, and under our policy, you have to address it in 48 hours or you risk being cut off.’”

Microsoft’s executives did eventually review the decision, and it stood. The contentious content in question was a set of anti-Semitic and violent posts. Because Gab took them down, it was spared the banhammer.

Smith wants incidents like this to remain one-offs, preferring instead to establish a consistent set of rules that can be applied without executive intervention, going on to say:

“Our goal is to develop a set of principles. And so at a high level, we work to understand these issues, develop a principled approach, stress-test the principles somewhat, and then empower people to apply them.”

Source: The Verge
