Amazon and IBM recently jumped on the Black Lives Matter bandwagon, with Amazon announcing a one-year moratorium on sales of facial recognition technology to law enforcement agencies in the US, saying it was awaiting federal regulations to ensure those agencies do not abuse the fallible technology.

Today Microsoft made a similar announcement, with Microsoft president Brad Smith saying:

“We will not sell facial-recognition technology to police departments in the United States until we have a national law in place, grounded in human rights that will govern this technology.”

On top of federal regulation, Microsoft said it will conduct its own review, going “even beyond what we already have,” to ensure the technology is used appropriately wherever it is deployed.

“The bottom line for us is to protect the human rights of people as this technology is deployed,” Smith said.

For some, it may look like Microsoft is following Amazon’s lead, but unlike Amazon, which has been selling its Amazon Rekognition software to law enforcement for years, Microsoft has always barred such sales.

In 2019, Microsoft received a request from a California law enforcement agency to install facial recognition technology in officers’ cars and body cameras. Since Microsoft’s facial recognition technology could lead to more cases of mistaken identity among women and minorities, broad deployment would put many innocent people at risk. Taking an ethical stand, Microsoft declined to sell its technology.

In the same year Amazon shareholders defeated a motion to halt Amazon’s sale of its facial recognition technology to US police forces. Amazon itself had tried to block the votes but was told by the Securities and Exchange Commission that it did not have the right to do so.

Microsoft has long been vocal about the need for government regulation and responsible industry measures to address advancing facial recognition technology. The company has also published the principles that guide how it develops and deploys the technology.

These included:

  1. Fairness. We will work to develop and deploy facial recognition technology in a manner that strives to treat all people fairly.
  2. Transparency. We will document and clearly communicate the capabilities and limitations of facial recognition technology.
  3. Accountability. We will encourage and help our customers to deploy facial recognition technology in a manner that ensures an appropriate level of human control for uses that may affect people in consequential ways.
  4. Non-discrimination. We will prohibit in our terms of service the use of facial recognition technology to engage in unlawful discrimination.
  5. Notice and consent. We will encourage private sector customers to provide notice and secure consent for the deployment of facial recognition technology.
  6. Lawful surveillance. We will advocate for safeguards for people’s democratic freedoms in law enforcement surveillance scenarios and will not deploy facial recognition technology in scenarios that we believe will put these freedoms at risk.

Despite these efforts, the company appears not to have received much credit for its stance, with Joy Buolamwini, an MIT Media Lab researcher who studied Amazon’s Rekognition software, saying: “Microsoft also needs to take a stand.”

Fortunately for all, Microsoft made its own decision well before the current crisis.