Facial recognition systems are becoming ubiquitous. From the iPhone X's Face ID, to Windows Hello, to Facebook's automated tagging, it is getting easier for companies — and anyone with access to their technology — to track you simply by your face. And unlike fingerprints, you don't have ten faces you can swap between for authentication purposes.
Microsoft's Brad Smith has noted a number of ways such technology could be put to good use in the real world: finding a lost child, stopping a terrorist before they can act, or helping the blind identify friends, with the camera acting as a second set of eyes.
The converse is also true. A paedophile could just as easily track a missing child, a kidnapper could stalk a target, and a state could identify dissidents and punish them by lowering their social credit score.
Microsoft is aware that all this sounds like something out of 1984, but the lag between science fiction and science fact has long since narrowed: technology has kept advancing, while human nature has yet to do so.
The firm hopes to start a conversation about the role facial recognition technology should play in everyday society.
Microsoft has raised a few points that it believes the tech sector should take into consideration when developing this technology.
First, it’s incumbent upon those of us in the tech sector to continue the important work needed to reduce the risk of bias in facial recognition technology. No one benefits from the deployment of immature facial recognition technology that has greater error rates for women and people of color. That’s why our researchers and developers are working to accelerate progress in this area, and why this is one of the priorities for Microsoft’s Aether Committee, which provides advice on several AI ethics issues inside the company.
As we pursue this work, we recognize the importance of collaborating with the academic community and other companies, including in groups such as the Partnership for AI. And we appreciate the importance not only of creating data sets that reflect the diversity of the world, but also of ensuring that we have a diverse and well-trained workforce with the capabilities needed to be effective in reducing the risk of bias. This requires ongoing and urgent work by Microsoft and other tech companies to promote greater diversity and inclusion in our workforce and to invest in a broader and more diverse pipeline of talent for the future. We’re focused on making progress in these areas, but we recognize that we have much more work to do.
Second, and more broadly, we recognize the need to take a principled and transparent approach in the development and application of facial recognition technology. We are undertaking work to assess and develop additional principles to govern our facial recognition work. We’ve used a similar approach in other instances, including trust principles we adopted in 2015 for our cloud services, supported in part by transparency centers and other facilities around the world to enable the inspection of our source code and other data. Similarly, earlier this year we published an overall set of ethical principles we are using in the development of all our AI capabilities.
As we move forward, we’re committed to establishing a transparent set of principles for facial recognition technology that we will share with the public. In part this will build on our broader commitment to design our products and operate our services consistent with the UN’s Guiding Principles on Business and Human Rights. These were adopted in 2011 and have emerged as the global standard for ensuring corporate respect for human rights. We periodically conduct Human Rights Impact Assessments (HRIAs) of our products and services, and we’re currently pursuing this work with respect to our AI technologies.
Microsoft believes that government regulation by an elected body provides the right forum for this conversation, rather than leaving unchecked technology organisations with differing motives to regulate themselves. As Facebook's privacy fiasco shows, who's to say the same can't happen with something as important as facial recognition data?
“As we think about the evolving range of technology uses, we think it’s important to acknowledge that the future is not simple,” Smith said. “A government agency that is doing something objectionable today may do something that is laudable tomorrow. We therefore need a principled approach for facial recognition technology, embodied in law, that outlasts a single administration or the important political issues of a moment.”