Back in November last year, Microsoft signed a $480 million contract to supply prototype augmented reality systems to the US Army for use in combat missions and training. A group of Microsoft employees is now protesting this deal, claiming that Microsoft is providing weapons technology to the army. Similar protests have happened in the past at Amazon and Google for the same reason. While Google decided to back out of its project, Amazon clearly stated that it would continue to provide technologies to the military. I personally think these kinds of arguments from employees are absurd, for the following reasons:
- Technology is an enabler; it can be used for both good and bad purposes. It is up to the army, and the government behind it, to decide how the technology should be used.
- Yes, HoloLens tech can be used in real war scenarios. But instead of focusing on the enemies who may be killed, consider the lives that will be saved by using HoloLens. Soldiers can use the same AR technology to avoid risks, escape from danger, and so on.
- We have seen several incidents in the past where terrorists were caught using iPhones, Google Maps, WhatsApp, and similar products. You can't expect Apple, Google, and Facebook to stop developing their products just because those products can be used for terrorist activities. The developers behind Google Maps shouldn't feel guilty that terrorist attacks may be planned using their product.
- Consider Boeing, one of the largest manufacturers of commercial jetliners such as the 747, 777, and 787. Millions of people fly on its aircraft every day, travelling across the world. Boeing also supplies aircraft to the military for transporting personnel. It would be absurd for Boeing employees to protest that the jetliners they manufacture should not be sold to the US Army because they may be used to transport soldiers who, in turn, will kill enemies during war.
- If Microsoft were developing some sort of bio-weapon that could kill hundreds of people, then the arguments from this group of employees would be valid. You can't directly compare technology platforms with weapons.
You can read the full letter from the group of employees to Microsoft below.
Dear Satya Nadella and Brad Smith,
We are a global coalition of Microsoft workers, and we refuse to create technology for warfare and oppression. We are alarmed that Microsoft is working to provide weapons technology to the U.S. Military, helping one country’s government “increase lethality” using tools we built. We did not sign up to develop weapons, and we demand a say in how our work is used.
In November, Microsoft was awarded the $479 million Integrated Visual Augmentation System (IVAS) contract with the United States Department of the Army. The contract’s stated objective is to “rapidly develop, test, and manufacture a single platform that Soldiers can use to Fight, Rehearse, and Train that provides increased lethality, mobility, and situational awareness necessary to achieve overmatch against our current and future adversaries.” Microsoft intends to apply its HoloLens augmented reality technology to this purpose. While the company has previously licensed tech to the U.S. Military, it has never crossed the line into weapons development. With this contract, it does. The application of HoloLens within the IVAS system is designed to help people kill. It will be deployed on the battlefield, and works by turning warfare into a simulated “video game,” further distancing soldiers from the grim stakes of war and the reality of bloodshed.
Intent to harm is not an acceptable use of our technology.
We demand that Microsoft:
1) Cancel the IVAS contract;
2) Cease developing any and all weapons technologies, and draft a public-facing acceptable use policy clarifying this commitment;
3) Appoint an independent, external ethics review board with the power to enforce and publicly validate compliance with its acceptable use policy.
Although a review process exists for ethics in AI, AETHER, it is opaque to Microsoft workers, and clearly not robust enough to prevent weapons development, as the IVAS contract demonstrates. Without such a policy, Microsoft fails to inform its engineers on the intent of the software they are building. Such a policy would also enable workers and the public to hold Microsoft accountable.
Brad Smith’s suggestion that employees concerned about working on unethical projects “would be allowed to move to other work within the company” ignores the problem that workers are not properly informed of the use of their work. There are many engineers who contributed to HoloLens before this contract even existed, believing it would be used to help architects and engineers build buildings and cars, to help teach people how to perform surgery or play the piano, to push the boundaries of gaming, and to connect with the Mars Rover (RIP). These engineers have now lost their ability to make decisions about what they work on, instead finding themselves implicated as war profiteers.
Microsoft’s guidelines on accessibility and security go above and beyond because we care about our customers. We ask for the same approach to a policy on ethics and acceptable use of our technology. Making our products accessible to all audiences has required us to be proactive and unwavering about inclusion. If we don’t make the same commitment to be ethical, we won’t be. We must design against abuse and the potential to cause violence and harm.
Microsoft’s mission is to empower every person and organization on the planet to do more. But implicit in that statement, we believe it is also Microsoft’s mission to empower every person and organization on the planet to do good. We also need to be mindful of who we’re empowering and what we’re empowering them to do. Extending this core mission to encompass warfare and disempower Microsoft employees, is disingenuous, as “every person” also means empowering us. As employees and shareholders we do not want to become war profiteers. To that end, we believe that Microsoft must stop in its activities to empower the U.S. Army’s ability to cause harm and violence.
Microsoft offered the following statement in response to the above letter.
“We gave this issue careful consideration and outlined our perspective in an October 2018 blog. We always appreciate feedback from employees and provide many avenues for their voices to be heard. In fact, we heard from many employees throughout the fall. As we said then, we’re committed to providing our technology to the U.S. Department of Defense, which includes the U.S. Army under this contract. As we’ve also said, we’ll remain engaged as an active corporate citizen in addressing the important ethical and public policy issues relating to AI and the military.”
Please provide your views on this issue in the comments section below.