On Wednesday, at the United Nations Convention on Certain Conventional Weapons in Geneva, a panel of government experts debated policy options regarding lethal autonomous weapons.
Dutch NGO Pax produced a report surveying major players in the sector on their views of lethal autonomous weapons. It categorised companies on three criteria: whether they were developing technology potentially relevant to deadly AI, whether they were working on related military products, and whether they had committed to abstaining from contributing in the future.
By these criteria, Microsoft scores rather highly in the birthplace-of-Skynet rankings. The company has invested extensively in developing artificial intelligence products, has very close relationships with the US military, and Satya Nadella has committed to providing the military with its very best technology. While Microsoft has stopped short of explicitly developing AI for military purposes, we do know it has developed a version of the HoloLens for the military that is specifically designed to increase the lethality of soldiers in the field.
For Microsoft, the US military is likely to be a key growth sector.
Though the contract has not yet been awarded, Microsoft is considered likely to win the $10 billion Department of Defense JEDI cloud contract.
On Wednesday, Stuart Russell, a computer science professor at the University of California, Berkeley, told AFP:
“Anything that’s currently a weapon, people are working on autonomous versions, whether it’s tanks, fighter aircraft, or submarines.”
“Autonomous weapons will inevitably become scalable weapons of mass destruction, because if the human is not in the loop, then a single person can launch a million weapons or a hundred million weapons.”
“The fact is that autonomous weapons are going to be developed by corporations, and in terms of a campaign to prevent autonomous weapons from becoming widespread, they can play a very big role.”
Microsoft isn’t the only company considered “high risk”: of the more than 50 companies surveyed across over 12 countries, 21 came out as “high risk”, including Amazon, Microsoft, Intel, and Palantir. Palantir’s $800 million contract to develop an AI system “that can help soldiers analyse a combat zone in real-time” put it in the top category.
Some AI-driven autonomous weapons are already in active use, such as Israel’s Harpy drone. New categories of autonomous weapons could also emerge, and their use of facial recognition technology could raise further ethical issues.
“With that type of weapon, you could send a million of them in a container or cargo aircraft — so they have destructive capacity of a nuclear bomb but leave all the buildings behind.”
“(They could) wipe out one ethnic group or one gender, or using social media information you could wipe out all people with a political view.”
Russell strongly believes that an international ban on lethal AI should be put in place:
“Machines that can decide to kill humans shall not be developed, deployed, or used.”
Google was one of the seven companies found to be engaging in “best practice”. As well as dropping out of the race for the JEDI cloud contract, the company declined to renew its Pentagon contract for “Project Maven”, which used machine learning to distinguish people and objects in drone videos.
Maybe it’s time Microsoft signed up to the “don’t be evil” mantra, too.