Published on 13 April 2018

CSR

Artificial Intelligence: Google employees denounce collaboration on Pentagon research project

Google employees are launching a rebellion, and it could hurt the Internet giant's reputation. They are denouncing an artificial intelligence research program conducted with the US military, fearing that this work will be used to improve the effectiveness of combat drones.

Google is collaborating with the US military on an artificial intelligence program that could be used on combat drones.
@GeneralAtomics

It's bad news for Google, and it was sparked internally. Some 3,100 of the Silicon Valley giant's 70,000 employees have just sent a letter to CEO Sundar Pichai, published by the New York Times, denouncing the Maven project, which was revealed last March. The project is a collaboration between Google and the Pentagon on artificial intelligence.

This program aims to use artificial intelligence and machine learning to improve the interpretation of video imagery. It could be used to optimise drone missile targeting systems, according to the newspaper. "We believe that Google should not get involved in matters of war," say the authors of the letter. They then go on to mention the company slogan: "Don’t be evil".

Google’s reputation in danger

According to the authors of the letter, "this plan will irreparably damage Google's brand and its ability to compete for talent. Amid growing fears of biased and weaponized AI, Google is already struggling to keep the public's trust." The fact that Microsoft and Amazon are also involved in the project does not, in their view, reduce the risk for Google. The letter continues: "Recently, Googlers voiced concerns about Maven internally. Diane Greene responded, assuring them that the technology will not 'operate or fly drones' and 'will not be used to launch weapons.' While this eliminates a narrow set of direct applications, the technology is being built for the military, and once it's delivered it could easily be used to assist in these tasks." For its part, the U.S. Department of Defense has confirmed that these technologies "will not be used to fly drones (or) launch missiles."

But these assurances were not enough to convince the employees, who argue that technology built for the military can easily be put to other uses once delivered. "We cannot outsource the moral responsibility of our technologies to third parties," they add.

Strong links with the Pentagon

In fact, links between Google and the military are not new. In 2013, the group acquired Boston Dynamics, a designer of military robots, before selling the company in 2017. Google executives have also joined Pentagon ranks: Eric Schmidt, former Google CEO and a board member of its parent company, Alphabet, is now a US Department of Defense advisor, as is Google vice president Milo Medin.

In the summer of 2017, leading tech entrepreneurs raised the alarm about the military use of artificial intelligence. In a letter to the United Nations, 116 of them, including Elon Musk, founder of Tesla and SpaceX, and Demis Hassabis and Mustafa Suleyman, founders of DeepMind (a Google subsidiary), called on the international organisation to take action on autonomous weapons (weapons not directed by a human being).

According to them, "lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend."

100 million euros for military AI in France

France, which recently launched a major artificial intelligence plan, has announced the creation of a Defence Innovation Agency within the Ministry of the Armed Forces. According to the French Minister of the Armed Forces, Florence Parly, one of its priorities will be to build up skills in artificial intelligence, with an operating budget of 100 million euros.

One of its goals is to develop the combat aircraft of the future. Parly insists, however, that humans will remain at the centre of the system: AI should relieve soldiers of "the most tedious or dangerous tasks" while keeping the human "at the heart of the decision-making."

Ludovic Dupin @LudovicDupin


© 2018 Novethic – All rights reserved

