Adding ethics to artificial intelligence (AI) is no easy task at Google. The tech giant's employees have repeatedly raised alarms about potentially harmful uses of the technology. It is for this reason that Kent Walker, Google's Vice President for Global Affairs, announced the creation of an external advisory board on artificial intelligence ethics at the end of March.
The Advanced Technology External Advisory Council (ATEAC) brought together eight specialists from a range of backgrounds. "The potential harms of AI are not evenly distributed, and follow historical patterns of discrimination and exclusion," said the group, which was to assess issues such as hiring, over-surveillance and smart weapons.
But ATEAC imploded just one week after its creation. More than 800 Google employees signed a petition demanding the departure of Kay Cole James, a politician known for her anti-gay and anti-transgender positions. Another member, Dyan Gibbens, president of the drone start-up Trumbull Unmanned, was criticized for her links with the military.
Facebook's new commitment
This is a huge blow for the "nice giant" of Silicon Valley, which had been hoping to turn its image around after the bad buzz of last summer. At that time, 4,000 employees demanded an end to Project Maven, which was to use AI to optimise tracking systems for American military drones. Google reluctantly gave in to their demands.
Despite the failure of Google's ethics board, the episode has not stopped others from recognising the scale of the problem. The use of artificial intelligence has become a question of corporate social responsibility, as digital giant Facebook has realised. The company, widely criticised for its data management practices, announced the creation of an institute on artificial intelligence ethics at the end of January.
Based in Munich, in cooperation with the city's Technical University, the centre will receive $7.5 million in funding from Facebook, but will not host any Facebook employees. The goal is to conduct "independent and scientific research to provide knowledge and advice to society, industry, legislators and policy makers in the private and public sectors," the company says.
An IPCC for artificial intelligence
The French group Thales, a giant in the electronics, transportation and defence sectors, is also tackling AI ethics. This is an all the more sensitive move for a company specialised in defence activities. It has adopted an "Ethics and Digital Transformation" charter focused on AI. "At Thales, we have drawn red lines, and killer robots are one of them; Thales will not go there," said CEO Patrice Caine. Beyond responsibility, Thales believes this line of conduct is essential to attracting top specialists in the field.
In any case, if companies do not take on these issues themselves, supranational bodies will. At the beginning of March, UNESCO launched a working group to draw up a common framework on AI ethics, to be reflected in the legislation of signatory countries. Audrey Azoulay, Director-General of UNESCO, explained: "It is certainly premature to want to regulate [AI] globally, but it is high time to define a foundation of ethical principles that would frame this disruption."
Last December, France and Canada jointly launched the International Panel on Artificial Intelligence (IPAI). Emmanuel Macron wants to reproduce the model of the IPCC, the expert panel on climate change. "We invite researchers, businesses, international organisations, and countries that share our values to join us. Together, we will ensure that artificial intelligence serves the interests of humankind," says the group.
Ludovic Dupin, @LudovicDupin