Why artificial intelligence needs an ethical framework

Artificial Intelligence (AI) is not value-neutral. Its use raises concerns about privacy and data protection, gender and cultural biases, increased digital exposure for at-risk populations, and an escalation of misinformation and hate speech. It is also being used to support decision-making in sensitive areas such as justice. It is therefore necessary to ensure that AI is framed in a way that aligns with the highest ethical values. Given AI’s global nature, such guidance must be global as well. This session looked at inspiring initiatives taking shape, such as UNESCO’s elaboration of the first global standard-setting instrument on the ethics of artificial intelligence and the work undertaken by the Global Partnership on Artificial Intelligence.

Author: Sciences Po student Auriane Mattera summarizes the debate session of the third edition of the Paris Peace Forum

Debate title: The values of artificial intelligence technology: Bringing ethics to the table

Date: 13 November 2020

Artificial intelligence raises ever more ethical concerns as its uses expand into public health and safety missions. But bringing ethics into the race for AI is as pressing as it is challenging.

Privacy and data protection, gender and cultural biases, increased digital exposure for populations at risk, and an escalation of misinformation all highlight the imperative need for ethical frameworks and regulations in the field of artificial intelligence (AI). During a high-profile debate of the 2020 Paris Peace Forum, expert speakers agreed on the crucial value of “human-centered approaches to AI”, as emphasized by Borut Pahor, President of Slovenia, who stressed the vulnerability of our democracies in the face of algorithms. However, as he reminded us, it is “up to us how we use those technologies”.

It is a viewpoint shared by Gabriela Ramos, Assistant Director-General for the Social and Human Sciences of UNESCO, especially in these times of constant change. As she pointed out, “We need to take advantage of the fact that Covid-19 is providing us with these environments in which non-regulatory cooperation needs to be put together”. Indeed, the pandemic has paved the way for the elaboration of UNESCO’s ethical framework and assessment tool for AI. “Technologies are great”, she insisted, “but they need to be developed in a way that will enhance human rights and human dignity”.

It is a mission to which Stuart Russell, Professor of Computer Science at the University of California, Berkeley, is no stranger. One might expect corporations to resist ethical frameworks that could cut into their profits. However, Professor Russell stated that we need to “reject the idea that corporations accept or reject laws […] we get to make the laws, and we don’t need their permission to do so”. Digital impersonation, privacy, user algorithms, facial recognition, and misinformation are all issues for which market solutions will not emerge and for which regulation would be not only necessary but welcome: “It is also the case that some corporations want regulations with real teeth,” he added, “because when you don’t have rules, if you don’t act unethically, you go out of business”.

Governments and companies alike thus stand to gain, not lose, from ethical AI frameworks and regulations. Bertrand Braunschweig, Coordinator of the French national AI research program at Inria, emphasized that “EU countries are motivated to complete recommendations in the framework of the Global Partnership on Artificial Intelligence (GPAI)”. On this matter, Stéphanie Shaelou, Professor of European Law and Reform and Head of the School of Law of the University of Central Lancashire in Cyprus, highlighted the importance of hybrid approaches combining legal rules and principles. Not only are such approaches tools for governance, they are also central to any reflection on AI. We must indeed remind ourselves that “on the topic of AI, it is complicated to distinguish legal, ethical and social principles, and it is dangerous to do so”. Right of redress, risk assessment, compliance assessment, certification, and standardization must all incorporate ethics-by-design approaches, in which ethical frameworks are written into data-collecting algorithms. Far from being futuristic, regulatory efforts must be realistic and robustly enforced, requiring coercive powers at the level of international organizations.

Such important issues are not debated enough. Inherently political, AI ethics compels us to define what we mean by “ultimate human benefit” or “self-awareness”. Much-needed ethics-by-design approaches will require us to embark on a lifelong journey and dialogue with artificial intelligence. In this race, investing in lawmakers and in governments’ capacity to undertake this work is key to deepening our understanding.
