Before We Get to Governance: Building Trust in Artificial Intelligence

Artificial intelligence is about to impact society worldwide in countless ways. Are we ready for a discussion about the global governance of AI? Can we be, without first addressing the public anxiety?

Debate name: Bits, Bytes and Governance Challenges: The Future of Artificial Intelligence

Date: 12 November 2018

This topic emerged when the Paris Peace Forum gathered a diverse group of practitioners at its inaugural event in late 2018. Panelists from across sectors spoke of the need to improve awareness and address sensitivities head-on. If we cannot defuse this distrust, we will miss a chance to look at AI as a positive force. Understanding people’s fears and engaging with them is an essential part of this process.

What’s the Problem?

Artificial intelligence is on the cusp of pervading our society in countless ways. A geopolitical race for dominance is already underway, and talk of global rules of the road is starting to emerge. But there is a sense of growing public nervousness as awareness spreads of how AI will affect people’s daily lives.

These issues are moving fast. Governments face pressing decisions on practical matters, like where and how to integrate AI into the delivery of public services. They also need to think long term about what type of AI standards and practices they want to put in place for the years to come. But the transition is challenging. With AI now beginning to touch citizens’ daily lives more directly, there is a sense of urgency around issues of fairness, transparency, and accountability.

If the goal is to use AI to advance social good and unite an increasingly fractured society, it is imperative to deal with the public trust deficit around AI. There is also a risk that if leaders move forward on formal, intergovernmental decisions about governing AI at the global level without engaging and addressing this public anxiety, we could see a backlash much like those of recent years over economic globalization. This is a topic ripe for cross-sector and cross-border collaboration. How do we make that happen?

“If governments start to use AI, we have to upskill the general public to be able to understand it, trust it, and use it. We are in a really interesting moment. We [need to] be able to show that we are trustworthy, and that we deserve that trust, in order for citizens to want to use these services in their lives.”

– Tabitha Goldstaub, British Government Council on AI

At the End of the Tunnel

Right now, the overall public perception of artificial intelligence is still fearful and generally negative: a sense of losing control. Look at the way it is conveyed in pop culture and films. But panelists such as UAE Minister Omar Al Olama and Ross Lajeunesse of Google emphasized the phenomenal potential for people’s individual lives and society as a whole. A concerted effort is needed to change the narrative.

Tabitha Goldstaub of the British Government Council on AI took this point further, arguing that it is not just a matter of narrative; it is also a matter of refocusing priorities. We should be thinking in terms of identifying societal needs first and then looking at how AI tools can serve those needs. The needs should drive the science rather than vice versa.

Her approach was based on the conclusion that: “Right now the end goal is not in favor of the people. [And they know it.] If we solve that, we can solve a lot of the problems around trust.” This type of approach not only helps to advance good for society, but it can actually help to defuse distrust along the way. It essentially makes citizens the centerpiece of the transition ahead. And it’s not just about how to provide services, or address potential job losses, or even adapt to new mechanisms of daily life. Fundamentally, “looking at AI holistically from the start,” as Minister Al Olama put it, can help open up possibilities for the coming innovations to address some of the most challenging problems for society — locally, nationally, and globally.

Bits, Bytes and Governance Challenges: The Future of Artificial Intelligence session at the 2018 Paris Peace Forum

Who’s on the Hot Seat: Governments

Governments cannot afford to be behind the curve on AI, just muddling through in response mode. They know they have no choice but to address these issues. Understanding popular fears and engaging with them must be a priority if governments are going to succeed in building community trust in artificial intelligence.

Panelists flagged that it is not only a matter of working through the implementation of artificial intelligence in public services, or the regulatory decisions ahead, but of how governments fundamentally take on this challenge of transition, adaptation, and education.

Some concrete ideas were proposed in the course of the roundtable discussion. An interesting initiative already underway was highlighted when Rashida Richardson of the AI Now Institute spoke about their work on Algorithmic Impact Assessments. The project studies problematic use of automated decision systems by governments, collecting data and looping the analysis back to the providers. It not only gives governments a resource to improve efficacy; it also helps them understand deeper sensitivities — the “why” of AI skepticism — by asking those directly affected by the services what their needs and concerns are. As a byproduct, it inspires greater trust by engaging those who are, or would be, affected by AI services.

“It’s not simply that governments don’t have enough technical knowledge about AI technologies; sometimes the government isn’t best informed about the issues [from those impacted by the services]…There is a deficiency on both ends.”

– Rashida Richardson, AI Now Institute

Another idea emerged in the discussion. Carlos Moedas, currently the European Commissioner for Research, Science, and Innovation, and previously a government official in Portugal, pointed out that new technologies could be used to improve officials’ own internal decision-making processes. He argued that governments still don’t understand how to use data for their own internal benefit.

“Governments are ten steps behind. The decisions we take today in most governments are based on gut feelings or just talking to people…Decisions even with a little bit of data would be better.”

– Carlos Moedas, European Commissioner for Research, Science, and Innovation

Who’s on the Hot Seat: Technology Companies

It’s not only up to governments to guide the population through this transition. The big technology companies know that if they want to arrest the downward spiral of confidence, they need to do much more to put new initiatives in place, and fast. This was the sense from representatives of the sector present at the Forum. They recognized that the sector can help combat the public impression of arbitrariness in AI by better explaining how algorithms make decisions (the factors, choices, and probabilities) and demystifying the process. It would also be useful to raise awareness that machine learning models are only as effective as the data they are trained on: if the underlying data is biased, structural inequalities and unfair biases are repeated and amplified through the AI application. “It needs to be explained in layman’s terms,” chimed in the EU Commissioner for Research, Science, and Innovation.

There are further opportunities to address trust issues by adapting how tech firms research and develop their products. As one example, the Facebook representative at the Forum explained the company’s efforts to mitigate biases upstream in product development. It has already shifted to integrating stakeholder insight into internal processes much earlier in the product development stage. Not only does this create a deeper sense of engagement, but it also improves developers’ ability to create products that can be applied more responsibly and ethically. And because AI and machine learning are so influenced by human biases and flaws, when researchers and developers can identify these issues and address them early on, they can deliver more reliable products (and hence improve confidence). Ensuring more diverse hiring of the engineers and developers who create the algorithms doesn’t hurt either.

Facebook presented the results of its AI and Ethics research, one of the projects selected by the Forum, setting the scene for this cutting-edge debate on AI governance.

What’s Next?

Breaking down the myths and misunderstandings is urgent. The tech sector knows it. Governments know it. But still, they are mostly trapped in crisis-response mode. It’s time to go further.

All stakeholders need to get actively involved in educating the public and raising awareness that artificial intelligence can be a positive force rather than a doomsday scenario. Multi-stakeholder initiatives around tech issues have been in place for years, and they are getting better at diversifying the representation at the table. But as a couple of panelists at the Forum’s debate warned, if we don’t improve how citizens are empowered in these conversations, we may miss the mark both in building trust and in creating products that use AI to advance social needs in a fair and transparent way.

“Since this is just starting, every single individual can be a part of it. That’s the beauty of it. [Whether you are a tech expert or not] if you are trying to understand it, if you are trying to be a part of this future, you will have a place…The challenge now is getting people to rally behind that.”

– Omar Al Olama, Minister of State for Artificial Intelligence, United Arab Emirates

This is an ideal issue for international dialogue. Campaigns to build confidence in AI products, as well as literacy initiatives to educate the public and untangle the myths, are relevant worldwide. It would be useful to exchange best practices across regions and sectors to keep pace with AI applications and the challenges of this societal transformation. (For example, cross-border exchange of impact assessments such as AI Now’s is a useful way to share insight about public concerns and ideas for combating distrust.)

Pervasive artificial intelligence technology will cut across borders one way or another. We need to work with stakeholders on creating an open, global narrative that advances positive rather than negative uses. The time to build public confidence is now.


Watch the full debate

Bits, Bytes and Governance Challenges – The Future of AI

Panel Participants

The Forum thanks panel participants: 

  • Antoine Bordes, Director of AI Research, Facebook
  • Tabitha Goldstaub, Co-Founder, Cognition X and Chair, British Government Council on Artificial Intelligence
  • Ross Lajeunesse, Global Head of International Relations, Google
  • Carlos Moedas, European Commissioner for Research, Science, and Innovation
  • Omar Al Olama, Minister of State for Artificial Intelligence of the United Arab Emirates
  • Rashida Richardson, Director of Policy Research, AI Now Institute

The panel was moderated by Martin Tisné, Managing Director, Luminate.


This is a publication of the Paris Peace Forum reflecting the debates at the Forum’s inaugural session in November 2018. It does not necessarily represent the conclusions of each individual participant.

Documents

Download this publication in PDF format


About the Paris Peace Forum

The Paris Peace Forum is an annual event that aims to advance new rules and solutions to address the global challenges of our time. All actors of global governance are invited to join the Paris Peace Forum from 11 to 13 November in Paris, France.
