Along with Dr. Christian B. Westermann, Leader Data & Analytics and Partner at PwC Switzerland, Dr. Daniel Stauffacher, President of ICT4Peace and the Zurich Hub for Ethics and Technology (ZHET), was invited by the WEF Global Shapers Zurich Chapter to provide expert input to the Pre-Davos Workshop for the WEF Global Shapers invited to attend the WEF Annual Meeting 2020 in Davos. The session was prepared by Kaspar Etter and moderated by Nicolas Zahn.
The title of the session was: “Technology Governance: How can we accelerate the societal benefits and ensure the responsible use of advanced technologies such as AI?”
Some of the questions that were discussed were as follows:
- What do you see as the current dominant governance model when it comes to AI governance?
- What questions are forgotten? Which ones are overrated? Are we discussing the right issues?
- What are the major learnings from cybersecurity governance? What is going well, and what is not?
- Which recent trends make you optimistic about AI governance, and which pessimistic?
- Where do you see the role of the regulator when it comes to AI?
- Are you worried about AI arms races (when it comes to LAWS or national intelligence, e.g. towards AGI)?
- What can/should be done to strengthen international cooperation?
- A notorious problem of AI is that peaceful/humanitarian applications are often very close to military applications (unlike chemical and biological weapons). For example, a drone that is good at search and rescue can easily be modified to be good at search and destroy. How can we reap the benefits while avoiding the downsides under such circumstances?
- What can each stakeholder group (private sector, academia, civil society, public sector) do to work towards responsible AI? Where do you see most of the responsibility?
In the discussion around creating a future principled and inclusive AI governance, Daniel Stauffacher invited the WEF Global Shapers to look seriously at the multitude of already existing principles of and guidelines for responsible AI established by companies, academia and civil society, but also by governments and intergovernmental organisations, and to take them as a starting point for building a global and inclusive AI governance system.
He referred to three very useful analyses of over 80 established principles of responsible AI:
- Annex I in Regina Surber’s Paper on Artificial Intelligence: Autonomous Technology (AT), Lethal Autonomous Weapons Systems (LAWS) and Peace Time Threats (ICT4Peace Foundation, Zurich, February 2018)
- The global landscape of AI ethics guidelines by Anna Jobin, Marcello Ienca and Effy Vayena, ETH Zurich (2019)
- Principled AI: Mapping Consensus in Ethical and Rights-based Approaches to Principles for AI by Jessica Fjeld, Nele Achten, Hannah Hilligoss, Adam Nagy and Madhu Srikumar, all Harvard Berkman Klein Center (2020).
In the heated debate among the WEF Global Shapers as to whether legally binding regulations for responsible AI are needed, or whether they would hinder AI’s development and its applications for economic and social benefit to humanity, Daniel Stauffacher recommended looking at examples where the international community has applied “smart regulations” (see also page 2 of the report by the Swiss Government Expert Group on AI, «Internationale Gremien und künstliche Intelligenz» (“International Bodies and Artificial Intelligence”), 15 August 2019).
Finally, Daniel Stauffacher recalled that ICT4Peace, in its input to the Report of the UN High-Level Panel on Digital Cooperation, recommended that the UN “become an anchor of ethics in an AI world. Ethics around innovation, including in particular machine learning (ML) and AI-driven decision-making, is of increasing importance – what are the overarching considerations in pushing for AI if, without governance, it can be used for hate, hurt and harm? How can the UN emerge as a global ethics anchor in the AI space? What can the UN do to provide algorithmic oversight on ethical grounds, as well as to ensure that the rights and privacy of individuals aren’t violated because of big data investments?”