As part of the UN Secretary-General’s Digital Cooperation Recommendations Follow-Up Process, to which ICT4Peace has contributed the following Reflections and Recommendations, ICT4Peace has been invited to participate as one of the “Key Constituents” in Recommendation Roundtable 3C on Artificial Intelligence. (ICT4Peace is also contributing to Recommendation Roundtables 4, Global Commitment on Trust and Security, and 3A/B, Digital Human Rights.)

The roundtables will provide inputs and advice on the status and feasibility of advancing the various recommendations of the Secretary-General’s report, so as to inform the development of a Roadmap on Digital Cooperation which the UN Secretary-General will present to Member States in spring 2020.

ICT4Peace and ZHET have been working on the issue of principled and inclusive AI Governance for several years and recently participated in a workshop on the same topic at the WEF Global Shapers Meeting in Zurich.

In the discussion around creating a future principled and inclusive AI Governance, Daniel Stauffacher invited participants to take a serious look at the multitude of already existing Principles of and Guidelines for Responsible AI established by Companies, Academia and Civil Society, as well as by Governments and Intergovernmental Organisations, and to take them as a starting point for a conversation on building a global, principled and inclusive AI Governance System.

He referred to three very useful analyses of over 80 established principles of responsible AI:

Daniel Stauffacher recommended looking at examples where the International Community has applied “Smart Regulations” (see also page 2 of the report by the Swiss Government Expert Group on AI, «Internationale Gremien und künstliche Intelligenz» (“International Bodies and Artificial Intelligence”), 15 August 2019).

Finally, Daniel Stauffacher recalled that ICT4Peace, in its input to the Report of the UN High-Level Panel on Digital Cooperation, recommended that the UN “become an anchor of ethics in an AI world. Ethics around innovation, including in particular machine learning (ML) and AI-driven decision-making, are of increasing importance, and the following questions should be asked: What are the overarching considerations in pushing for AI if, without governance, it can be used for hate, hurt and harm? How can the UN emerge as a global ethics anchor in the AI space? What can the UN do to provide algorithmic oversight on ethical grounds, as well as to ensure that the rights and privacy of individuals are not violated because of big data investments?”

Other key recommendations on AI and AT in the ICT4Peace contribution to the High-Level Panel:

  1. The creation of a UN-level body for technology and AI, tasked with ensuring responsible technological research and discussing the peace and security implications of emerging technologies, inter alia AI and AT, biotechnology, 5G and molecular nanotechnology. This body would also set principles for responsible research in the above-mentioned scientific fields and coordinate their implementation.
  2. The inclusion of autonomous cyber weapons, and of autonomous weapons used in law enforcement, in international discussions. The former could be integrated into the GGE on LAWS, and the latter could be taken up by the Human Rights Council.
  3. Look beyond the issues of AI and Autonomous Weapons Systems (LAWS) and also consider the short-, medium- and long-term “Peace Time Threats” to society.
  4. Foster a public discussion of the human-machine analogy and further the dialogue between tech experts, civil society and government.
  5. Launch a debate on property rights for the source code of AI and AT software.
  6. Encourage the increased engagement of civil society, including the private sector and academia, on the questions of human control of, and responsibility for, technological outcomes.