During the 2019 Group of Governmental Experts Meetings on Lethal Autonomous Weapons Systems (LAWS) at the UN in Geneva, 25-29 March 2019, Regina Surber of ICT4Peace and the Zurich Hub for Ethics and Technology (ZHET) participated in a “Panel on AI and Civilian, Trans-disciplinary and International Perspectives”.
The topic of the panel was: The increasing capabilities of autonomous and intelligent systems raise questions about their consequences for society, international cooperation and stability. The side event’s goal was to highlight the importance of interdisciplinary, civilian participation in discussions surrounding LAWS and ethical AI development.

Other participants in the panel were:

  • Col. (GS) Bruno Paulus (opening remarks): Military Advisor, Permanent Representation of Germany to the CD
  • Catherine Chen (moderator): ConsciousCoders; Student, Computer Science, LMU Munich, MPI Tuebingen, UC Berkeley
  • Gunnar König: ConsciousCoders; Researcher in Data Science, LMU Munich
  • Dominique Paul: Project A Ventures; Student, Data Scientist, University of St. Gallen
  • Adrian Krüger: Philosophy, Computer Science, Centre Digitalisation Bavaria

The recording of Regina’s interventions can be found here.

ICT4Peace’s Regina Surber’s main points were:

  1. ICT4Peace’s focus:

1)     ICT4Peace focuses not only on the peace and security implications of AI and other emerging technologies during war, but also outside of war scenarios, i.e. on peace-time threats;

2)     As the challenges arising from emerging technologies are not only international, but also inherently local and citizen-based, ICT4Peace advocates policies that bring the individual human being back to the center of security considerations.

  2. Peace-time threats (PTTs):

PTTs are effects of emerging technologies on the individual and society that are subtler than LAWS, potentially permanent, and highly transformative. They raise questions about human self-understanding, the role and make-up of social regulation, and the way society perceives the individual human being.

Examples:

  • Fake news: Do we need a human right to true information?
  • Data geopolitics: The human being seems to fade into irrelevance behind the economic and political interest in their data. The resulting risk of exploiting vulnerable communities could increase global inequality (contrary to SDG 10).
  • Life-enhancement technologies: The IoT will move from augmenting the human to invading the body. This raises new concerns about ‘hacking’ and may require new methods of securing physical integrity.
  • Highly automated/autonomous profiling: When we categorize citizens a priori based on their potential for criminal conduct, do we witness a reversal of the presumption of innocence?

We need to promote digital human security, during war and during peace-time, across many levels of society and government.

  3. Possible avenues for future policy development and strategies:

(1)   International level: There is a need to understand how emerging technologies converge into new weapons systems and weapons enhancements, a process that also interconnects ‘classical’ weapons categories. Separately analyzing and regulating the currently pre-set weapons categories (nuclear weapons, cyber-weapons, autonomous weapons, biological weapons) may no longer prove effective. It could be advisable to create permanent international scientific expert groups for different weapons areas or technological sectors that continuously inform diplomatic debates and regularly exchange views on how their technological fields converge;

(2)   National level: Governments need to understand that the ‘digital world’ is an infrastructure like any other, if not the most important one. Currently, major tech companies are starting to formulate ethical principles (privacy, data security, transparency), principles that safeguard basic rights often guaranteed by national constitutions. Consequently, it is the tech sector, and not governments, that is deciding on the limits of, and potential violations of, those basic rights. It would be advisable to create permanent polity-technology interfaces, e.g. through state departments for technology, that build the understanding governments need in order to meet new social and political needs.

(3)   Bottom-up approach: As emerging technologies, like many other technologies, are dual-use, criminalizing them would also limit their tremendous potential for good. Hence, bans or prohibitions are not a practicable long-term strategy. As long as a person (or a state) feels insecure, or is inclined to harm another, dual-use tools will be used to that end. Consequently, we need to strive to alter the human (or state) wish to harm. With regard to emerging technological tools, this requires raising awareness of the new technological environment we live in. This could be partly achieved through responsible technological research, e.g. via fixed ethical guidelines for different technological fields, or via sensitizing young researchers to the ethical questions and social implications of their own research. Education must offer a toolkit for engaging with those ethical questions, so that graduates have the competence to answer them in their later day-to-day work. Arguably, sensitization to ethics, social questions and individual responsibility should begin even earlier, in primary school. As technological tools start to shape our environment without our input, we must build early-stage reflection on the importance of individual human control.

Regina’s report reflecting the points made in this panel can be found here.

The recording of the panel can be found here.