Regina Surber speaking at Milano Human Rights Film Festival

Together with Professor Maria Chiara Carrozza, Regina Surber of ICT4Peace and the Zurich Hub for Ethics and Technology (ZHET) participated in a panel on “Technology off Limits” at the Human Rights Film Festival in Milan on 4 May 2019.

The panel discussion was recorded (the sound starts after 5 minutes):

The main points of Regina’s contributions to the discussion can be summarised as follows:

  • Autonomy for complex technological processes is not a category, but rather a spectrum. Hence, there is no clear definitional boundary between automation and autonomy.
  • As the technological processes behind the learning capabilities of autonomous systems are enormously complex, an autonomous system’s output is not only unpredictable, owing to its very capacity for human-independent action, but also difficult to explain in retrospect. It follows that when a Lethal Autonomous Weapons System (LAWS) makes a mistake, e.g. kills a civilian in an international armed conflict, an act criminalized by IHL, we cannot explain or understand why. Hence, it is hard to determine who is responsible or legally liable for the harm a LAWS creates (the military commander, the software engineer…).
  • Socially, LAWS create an interesting new phenomenon: we are outsourcing control over, and responsibility for, one of the most powerful and most destructive human capabilities to software or machines, thereby limiting the space for human responsibility in the world.
  • It is important not to mistake LAWS for an anthropomorphic picture of a ‘killer robot’. Arguably, autonomous intelligent cyber agents will be the most decisive use of autonomous technology in future warfare. Moreover, the risk of collaborating mini-LAWS (swarms) and their potential to become scalable weapons of mass destruction should not be underestimated. What is more, we have to keep in mind that future wars will most probably rely not on human and physical machine power, but on systems confrontation and destruction.
  • Artificial Intelligence-enabled technologies do not only affect people in situations of war; they have effects on anyone leaving even the smallest digital trace in our world. For example, AI-enabled technologies make it easier to manipulate the public through bots, trolls, deep fakes, or psychometric manipulation. Further, as AI-enabled technologies need data to function, and as AI can be leveraged for political and economic interests, the individual human being might be fading into irrelevance behind the data that he or she – often unknowingly – produces. This may raise new questions and concerns about the role and value of the individual in and for political communities, as well as questions about data ownership and privacy.
  • In a world where technological tools increasingly shape our daily environment, it is important to start educating children and youth early about the importance of human-based creation and the responsibility of the individual person for what he or she creates. Children must learn that the way our society functions is based on ideas that we humans have come up with at some point, and that those ideas can be changed – by humans.

Please see Regina Surber’s 2018 publication: “AI, Lethal Autonomous Systems and Peace Time Threats”

And here you can find her most recent paper (2019): “Managing the risks and rewards of emerging and converging technologies: International cooperation, national policy and the role of the individual”:

Regina’s op-ed on the outsourcing of fundamental rights by governments to the private sector was published in the Neue Zürcher Zeitung (NZZ) on 24 April 2019.

Regina Surber and Daniel Stauffacher published the following guest commentary in the Neue Zürcher Zeitung (NZZ) on 19 September 2018.

Regina Surber gave a lecture on Autonomous Intelligent Software Agents, LAWS and Peace-Time Threats on 16 January 2017 at the SwissCognitive Tank hosted by Ringier, Zurich. The full lecture can be viewed on YouTube here.