On 13 November 2017 at ETH Zurich, Regina Surber, Advisor to the ICT4Peace Foundation, presented current research by ICT4Peace and the Zurich Hub for Ethics and Technology (ZHET) on Lethal Autonomous Weapons Systems (LAWS), a special application of Artificial Intelligence whose benefit to humanity is highly debated. The lecture focused on the main challenges of the LAWS discussion at the United Nations, as well as on two aspects that discussion has so far missed: autonomous cyber weapons and the use of autonomous weapons during peacetime.
In 2016, ICT4Peace, together with partners at ETH Zurich and Zurich-based foundations, launched the Zurich Hub for Ethics and Technology (ZHET), which provides a space for knowledge-generating discussions that bring together experts in machine learning and robotics, ethicists, experts in international humanitarian and human rights law, and representatives of the public and private sectors.
A summary of Regina Surber's lecture can be found in this paper and heard in the audio file below. You can also listen to the podcast directly here.
In her lecture at ETH, Regina Surber explained, inter alia:
Research on Artificial Intelligence (AI) – the simulation of human intelligence processes through computer software – has enabled humanity to create software and software systems that exhibit a level of intelligence allowing them to perform tasks, and to learn new tasks, without human guidance, supervision, or intervention. Such so-called increasingly autonomous intelligent agents can be purely software-based, or integrated into a physical system – a robot. Besides potentially promising applications of increasingly autonomous intelligent systems (e.g. self-driving cars, or ISABEL in medical diagnostics), such software agents can be (and arguably already are) integrated into robots that can identify, select, track, and attack a (military) target (e.g. combatants and infrastructure) without a human operator. Often called Lethal Autonomous Weapons Systems (LAWS), these systems were taken up as an issue by the international arms control community in the framework of the United Nations Convention on Certain Conventional Weapons (CCW) in 2014.
After a series of annual informal discussions, a Group of Governmental Experts (GGE) debated the subject for the first time during a five-day gathering in the CCW framework in Geneva in November 2017. The GGE's main points of discussion were the potential legality of such weapons systems under International Humanitarian Law (IHL), questions of accountability and responsibility for the use of LAWS during armed conflict, potential (working) definitions of LAWS, and the need for new norms, since LAWS challenge both existing law (IHL) and normative principles.
However, to date, States have agreed neither on a definition of LAWS nor on whether increasingly autonomous weapons systems or their precursor technologies already exist. Moreover, national and international policy debates on LAWS have lacked precise terminology. Hence, there is a strong need for better technical understanding in the political debate. This becomes all the more imperative given the rapid pace at which autonomy-enhancing technologies advance.
Furthermore, the CCW's discussion on LAWS has focused on conventional (physical, robotic) systems that interact in a three-dimensional reality with other machines or humans. However, autonomous software agents that operate entirely in cyberspace are of great military interest as well. The use of autonomy for intangible cyber operations, defensive or offensive, could be decisive, and far more economical, in current and future warfare.
In addition, the CCW is a framework underpinned by IHL, which narrows the debate's focus to weapons and their use during armed conflict. However, increasingly autonomous weapons systems can be, and are, used during peacetime in law enforcement operations (e.g. crowd control, hostage situations), where International Human Rights Law (IHRL) represents the legal benchmark. Compared to IHL, IHRL is far more restrictive on the use of force. One may assume that once the advantages of increasingly autonomous systems have been proven in the military context, they might be considered for use in domestic law enforcement, even though IHRL, which regulates the latter, would prohibit their use. The CCW's and GGE's approach could therefore be criticized as not legally comprehensive enough, owing to its limited focus on the use of weapons during times of war. However, the risk of using autonomous intelligent agents during peacetime is not limited to the lack of a legal review based on IHRL. Other points for discussion should be:
Mass disinformation generated by intelligent technology: For example, both fake news (deliberate misinformation spread via traditional or online media with the intent to mislead readers) and internet trolling (the posting of erroneous, extraneous or off-topic messages in order to manipulate public opinion) could be generated by autonomous intelligent agents, leading to mass disinformation campaigns driven entirely by such agents, as the sketch below illustrates.
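A minimal, purely illustrative Python sketch of why this concern is above all one of scale: a single loop can emit endless message variants across many fake accounts. The `post` function, the account names and the placeholder claims are invented for illustration; no real social-media API is used.

```python
# Illustrative-only sketch: automated trolling scales because message
# generation and posting become a loop, not human labour. The post()
# stand-in and the placeholder claims are hypothetical, not a real API.
import itertools

TEMPLATES = ["Breaking: {claim} - share before it gets deleted!",
             "Why is nobody in the media talking about {claim}?"]
CLAIMS = ["<misleading claim A>", "<misleading claim B>"]

def post(account: str, message: str) -> None:
    """Stand-in for a network call to any social-media platform."""
    print(f"[{account}] {message}")

# Two templates x two claims x many accounts already yields a flood;
# an autonomous agent could also generate the templates themselves.
for i, (template, claim) in enumerate(itertools.product(TEMPLATES, CLAIMS)):
    post(f"bot_{i:04d}", template.format(claim=claim))
```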
Autonomously generated profiles: Computerized pattern and correlation recognition used to identify and represent people, for example during criminal investigations, could be performed by autonomous intelligent agents. The detection and capture of potential criminals (pre-emptive profiling) and actual criminals could be outsourced to increasingly autonomous machine calculation based on Big Data, beyond meaningful human control. Already today, deep learning – a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain – enables ever more accurate facial recognition, a computer application capable of identifying and verifying a person from a digital image or video. Through increasingly autonomous criminal profiling, the border between a criminal and a legally innocent person would be drawn exclusively by an algorithm, and would be vulnerable to incorrect data caused by poor sensor technology, incompleteness, noise and the like (see the sketch below). Furthermore, categorizing potential criminals on the basis of computational inferences effectively turns the presumption of innocence upside down, assuming a general potential for criminal conduct.
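A minimal sketch of how such an algorithmic border is drawn in practice: face-verification systems typically compare embedding vectors produced by a deep network against a numeric threshold. The `matches_watchlist` helper, the random stand-in embeddings and the 0.6 cut-off below are illustrative assumptions, not any deployed system.

```python
# Minimal sketch (not a deployed system): a face-verification decision
# reduces to comparing one similarity number against one threshold.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def matches_watchlist(probe: np.ndarray,
                      watchlist: list[np.ndarray],
                      threshold: float = 0.6) -> bool:
    """Flags a person if any watchlist embedding is 'close enough'.

    Everything hinges on `threshold`: shift it slightly and the set of
    people labelled suspects changes, with no human judgment involved.
    Noisy sensors or incomplete data shift the similarities themselves.
    """
    return any(cosine_similarity(probe, ref) >= threshold for ref in watchlist)

# Toy vectors standing in for embeddings produced by a deep network.
rng = np.random.default_rng(0)
watchlist = [rng.standard_normal(128) for _ in range(3)]
probe = watchlist[0] + 0.1 * rng.standard_normal(128)  # noisy re-capture
print(matches_watchlist(probe, watchlist))  # True here, decided by one number
```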
Autonomous technology in light of emerging resource scarcity on our planet: Current global social, economic (including financial and monetary) and environmental trends constitute a high risk to humanity and make our present global human coexistence potentially unsustainable. Some experts therefore ask: in an increasingly unsustainable society in critical times, what kind of citizens should be protected, and whose lives could be sacrificed? Should a Citizen Score Card, representing the value of an individual citizen from a governmental perspective, become the reference point informing such decisions? The hypothetical sketch below shows how crude such a reference point would be.
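A purely hypothetical sketch of what such a score could look like computationally: a weighted sum that collapses a person's recorded attributes into a single number, which then makes rationing decisions look like an objective sort. All attribute names, weights and values here are invented for illustration; no real scoring system is described.

```python
# Purely hypothetical sketch of a "Citizen Score Card": a weighted sum
# collapsing a person into one scalar. All names and weights are invented.
CITIZEN_WEIGHTS = {"tax_compliance": 0.4, "health_risk": -0.3, "productivity": 0.3}

def citizen_score(attributes: dict[str, float]) -> float:
    """Reduces a citizen's recorded attributes (each 0..1) to one number."""
    return sum(w * attributes.get(k, 0.0) for k, w in CITIZEN_WEIGHTS.items())

# Once such a scalar exists, deciding whom to protect becomes a sort:
citizens = {"A": {"tax_compliance": 0.9, "health_risk": 0.2, "productivity": 0.8},
            "B": {"tax_compliance": 0.5, "health_risk": 0.7, "productivity": 0.4}}
ranked = sorted(citizens, key=lambda c: citizen_score(citizens[c]), reverse=True)
print(ranked)  # ['A', 'B'] - the ordering merely looks objective
```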
Read more here.