For the third year, the ICT4Peace Foundation is taking part in discussions on Lethal Autonomous Weapons Systems (LAWS) under the Convention on Certain Conventional Weapons (CCW) at the UN in Geneva. The Group of Governmental Experts (GGE), representing 75 countries, is convening between 27 and 31 August 2018 for its second week this year to discuss emerging technologies in the area of LAWS. It is the fifth year of debates on this topic under the auspices of the UN. ICT4Peace was represented by Regina Surber, Advisor, ICT4Peace Foundation.
During this week, state representatives are debating four agenda items: (1) the potential military applications of emerging technologies in the field of LAWS, (2) what characterizes a LAWS, (3) whether and to what degree a human element should and could be secured in the use of lethal force, and (4) possible options to address the humanitarian and international security challenges posed by LAWS.
The general goal is to promote a common understanding of the topic in order to decide on appropriate steps towards potential regulation. While opinions remain divided on many points, including the question of whether LAWS already exist, state representatives generally agree that ‘some’ human element must be secured within the decision-making process on the use of lethal force. Given the complexity of the matter and the diverging perspectives, this week’s discussions will most likely not distill into concrete resolutions; policy options can rather be expected in next year’s debates. Currently, 26 states support a ban on LAWS, an idea elaborated and promoted by the Campaign to Stop Killer Robots since 2012.
LAWS are one manifestation of the security risks of artificial intelligence (AI) and other emerging technologies. The CCW’s in-depth analysis of the development and use of LAWS during armed conflict is therefore highly necessary. However, the CCW’s mandate does not cover autonomous intelligent agents that can act as weapons in cyberspace and that are also of potentially tremendous military interest. Although both LAWS and autonomous agents in cyberspace are characterized by AI-enabled technologies, international policy discussions on cyber security have so far taken place in a different forum at the UN in New York. This issue was also rightly raised in UNIDIR’s 2017 publication. There is perhaps a need to assess these issues in a more collaborative way in order to ensure that all security issues are adequately addressed.
Moreover, ICT4Peace has repeatedly highlighted that AI and other emerging technologies, such as biotechnology, molecular nanotechnology or 5G, can also pose systemic risks to global society even when not weaponized. Examples of these Peace-Time Threats include biased data producing, e.g., biased court decisions and thereby reproducing underlying social stigmas, or deceitful social bots that further blur the border between real and artificial knowledge in an age of mass and individual mis- and disinformation. Such developments urge us to ask crucial questions: Should humans create technological tools over which they can no longer have control? Do we, for reasons of self-preservation, have to assume human distinctiveness in relation to machines before science can prove otherwise? Do we need to establish the right to true information as a basic human right?
Work by the ICT4Peace Foundation on this topic, together with the Zurich Hub for Ethics and Technology (ZHET), includes, inter alia, the paper ‘Artificial Intelligence: Autonomous Technology (AT), Lethal Autonomous Weapons Systems (LAWS) and Peace Time Threats’ by Regina Surber, Advisor, ICT4Peace, as well as a panel discussion on the same topic at RightsCon 2018 in Toronto.
Download and read Regina Surber’s report on and analysis of the conference here.