
The ICT for Peace Foundation and the Zurich Hub for Ethics and Technology hosted a colloquium on 9 June 2018 with Stuart Russell, Professor of Computer Science and Smith-Zadeh Professor in Engineering, University of California, Berkeley, USA.

Professor Stuart Russell, one of the world’s leading experts and thinkers on Artificial Intelligence and its broader implications, shared his views on Lethal Autonomous Weapons Systems (LAWS), defined by the UN as weapons with the ability to locate, select and eliminate human targets without human intervention. He accused the AI community overall of being asleep at the wheel and of not taking its humanistic responsibilities seriously enough. The autonomy we are now giving to weapons systems is essentially the same as that given to a chess program: we tell the program to win, but not how to win. The program continuously makes decisions based on moves and events during the game, and these decisions are contingent on its own perceptions. From chess to warfare: this is how we have to envisage a future with LAWS, an autonomous system taking its own decisions about selecting and eliminating, in this case, human targets.

Professor Russell highlighted the problem posed by LAWS: their scalability and their characteristics as potential weapons of mass destruction (WMDs). “Because they do not require individual human supervision, autonomous weapons are potentially scalable weapons of mass destruction (WMDs); essentially unlimited numbers can be launched by a small number of people.” Russell estimates that approximately “one million lethal weapons can be carried in a single container truck or cargo aircraft, perhaps with only 2 or 3 human operators rather than 2 or 3 million. Such weapons would be able to hunt for and eliminate humans in towns and cities, even inside buildings. They would be cheap, effective, unattributable, and easily proliferated once the major powers initiate mass production and the weapons become available on the international arms market.” Attacks can easily be scaled upwards, without the mess and destruction of nuclear weapons or other WMDs, making it easier to justify their use (see also Stuart Russell, “The new weapons of mass destruction?,” Security Times, February 2018, 40-41).

In Russell’s opinion, LAWS cannot be used in ways that comply with existing International Humanitarian Law. They cannot be held accountable, as no significant personnel are required to operate them. It is also currently impossible to defend against LAWS; anti-swarm defence, as far as is publicly known, has not had any real success to date. The current discussions at the UN in the context of the CCW Group of Governmental Experts (GGE) have severe limitations in assessing the full spectrum of the implications of LAWS, as well as their potential malicious links with other new technologies.

With regard to the overall implications of AI, Professor Russell is of the view that the benefits to humanity can be enormous. However, proper partnerships and safeguards need to be put in place.

“If we achieve provably safe, super-intelligent AI, the potential benefits to humanity will be unlimited. Viewing this as a race for national supremacy is likely to lead to unsafe AI systems and misuse of AI that could be catastrophic. We need cooperation, not competition. We can all have a fair share of an essentially infinite pie, or some nations could have a dominant share of a non-existent pie.”

As a final thought, Professor Russell argued for a right to “true speech”, not just “free speech”. We live in an information economy in which true speech should be a fundamental human right. A first step in this direction would be an international agreement of some kind requiring machines online to identify themselves as such. People would then be better able to assess the validity of information and interactions online.

The ICT4Peace paper on AI, Lethal Autonomous Weapons and Peace Time Threats by Regina Surber, Senior Advisor, ICT4Peace, served as a background paper for this workshop, and can be found here. The recording of her lecture on the same topic can be found here. The Workshop was designed and prepared by Barbara Weekes, Senior Advisor, ICT4Peace Foundation.

More information on the work of ICT4Peace and ZHET on Artificial Intelligence, LAWS and Peace Time Threats can be found here.
 
In particular see the video recording at Rightscon in Toronto of the ICT4Peace and ZHET panel on the same topic.