Regina Surber and Daniel Stauffacher published the following guest commentary in the Neue Zürcher Zeitung (NZZ) on 19 September 2018. Regina Surber is a Scientific Advisor to ICT4Peace and the Zurich Hub for Ethics and Technology (ZHET). Daniel Stauffacher is President of ICT4Peace and ZHET.

ICT4Peace demands an urgent public debate on artificial intelligence and regulatory engagement by the Swiss people and authorities.

The USA and China are currently investing heavily in the development of artificial intelligence (AI). As experiments with self-driving cars show, the current potential of AI is still limited. That can change quickly, however, and it is urgent that we prepare ourselves in time.

The technologies emerging from artificial intelligence (AI) research help banks digitize, help resolve judicial cases, coordinate drone “swarms”, underpin the smart network structures of Internet service providers and sit on our laps as robotic dogs. AI-supported technologies have thus quietly become the substructure of our society, justifying the hype around these two letters. However, many people talk about AI without really knowing what it is, or how great both its potential and its risks are for people and society. The risks urgently require appropriate government measures: AI has to be controlled by people and guided in the right direction.

On the one hand, AI research is about creating software and hardware with features of human intelligence, such as problem-solving and learning. On the other hand, AI informally refers to the capability of software or hardware that exhibits such intelligent features, for example software that drives a car autonomously. AI can be regarded both as a commercial resource and as a foundation for prosperity, and it carries considerable political weight.

High-risk transformation
Today’s AI is called “weak” AI because it can solve only a single task well, such as face recognition. “Strong” AI, in contrast, would demonstrate intelligence comparable to that of humans. “Artificial superintelligence” refers to AI whose intelligence would surpass that of humans. Some experts believe that strong AI could be produced within the next 75 years; others dismiss it as science fiction.

AI is a driver of high-risk social transformations: autonomous weapons can be scaled down to insect size and, in large numbers, become very cheap and intelligent weapons of mass destruction. Warfare would then no longer be a battle between soldiers but a confrontation between systems, both at the electromagnetic level and in cyberspace, where autonomous cyber weapons play a major role. Intelligent software can also be used to create artificial pathogens.

In addition, distorted data leads to distorted outputs from AI software, which has already resulted in racially biased judicial decisions in the USA. Social stigmas are thus reproduced by technologies whose decisions are not traceable in individual cases and are difficult to challenge. False, incomplete and mass-distributed information erodes the climate of truth within society, raising the question of whether we in fact have a right to truthful information.

Act now
These developments in AI call for the immediate involvement of politicians, academia and civil society. First, a sound public debate about the social impact of AI-supported technologies is imperative. Second, AI research must be ethically embedded, which is why university chairs in ethics and technology must be created; this is currently being discussed at ETH Zurich. The private sector, as today’s main investor in AI, must also be included.

Third, the structure of our national political institutions and the dialogue on AI must adapt to this paradigm shift as quickly as possible, before it becomes too late or technically too complex. As a first step, this function could be carried out by a senior delegate of the Swiss Federal Council for technology issues. Fourth, it must be clarified whether algorithms that violate citizens’ privacy or even their bodily integrity (in other words, autonomous weapon systems) in fact violate the fundamental rights enshrined in the constitution, and should therefore be discussed and possibly banned by parliament.

The French text of the NZZ article can be found here. The above English text can be found here.