“We are engaged in a world war of stories – a war between incompatible versions of reality – and we need to learn how to fight it.” Salman Rushdie, Knife
These words by celebrated author Salman Rushdie are from his address to an international gathering of writers at the United Nations in May 2023, delivered after a horrific assassination attempt on him. His book Knife is a gripping account of the attack and the toll it took. Rushdie never names the attacker, referring to him only as ‘A’. Examining how ‘A’ was radicalised, Rushdie notes later in the book that the knife used in the attack was akin to the technologies splintering reality today. He writes that ‘A’ is “… wholly a product of new technologies of our information age, for which ‘disinformation age’ might be a more accurate name. The groupthink manufacturing giants, YouTube, Facebook, Twitter, and violent video games were his teachers”.
It is unclear exactly what influence social media and video games had on Rushdie’s attacker, despite claims that he was angry with the author after reading a couple of pages of The Satanic Verses and seeing some clips of Rushdie on YouTube. But this admission is revealing.
Given the worldview and the murderous act these videos helped shape, we are all challenged to realise that critical thinking, the value of shared facts and, in turn, a subscription to shared values are uncertain and diminishing in today’s world. Narratives – the stories we tell others, and ourselves – are central to humanitarianism. Rushdie correctly believes that we now confront a ‘war of stories’. Academics call this ‘epistemic decay’, because what these competing stories, produced at industrial scale and promoted by influential figures, ultimately do is create and sustain disparate micro-realities – ways of seeing the world often at complete odds with grounded, evidence-based, factual narratives.
We are already in a world where reflexive engagement with what’s presented through black-box algorithms shapes the attitudes, perceptions, beliefs, and practices of billions of people. It is a rapid worsening of what Linda Stone identified as far back as 2008 and called ‘screen apnea’ – a process through which online content acts as a stressor, impacting physical and mental health over the long term. Today’s social media continually informs us about many things but educates us about very little. Our minds can’t cope. Anxiety gives way to fatigue, lowering cognitive defences and allowing disinformation to take root.
If the integrity of information and the veracity of the media societies consume are continually compromised, how can humanitarians establish urgent needs, uphold the rules of war and international humanitarian law, and mobilise support? If truth is conflated with trust, and those entrusted with sensemaking are also those using disinformation to deflect responsibility, how can humanitarians deal with the subscription to fictional realities that millions completely believe in, and act upon? What happens when these stories target aid workers?
Compelling stories are humanitarianism’s cornerstone. We are moved to act based on what we consume – and this is true of policymaking as much as it is of donations and volunteerism. But we now live in a state of permanent polycrisis, where it is impossible to keep track of all humanitarian emergencies even as they grow in complexity and geographic dispersion. To share the story of Gaza is to rob attention from the Sahel, Syria or South Sudan. Furthermore, to share the story of any one of these places is to compete with, and most often lose to, a tsunami of content on wellness, entertainment, sports, and the hot takes of influencers on every imaginable topic. Algorithms target this content to users based on the maximisation of profit – an equation that rarely factors in, and is often inimical to, humanitarian ideals and needs. Virality captivates. Veracity is no longer key to what’s shared the most.
Generative AI will accelerate this epistemic decay. Effective crisis communication is key to humanitarianism. The production of such content is in turn based on the assessment of material around a specific context, community, country, emergency or disaster. Determining what is accurate and what is misleading, partially or wholly untrue, is not just becoming harder. Generative AI will make it impossible to diligently carry out what analysts in the humanitarian sector do today to maintain information integrity. I call this a cognitive-informational battlefield. Conducted online and through social media, this is a war to instrumentalise the interconnectedness of cognitive processes, information ecosystems, and emotional responses so as to influence offline responses. Those who control this continuum have the power to shift how millions see the world, and what they go on to do (or not). Humanitarians are in the middle of this war, whether they realise it or not.
Distorted realities lead to divergent action. Humanitarians will find less and less traction around the causes they champion and the desperate conditions they highlight. Stories will also target humanitarians individually and institutionally. This narrative targeting will have offline, kinetic consequences – with Rushdie’s attacker a grim reminder of how online content can shape strongly held beliefs and motivate violent acts. If nothing can be believed, everything will be rejected. A context where everything is disbelieved breeds impunity, creating conditions ripe for the persistence of war crimes and crimes against humanity.
Almost a decade ago, at re:publica 2015 in Berlin, I noted that “There always needs to be an ethical rights-based perspective to the technologies we champion, otherwise the outcomes for the best of intentions may be very far removed from what we desire to see…”. This continues to hold true. Years before contemporary threats to information integrity, and the rise of generative AI, I noted at the same conference that the central challenge for peacekeeping (and peacebuilding too) is “…how to deal with the multiplicity of voices on the ground because the democratisation of technology has happened to such a degree that potentially everybody… has a potential voice”. My cautious note then is now more fully realised through media cacophonies at scale that distract from, deny, and decry ground truths published on credible humanitarian channels.
Nothing outlined in this note is a frontier possibility. These are all ferocious, front-door threats, growing apace. Bien-pensant attitudes in the sector will be increasingly at odds with the realities of operating in conditions of significant epistemic decay. Informed generalist approaches, lateral thinking, and agile solutions grounded in the study of information ecologies will need to be quickly established and iterated. The best solutions will lie, ironically, in the strategic, measured, meaningful adoption and adaptation of the very technologies that undergird what Rushdie called our “disinformation age”.
To understand their power and deconstruct their misuse is to help craft broad, interdisciplinary approaches to information integrity. These frameworks will need to explicitly include psychology, neuroscience, and emotional intelligence. Resilience and remediation are possible, but will only come about through informed action and strategic leadership. I hope the IFRC, and the sector at large, realise this. Though exacerbated by technology, these are, fundamentally, challenges about vulnerable people. Stories continue to matter. And we need better ones to protect those who are most at risk and vulnerable.
Dr Sanjana Hattotuwa, Special Advisor, ICT4Peace
Sri Lanka, 23.12.2024
This article was written for the International Federation of Red Cross and Red Crescent Societies (IFRC)