ICT4Peace Foundation’s Special Advisor Sanjana Hattotuwa was invited by New Zealand’s Department of Prime Minister and Cabinet to write this policy brief on the occasion of He Whenua Taurikura, New Zealand’s first annual hui (meaning a large gathering in Maori) on countering terrorism and violent extremism. The hui was held from 14-16 June 2021 in Christchurch. He Whenua Taurikura translates to ‘a country at peace’.


###

Aotearoa, New Zealand will face increasingly sophisticated campaigns to seed and spread anxiety, fear and anger, both online and offline. These campaigns will emerge from, or be amplified by, political entrepreneurs within the country and outside it. Inoculation against this democratic erosion – such as it exists at present – risks diminishing returns over time in the face of iterative, intentional and unrelenting “everyday campaigns” across a range of issues, including but not limited to partisan politics, proposed and existing laws, bi-cultural relations, health, elections, infrastructure and jobs. As the scope, scale and speed of disbelief grow, trust in democratic institutions, including electoral outcomes, will decrease. No electorate is immune, and what is a possible future scenario for Aotearoa, New Zealand is already well entrenched in other countries, which now serve as templates for engineering democratic deficit.

The long game of anti-democratic architects is to weaponise scepticism. Like a digital Novichok, the manner in which society sees itself, negotiates difference, communicates, deals with the past and envisions the future – and an individual’s or community’s place in it, or ownership of it – can be corrupted through online content and social media platforms. Unlike a nerve agent, however, which has an immediate and visible physiological impact, influence operations conducted over time allow the tone, timbre and thrust of divisive frames to become the foundations of political and social discourse. Sociologist Diane Vaughan called this “the normalisation of deviance” in relation to what caused the Challenger Space Shuttle disaster in 1986. Over time, individuals can come to accept a problem as a feature instead of an aberration. The bad actors become those amongst us – our extended family, friends and neighbours – who come to believe in things we can no longer identify with or subscribe to. It is, ultimately, the weaponisation of PM Jacinda Ardern’s “They Are Us”, through the strategic, systematic and sustained dismantling of democratic ideals, institutions and processes. Without a consistent, clear or common enemy, existing strategies to safeguard Aotearoa, New Zealand from democratic decay risk failure, and at a pace quicker than many in government, media and civil society expect or plan for.

The perspectives in this policy brief are informed by two inter-related drivers – one, the lived experience of negotiating violent conflict in Sri Lanka since 2002, including responding to online manifestations of offline violence for over a decade and, two, doctoral research looking specifically at the role, reach and relevance of Facebook, Twitter and social media in simultaneously fuelling and quelling socio-political violence. This research included how online content is inextricably entwined with and informed by offline developments, including but not limited to communal riots, significant political unrest, high-casualty terrorism and consequential electoral moments.

The point I seek to stress is a simple one. Coming from, and calling home, a country that is, in every imaginable way and every day, profoundly more violent than Aotearoa, New Zealand in most touch points for citizens, and especially those from minority communities, I viscerally appreciate the symbolic invocations and implications of statements by political entrepreneurs or their proxies. Sometimes called dog-whistling, the reach and resonance of references intended for specific audiences is a code that, if and when cracked, provides vital insights into the intent, motivation and strategy of despotic innovation. However, echoing the Polish-American scientist and philosopher Alfred Korzybski’s remark that “the map is not the territory”, disinformation’s social and political impact is more complicated than just the study or presentation of big data.

Data can help show us what’s going on but, not unlike Rorschach blots, the resulting visualisations only make sense when read in specific contexts. Terms like ‘online extremism’ and ‘the digital world’ tend to project violence as predominantly determined by digital content. The telos of this gaze – which has served democracies well but is no longer fit for purpose – is to see legislative instruments, laws, the codification of regulations and punitive measures as adequate, desirable or definitive responses to disinformation’s Hydra-headed entrenchment, expanding at pace. Informed by lived experience, activism and research, I study online data in situ, seeing digital interactions as inextricably entwined with local cultures, histories, communities, media ecologies, political cultures, anxieties and aspirations.

Consequently, I argue that disinformation goes to the heart of who we are, what we believe in, love to do, and why. It is an existential inquiry and exercise, not (just) a digital study or phenomenon. By its very nature, disinformation is socio-technological, being offline in nature as much as it is increasingly online in nurture. It follows that disinformation requires systems or lateral thinking to grasp, beyond technocratic or bureaucratic frames. While appreciating their role, I argue that we must be sceptical of all legal or legislative responses to what are essentially, and will remain, socio-political problems present in online and offline forms, simultaneously.

Why is an inter-disciplinary, broad-spectrum approach vital to safeguard democracy? Even as legislators seem convinced they have a handle on definitions of fake news and hate speech, researchers grapple with the morphology of content inciting hate and violence. Hate, harm and violence are, in fact, often very hard to assess. Digital content is iterative and requires contextual knowledge to understand its implications. Cross-pollination is the norm, where engagement on one platform leads to variations of the content and commentary on another – an inosculation that sees digital hate grow in tandem with offline developments. With each opportunistic migration from one app, platform or vector to another, the frame, function and form of content change. The speed, scale and scope of this migration and morphing have long overtaken the imagination of policy makers, most regulators and even social media companies, resulting in an everyday tsunami of content that defies meaningful oversight or rapid response. Furthermore, ambiguity is now a strategic choice, where content that resides right at the borderline of what social media platforms prohibit serves as a sufficient signal for followers to amplify specific messages, including targeted hate. Political and media entrepreneurs in the Global South are now joined by those in Europe, the US and the Global North in instrumentalising social media platforms as bully pulpits or manic megaphones.

This pulsating pathology of disinformation – which is far more complex than this snapshot – already resides within and outside Aotearoa, New Zealand, and there is no erasing or eradicating it. Disinformation, in its most insidious, liminal and porous forms, is contemptuous of sovereignty and borders. Every single internet connection at home or work, on phone or PC, is a vector for harm, hate and violence. From multiplayer games, self-hosted group chats, private and decentralised cloud services, specific game console communities, augmented and virtual reality domains, the appropriation of emojis or memes to communicate hate, and encrypted messaging to private groups and the dark web, disinformation actors and misinformation architects already have a plethora of platforms to infiltrate and from which to instigate socio-political unrest.

Official policies, laws and regulatory frameworks will never address the heterogeneous assemblage of actors and platforms intent on undermining democracy, for two reasons. One, these actors have time on their side, and work towards intended outcomes years if not decades into the future, using a combination of electoral, political, social and cultural means, over offline and online vectors. Two, the essential naïveté of social media companies, which until recently allowed politicians to get away with inciting hate and violence, results in, amongst other things, outdated and outmoded oversight, placing at risk communities that are often already marginalised and have violence directed against them.

Laws and legislation are important, but very unlikely to address the root causes and core motivations of increasingly concerted disinformation. What more can and should be done?

In line with the principles laid out in the Global Network Initiative’s (GNI) framework study on addressing digital harms and protecting human rights, and with first principles enshrined in the Universal Declaration of Human Rights and the UN Guiding Principles on Business and Human Rights, leading social media companies are embracing a rights-based approach to governance after years of a more laissez-faire posture. There is a timely, rich and vital discussion that flows from this Silicon Valley pivot for domestic regulators and policy makers. For example, issues like responsibility, responsiveness, proportionality and transparency have found renewed focus in regulatory conversations after the violence at the US Capitol on 6 January 2021. Aotearoa, New Zealand, however, can and must delve deeper into disinformation’s drivers. How can enclaves of resistance and immunity be crafted?

A good start would be to stop talking about online extremism or social media, and instead study the generation of violence and hate through broader ecological perspectives. Not unlike in forestry or agriculture, the factors influencing growth, pollination, yield, health and sustainability are invariably connected to context. What nourishes visible out-growth lies beneath what is often studied, or pared. The roots of discontent, often pre-dating online platforms by decades, are significant in the study of online content. Reciprocally, the vector, volume and velocity of digital content influence offline relationships and developments, especially around emotive issues, contested histories and marginalised communities. Data visualisation, analytics, cognitive neuroscience and emergent research on cognitive security are only as useful as securing the expertise of those who can locate digital data in corporeal lives, recognising that what’s encoded online is the algorithmic representation of complex, fluid, embodied realities. Deconstructing the digital requires the researcher to be rooted in local cultures, which in Aotearoa, New Zealand means the radical reintegration of Maori perspectives in regulatory and policy discourses around disinformation.

This perspective, congruent with my own experience and research, including representations of violence and prosocial responses on social media in Sri Lanka and Aotearoa, New Zealand, turns on its head current approaches to countering extremism, which are largely based on enhanced or increased regulation and legal and legislative means. Recalling the Christchurch Commission Report’s emphasis on social cohesion, we must imagine a more grounded, ecological and inter-disciplinary approach to research and response. Indigenising the inoculation against disinformation gains from harvesting the rich imagination, experience and insights of the Maori in Aotearoa, New Zealand. Through how they (who are us) understand identity, community, society, discourse, remediation and reparation, we can co-construct new socio-political structures that, through equitable and democratic offline representation, strengthen online responses to injury, incitement or invective. This radical dialogue, based on, amongst other things, active listening, rights, reciprocity and social justice, can be a constant, grounded inquiry that, combined with other disciplines, sets up a comprehensive response to disinformation’s well-springs.

To end a policy brief on the value of offline relationships is perhaps counter-intuitive, but a necessary course-correction to technocratic approaches to a socio-political issue. Doctoral research, comparing social media in Aotearoa, New Zealand and Sri Lanka, supports the view that offline relationships, including political culture and the quality of journalism, significantly (and, at times, predominantly) influence online discourse. Our digital selves imagine a world as it can or should be, while our embodied selves negotiate the world as it is. This friction is essentially violent, and will always be so. Embracing this, enlightened socio-political and technological responses need to imagine stronger, more representative, endogenous and indigenous frameworks against threats to democracy in online and offline fora.

Why? Because He waka eke noa (we are all in this together).

Sanjana Hattotuwa
Special Advisor, ICT4Peace Foundation
PhD candidate, National Centre for Peace and Conflict Studies, University of Otago