Is Social Media Driving Instability?

By Sanjana Hattotuwa. Posted on 12 October 2020 on the New Zealand Classification Office official website.

What could I do to destabilise New Zealand? Quite a bit, as it turns out, given a few years and the unprecedented volatility brought about by the Coronavirus pandemic.

This question reflects no personal desire, nor would destabilisation be an easy endeavour with a guaranteed outcome. I ran it by the Classification Office during a visit as a thought experiment, based on ongoing research into social media’s role in helping spread anxiety, violence, and hate. What could be key drivers of instability? What could content aimed at stoking fear or anxiety look like? How could violence be encouraged, first digitally and then justified physically? Could anyone find out, day by day, what made Kiwis anxious or kindled their interest the most? What are the implications of this data when studied in the aggregate?

Sanjana with the Chief Censor, David Shanks

My work at the National Centre for Peace and Conflict Studies at the University of Otago seeks to answer some of these complex questions. I arrived in New Zealand after nearly two decades of advocacy and activism involving some form of internet or web-based technology in Sri Lanka. Before around 2012, the likes of Facebook and Twitter helped protect and promote content at risk of censorship or violent suppression. Over the past decade, however, social media platforms increasingly helped to seed and spread toxicity, hate and violence. What went wrong? There are many answers, but all revolve around the fact that leading social media companies in Silicon Valley simplistically assumed connecting people was a net good and democratic gain. They were wrong.

Research as far back as 2013 provided some of the first evidence globally of how violent extremists used Facebook to fuel Islamophobia in Sri Lanka. But it wasn’t until March 2018, after the country’s worst anti-Muslim riots in decades, that Facebook was compelled to meaningfully investigate the role its products and platforms played in spreading the violence. Long-overdue measures to prevent recurrence were undertaken, but it was too little, too late. A toxic genie was out of the bottle.

Though thousands of kilometres away, tragic lessons from Sri Lanka matter to New Zealand. It turns out that while social media is engineered to encourage the sharing of what we feel, fear, desire or do, what we end up posting – over time and also in real time – is often used to seed doubt, sow anxiety and spread anger. How and over which platforms this is done changes, as does the effectiveness of various types of content. However, in the wake of the pandemic – globally as well as in New Zealand – there is significant anxiety about job security, unemployment, the economy, health, travel and the future. Sophisticated domestic and international actors and political entrepreneurs seek to amplify these concerns for selfish benefit or partisan gain. While the magnitude of this risk is clear to many policymakers and academics, the public sometimes struggles to grasp it.

Imagine a day-care centre for children with just two staff named, purely for illustrative purposes, Facebook and YouTube. With a few children, all from the same neighbourhood and in a large room, Facebook and YouTube manage their roles just fine. They tend to the needs of the children, look out for deviant behaviour, help those in distress, watch out for risks and maintain healthy interactions amongst kids not too different from each other. There’s little to no violence, and if there is an outburst, it is quickly addressed. Now imagine this day-care centre, over a decade, growing to fill a skyscraper. Children of varying ages and backgrounds fill each floor. However, it’s still Facebook and YouTube managing all of them. Without any oversight, the children run wild. Anything goes, and without correction or guidance, bad behaviour carries no consequence and even becomes a template for achieving certain ends. Even with the kids they can see, Facebook and YouTube are completely overwhelmed by competing needs. Without adequate resources and support, things quickly disintegrate into total chaos.

No day-care centre with such nightmarish under-staffing could operate ethically or legally. And yet, this is not unlike the management and operations of leading social media companies, with billions of users. As in many other countries, New Zealand’s national conversation is increasingly mediated through social media platforms or products, governed by companies struggling to deal with toxicity, hate and violence. This complex ecology is ripe for abuse and weaponisation, in ways tried and tested elsewhere. The danger, to my mind, lies in an exceptionalism that sees New Zealand as mostly immune to democratic decay through the slow but steady drip of toxicity over web, internet and social media.

I believe this country offers many lessons in progressive policymaking and the regulation of social media. Undoubtedly, it will be a challenging and lengthy process. From the pandemic response, we know how careful study, contact tracing and evidence-based analysis help locate sources of infection and super-spreaders in a timely manner, enabling the strategic containment of a deadly virus. The same principles can be applied to social media content around hate and harm. New Zealand is well-positioned to lead this process. Instruments like the Christchurch Call highlight how social media platforms can no longer wish away the harm they feature and, often, amplify. Sober responses to emotional, divisive issues in a continually evolving media landscape are hard to imagine but essential to craft. My research contributes to what many others, in New Zealand and elsewhere, are doing to strengthen our better angels.

Ma tini ma mano ka rapa te whai. (By many, by thousands, the objective will be attained.)

Sanjana Hattotuwa is a PhD candidate at the University of Otago and Special Advisor at the ICT4Peace Foundation. His views do not necessarily represent those of the Chief Censor or of the Classification Office.