Sanjana Hattotuwa, Special Advisor at the ICT4Peace Foundation, was invited to speak at a workshop series hosted by the Oak Foundation and the Children’s Investment Fund Foundation (CIFF), and convened by Reos Partners. Presenting on the future threat landscape of disinformation, Sanjana joined First Draft’s Claire Wardle and Sarah Oh in laying out the global disinformation and misinformation landscape. A select group of over 60 of the world’s leading institutions, academics, thought-leaders, funders and platforms dealing with information disorders and the resulting online and offline harms were present at the inaugural session, conducted on 6 December 2021.

Hattotuwa’s presentation began by problematising the democratisation of voice and the way it also amplifies hate by extending the reach of harms. Noting that the democratisation of access was desirable and imperative, Hattotuwa complicated the normative assumption that expanding the surfaces that produce and promote voices would invariably, or even overwhelmingly, strengthen liberal democracy.

Moving on from this, Hattotuwa said that the splintering of the public sphere, away from the observable and into micro-spheres of influence, or many publics, posed a challenge to the study of, and timely response to, disinformation. He asked what disinformation would look like in the metaverse, drawing on writing from over 15 years ago on Second Life’s potential for peacebuilding, and on contemporaneous reports of exploding pigs and riots after a French extreme-right political party opened an office on that platform. What, he asked, could be the consequences of Meta’s metaverse, and other metaverses, for disinformation’s seed and spread?

He followed this up by problematising the challenges around research into disinformation when platforms, in addition to splintering, were also encrypted. He questioned what, for example, Meta/Facebook’s completion of its ‘privacy pivot’, alongside the company’s growing hostility towards independent researchers critical of its policies and practices, and larger industry issues around research collaboration and access, would mean for governments and civil society interested in curtailing the harms and offline consequences of deleterious digital content.

Drawing on his doctoral research, he questioned the degree to which civil society, governments and institutions interested in, and involved with, responses to disinformation and information disorders understand the complicated nature of social media platforms, which simultaneously produce, present and propagate content that both helps and harms democracy. This narrative, he noted, was more complex than the presentation of social media platforms as evil incarnate, contributing only to significant harm. Understanding this, Hattotuwa noted, was key to harnessing the potential, however limited and episodic, of platforms that amplify the worst of humanity to also appeal to our better angels.

Hattotuwa noted that the migration of disinformation to email, SMS, websites, terrestrial radio, TV, newsprint, and word-of-mouth vectors presented as much of a problem as its spread on social media platforms. This point was connected to the need to study disinformation through contextual, grounded lenses that more fully appreciate how entrepreneurs of discord use old media platforms, and existing media architectures, as scaffolding for the instrumentalisation of anger, anxiety and long-standing socio-political division. Hattotuwa noted that this was a classic ‘wicked problem’, and would grow far more complex in the future, from the Global South outwards.

Speaking to his post-doctoral research and work in Aotearoa New Zealand, Hattotuwa presented disinformation ecologies as a viable framework for the study of the phenomenon, noting that in the future, the AI and algorithmic underpinnings amplifying harms would intersect with offline cultures, communal relations, political traditions, media landscapes and societal beliefs to create complicated disinformation structures that resist formulaic, funder-favourite log-frame and Global North inspired attempts to study, respond to, or meaningfully contain them. Disinformation, Hattotuwa stressed, was a socio-technological and socio-political landscape that could not be addressed by technocratic approaches.

To this end, and in closing his presentation, Hattotuwa noted that the endemicity of disinformation would require all-of-society and all-of-government approaches to tackle the issue, inspired by the Christchurch Commission Report’s stress on social cohesion, a recommendation with enduring and significant validity globally. In a future where we would have to live with, and countenance, far more complex and corrosive disinformation, he noted that it would only be through the meaningful strengthening of socio-political relationships, including through first-principles and democratic governance, that the offline consequences of digital harms could best be managed.