Lived experience over liked algorithms: The enduring primacy of human agency in peacebuilding

By Dr Sanjana Hattotuwa, Special Advisor, ICT4Peace

Sanjana Hattotuwa was invited by the Social Research Institute at Chulalongkorn University to deliver a presentation on artificial intelligence and peacebuilding on 24 June 2025, during the 3rd UNESCO Global Forum on the Ethics of Artificial Intelligence, held in Bangkok.

Distinguished colleagues, faculty, students, participants, and invitees. Thank you for this
invitation to speak.

I want to begin with a confession. When I first encountered the work celebrated at the
Kluz Prize for Peacetech, which was mentioned in the material sent to me by the
organisers, I felt something I hadn’t experienced in years of studying the role, reach, and
relevance of technology in peacebuilding: a genuine sense of wonder, and optimism.
Here were AI systems that could ostensibly predict conflicts three years before they erupted. Satellites that could find hidden water reserves from geostationary orbit. Platforms that enabled shared understanding by bringing millions into dialogues for peacemaking across previously insurmountable barriers of geography, and language.

The Kluz Prize for Peacetech presents examples of the world as it should be, or at least of how AI can be, and already is, used for peacebuilding. And yet Ukraine, Sudan, Syria, Congo, Gaza, and Iran, aside from the long shadow of so many other conflict zones, define the world as it is – to recall the title of Ben Rhodes’s book – a world in which AI is, and I argue will remain, far less useful than it is often presented to be.

To believe that AI can somehow magically, urgently, and enduringly prevent, mitigate,
and transform violent conflict – a view shared by many well-intentioned people, and
even institutions – reveals something deeper, and more crucial about our zeitgeist. We
stand at a crossroads between two fundamentally different visions of how peace is built
– and perhaps even how we see or understand what peace really is.

One vision sees conflict as a technical problem benefiting from or awaiting a sufficiently sophisticated or optimised algorithmic solution. In this vision, scaled-up computing power, matched with reams of data, heralds the end of violence. The other recognises peace as an
irreducibly human achievement: entwined with justice, deeply political, gendered,
grounded, and emerging from the messy, unpredictable, always fluid, culturally-
specific work of relationship transformation.

In my comments today, I want to examine both the genuine innovations emerging from
the intersection of artificial intelligence and peacebuilding, and the dangerous illusions
that threaten to undermine the very foundations of a just, sustainable peace. Both of
these are happening simultaneously. I argue that this isn’t merely an academic
exercise. The choices we make about how to integrate, or resist, algorithmic approaches to conflict transformation have inter-generational consequences, and will
shape societies by influencing beliefs, attitudes, perceptions, and responses.

Let me begin with what the optimists get right. The Violence and Impacts Early-Warning
System, or VIEWS, developed by Uppsala University and the Peace Research Institute
Oslo, represents a genuinely remarkable achievement in pattern recognition. By
analysing vast datasets encompassing conflict history, political events, and
socioeconomic indicators, VIEWS can identify potential conflict hotspots up to three
years in advance. This is not trivial. Early warning has long been the holy grail of conflict
prevention. If we can see violence coming, we can do a lot more to prevent it.

Similarly, Lunasonde’s satellite technology offers something that would have seemed
like science fiction just decades ago: the ability to map underground water resources
from orbit. In regions where water scarcity drives conflict, such technology could
theoretically prevent violence by revealing previously unknown resources. The Danish
Refugee Council’s DEEP platform processed humanitarian information for over 7,500
users across 90 countries before its funding was cut. Aerobotics7’s landmine detection
drones achieve 95% accuracy while operating fifty times faster than traditional
methods.

These are not minor achievements. They represent genuine advances in our technical
capacity to gather, process, and act upon information relevant to peace, and conflict.
And yet, when we examine them through the lens of what peacebuilding actually
requires, fundamental limitations become apparent.

I bring to this discussion over twenty-five years of practical experience at the
intersection of technology and peacebuilding. In 2002, I was the chief architect for a
technology stack supporting the peace negotiations in Sri Lanka, using what was then
cutting-edge collaborative software to support conflict transformation. It was the first
of its kind in the world. This experience, combined with decades of work across five continents, has given me a unique vantage point from which to assess both the promises, and perils of technological approaches to conflict transformation. Recent writing on the role and relevance of AI in peacebuilding stresses a fundamental point:

AI’s emphasis on data, and information doesn’t meaningfully translate into, or transform into, the knowledge or experience required for transforming protracted, violent conflict.

Consider the promise of AI in practice. An AI system can process billions of data points
about ethnic tensions in a given region. It can identify patterns that precede violence
with remarkable accuracy. But can it understand why a particular insult, meaningless
to outsiders, carries the weight of centuries of psychosocial humiliation for a specific
community? Can it grasp the inscrutable logics that drive human behaviour in conflict
(or for that matter, romance, and love)? Does it grasp the psychological imperatives
that make people choose death over dishonour, or revenge over prosperity? Can it
understand, and share generational stories which encode longing, belonging, loss,
land, livelihoods, and lives lost?

Data can’t fully capture embodied, and lived experiences, or the accrued cultural knowledge of inter-generational trauma. And any partial capture is a dangerous lie that, in AI systems, risks being rendered in beguilingly convincing presentations. Peace is complicated, unsettled, a process, not an end point. It is corporeal as much as it is conceptual. Much of it – like oral histories, and cultural practices – isn’t encoded in machine-readable formats or in the information repositories on which LLMs are based.

In a recent op-ed published in the New York Times, Dr Molly Worthen’s analysis of
charisma as storytelling that invites followers into a transcendent narrative powerfully
reinforces my argument about the irreplaceable human elements in peacebuilding. Dr
Worthen is a historian at the University of North Carolina, Chapel Hill. Her distinction
between charm (which AI can replicate through programmed social skills) and true
charisma (which emerges from offering people meaningful roles in a larger story)
illuminates precisely why algorithmic approaches fail in conflict transformation.

Nelson Mandela exemplified this distinction perfectly: his “quiet charisma” had nothing to do
with the charm that chatbots can simulate, and everything to do with his ability to invite
an entire nation (both oppressed, and oppressor) into a revolutionary narrative of
reconciliation, and shared humanity. No AI system, however sophisticated its pattern
recognition or natural language processing, can replicate a Mandela-esque “moral
imagination”, which renowned peace scholar John Paul Lederach described as the capacity
“to recognise turning points and possibilities in order to venture down unknown paths
and create what does not yet exist”.

The problem also runs deeper than mere technical limitations. When we examine who
builds these technologies and what assumptions they embed, troubling patterns
emerge. The vast majority of AI development for peacebuilding occurs in Western
institutions, encoding Western understandings of democracy, governance, and conflict
resolution. This is not a neutral technical choice. It is an exercise of power that shapes
what kinds of peace are seen as possible or desirable.

Consider the concept of “democratic values” that AI systems are meant to promote. Whose democracy? Which values? One has only to look at the horror, and havoc the United States, and its allies unleash on the world to recognise the dire perils of AI architectures for peacebuilding defined, built, and sold by those who rely on, profit from, and are part of the military-industrial complex, which includes harvesting data from lives that don’t matter to them.

The liberal peace model that dominates Western peacebuilding assumes particular
arrangements of state power, market economics, and individual rights. When these
assumptions are encoded in algorithmic systems and deployed globally, they become
tools of what scholars call “epistemological violence” – the violent destruction of other
ways of knowing and being in the world. This includes grounded, and gendered frames that completely elude the sexist, misogynist, and racist biases in so many AI architectures today.

This is not an abstract concern. In practice, it means that an AI system trained on
Western peace agreements might completely miss the significance of traditional
reconciliation practices. It might optimise for written agreements between official
representatives while ignoring the patient work of rebuilding trust between
communities. It might privilege efficiency over the slow, culturally-specific processes
through which sustainable peace is actually built.

Existing digital divides compound these problems. Digital peacebuilding tools
systematically exclude many of those most affected by conflict. Rural communities,
elderly populations, and those without digital literacy are not peripheral actors in peace
processes. They are often the key stakeholders whose buy-in determines whether
peace endures or collapses. How can, and does AI capture their hopes, and anxieties?

Even more troubling is how easily technologies developed for peace can be turned to
oppression. The case of Xinjiang provides a chilling illustration. Facial recognition
systems that might theoretically help monitor peace agreements have been
weaponised into tools of systematic persecution, complete with “Uyghur alarm”
capabilities. This is not an aberration – it is a predictable consequence of building
powerful surveillance technologies without sufficient attention to how they will be used
in practice, especially by Silicon Valley cultures, and corporations.

But perhaps the deepest challenge to algorithmic peacebuilding lies not in its technical
limitations or potential for abuse, but in its fundamental misunderstanding of what
peace requires. Peace is not a problem to be solved. It is about stories. It is about
relationships – lost, found, torn, and transformed. It is about embodied lives.

This is rendered sharply in the Navajo Nation’s peacemaking practices, which focus not
on punishment but on restoration. When conflicts arise, communities gather not to
determine guilt and innocence through algorithmic assessment, but to repair
relationships and restore harmony. This one example points to elements of
peacebuilding that resist algorithmic capture. Trust, for instance, emerges from
countless small interactions: a kept promise, a shared meal, brewing tea, a hug, going
to another’s home, a moment of unexpected kindness. Research even shows that
physical touch alone can reduce violence between individuals. How do we encode the tactile, and restorative justice, in a large language model?

The danger is not that AI will fail to support peacebuilding, but that it will succeed very
well at prioritising the wrong things. By making certain aspects of peacebuilding more
efficient – around data gathering, pattern recognition, and communication, for example – we
risk obscuring the elements that actually matter. We create what looks like progress
while missing the deeper work of relationship transformation.

This brings us to a crucial question, including for this conference: if algorithmic
approaches or AI modelling around peacebuilding remain so inherently limited, should
we abandon them entirely? Obviously, the answer is no, but only if we fundamentally
reconceptualise their role beyond what’s presented today as leading examples. AI can,
and should augment human agency, and capacity for the transformation of violent
conflict, but can never replace situated knowledge, experience, embodied forms,
rituals, and conceptual frameworks of justice, and peace rooted in specific cultures.
What would this look like in practice?

First, it requires what I call “technological humility”: recognising that our most
sophisticated AI systems are tools, not solutions. They can help us see trends or
patterns we might miss, process information at scales beyond human capacity, and
facilitate communication across barriers. But decisions around trust, risk taking,
balancing competing claims for justice, and related areas resist algorithmic
representation or replication.

Second, AI for peacebuilding demands a decolonial approach to peace technology
development. Instead of building tools in Silicon Valley or Geneva for use in South
Sudan or Sri Lanka, we must begin with the communities affected by conflict. There can
be no AI for peace without those impacted by violence involved in its design,
development, and deployment. What are their concepts of peace? Who are the
architects? How do they want to get to peaceful end states? What are their
mechanisms for building trust, and resolving disputes? How is truth constructed, and
by whom? How can technology support rather than supplant existing capacities, and
simultaneously acknowledge shortcomings, and myopic worldviews? How these
questions are answered, and by whom matters.

Third, it requires constant vigilance against the militarisation, and weaponisation of
peace technologies. Every tool we create for building peace can be turned to purposes
of surveillance, control, and oppression. This is not a bug to be fixed but a fundamental
characteristic of dual-use technologies that must shape how we design, deploy, and
govern these systems.

Let me be clear: I am neither advocating a retreat into technological pessimism, nor nostalgic for a fictional past in which, without technology, peacebuilding was somehow more authentic. AI is here, and it will only make greater inroads into peacebuilding praxis, and theory. The challenges facing humanity around climate change, forced migration, resource scarcity, and related issues occur with a frequency, complexity, and scale that demand technological assistance to capture, clarify, and meaningfully respond to. However, we must strongly resist the seductive myth, often sold to the Global
Majority by those with the least experience in violent conflict, that peace is primarily a
technical problem awaiting a technical solution.

The most profound insight from a critical examination of both the promises, and perils
of AI in peacebuilding is this: sustainable peace emerges not from algorithmic
optimisation, but from courage, conviction, and creativity. The role of political will, linked, as I mentioned earlier, to charismatic, principled leadership, remains fundamental. In fact, a multipolar world defined by volatility, uncertainty, complexity,
and ambiguity requires our political leaders to be even more empathetic, and
principled. The clearest evidence of violent conflict’s threats, even when provided by AI,
is useless without those with, and in power making the decisions to avoid death, and
destruction.

In short, to understand AI’s severe limits is, counter-intuitively, to be better positioned
to appreciate where, when, and how it can best help.

Thank you very much.
