The Paris AI Summit: The cleavages over global governance made apparent

While the US and UK declined to endorse the Summit’s final statement, China did, no doubt with an eye to playing a leading role in the future of AI governance.

By Amb. Paul Meyer, Senior Advisor, ICT4Peace Foundation and Adjunct Professor of International Studies at Simon Fraser University.

It was always an ambitious project of the French President to convene an “AI Action Summit” in Paris (February 10-11, 2025) bringing together participants from over 100 countries, including government, private sector and civil society representatives. The selected themes of the meeting: i) Public Interest, ii) Future of Work, iii) Innovation and Culture, iv) Trust in AI and v) Global AI Governance would each have merited an international conference of their own. Given the state of the world, however, many eyes were on the global governance topic – could Paris begin to harmonize and rationalize the proliferation of competing AI governance declarations, compacts and standards? Alas, on the basis of its chief outcome document, the “Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet”, it would seem that we still have a long way to go before the international community can rally behind a common framework for AI governance.

The goals of the summit to ensure that AI “is open, inclusive, transparent, ethical, safe, secure and trustworthy” seem both commendable and conducive to achieving common positions, but the understanding of these concepts remains disparate and often opaque. The statement, beyond its recognition of “the need for inclusive multistakeholder dialogues and cooperation on AI governance,” provided little guidance on how and where these processes would be undertaken.

Most troubling was the absence among the 60 states endorsing the statement of two leading AI powers – the United States and the United Kingdom. Whereas the US rejection reflected the hostile attitude of the Trump Administration to greater regulation of AI, the UK’s absenteeism was harder to explain given its previous championing of further international coordination on AI standards, as displayed at the 2023 Bletchley Park AI Safety Summit. The speech by US Vice President J.D. Vance made clear American concerns over excessive regulation of AI, which could “kill a transformative industry just as it’s taking off”. The source of British reticence was less evident. A UK government spokesman, in explaining the failure to sign on, stated: “We felt the declaration didn’t provide enough practical clarity on global governance nor sufficiently address harder questions around national security and the challenge AI poses to it”. One cannot help but wonder if a desire to sustain the “special relationship” with the US was more of a factor than the putative deficiencies of the text.

The defection of the US and the UK was thrown into high relief by the fact that China did sign on to the statement, suggesting that it saw an opportunity to assert leadership on the evolution of AI governance in the future. The Global South was well represented among the signatories of the statement, although certain significant countries, primarily from the Middle East, were absent (e.g. Türkiye, Pakistan, Israel, Egypt, Iran, Jordan, and Saudi Arabia).

In his remarks, the UN Secretary General, Antonio Guterres, tried to steer attention back onto the Global Digital Compact, which was adopted last September at the UN’s “Summit of the Future”. Although it received universal endorsement, the Compact’s main action items were to call for a “Global Dialogue on AI Governance” within the UN and the establishment of an Independent International Scientific Panel on AI. While both of these outcomes are redolent of a “let’s create a royal commission to study the problem” approach, they do sustain a shaky consensus over the need to address the global AI governance deficit before it becomes even more fragmented. In the words of the Secretary General, the world needs to come together “around a shared vision: One where technology serves humanity, not the other way around”.

The dangers inherent in the failure of the international community to agree on some common standards regarding AI were outlined in a brief statement issued by the Action Summit’s working group on AI Governance (co-chaired by France and India), which had engaged 29 states, 6 international organizations, 7 tech companies and 10 civil society organizations and had met over the seven months leading up to the Paris meeting. The co-chairs’ statement called for “a collective and concerted response” to the opportunities and risks engendered by AI, if for nothing else in order “to prevent a frenetic race from turning into a ‘race to the bottom’ and losing sight of the fundamental requirements of safety and respect for human dignity”.

As if in anticipation of the anti-regulation stance of the new US Administration, the working group made clear from the outset that “governance” did not systematically mean regulation, but included a wide range of modalities for action: codes of conduct, voluntary commitments, sharing of best practices, open standards, etc. While this effort to sugar-coat what some view as the bitter pill of state-led governance was not sufficient to bring onboard the skeptics, it does suggest that a variety of actions may be required to gain acceptance for “global guardrails” on AI applications.

In the event, the working group had to acknowledge that its exchanges “on the question of the interoperability of standards and public policies” remained “inconclusive”, although the co-chairs believed that the “design of an AI governance system” was justified. Such a design will not be readily forthcoming, especially with the current proliferation of diverse and contending AI governance frameworks and associated declarations on AI safety and trustworthiness. The move to codify some of these ideas, as per the Council of Europe’s treaty on “Artificial Intelligence, Human Rights, Democracy, and the Rule of Law”, may yield more binding commitments, but could also repel those who believe it was the product of a closed-shop negotiation and that they were not given an opportunity to influence the outcome.

Canada had a relatively high profile at the AI Summit, with the Prime Minister participating in person. His remarks avoided the policy challenges of the governance issue, concentrating instead on Canada’s attractiveness as a partner for business in AI innovation. He also noted Canada’s G7 presidency and his intention “to demonstrate leadership in advancing security, prosperity and partnerships…”. While Canada had no problem in signing on to the Summit Statement, it remains to be seen how much attention Canada will devote during its G7 presidency to the important, but tricky, issues of defining global governance standards. The potential changes in government over the course of this year may also impede the development of Canadian positions and the conduct of an active diplomacy on AI.

The Paris AI Action Summit was a bold initiative by France, and one that has stimulated considerable thought and engagement by a variety of stakeholders. It was held at a particularly turbulent time in international relations, which inevitably worked against its aim of “a shared vision for humanity”. It will require a sustained investment in multilateral (and multi-stakeholder) dialogue and diplomacy if some semblance of AI governance coherence is to emerge.


A compilation of papers, op-eds, lectures, etc. by ICT4Peace on AI, autonomous systems, LAWS and peacetime threats since 2018 is available here.

This text appeared first in www.opencanada.org