HUDERIA: New tool to assess the impact of AI systems on human rights

ICT4Peace welcomes the new Council of Europe tool that provides guidance for carrying out risk and impact assessments of Artificial Intelligence (AI) systems. We believe stakeholders need not only high-level principles for the ethical and secure use of AI but, more importantly, practical guidelines and methodologies to assess the potential impact of AI applications on human rights. That is why ICT4Peace has developed, for instance, a Toolkit on “Artificial Intelligence Algorithmic Bias and Discrimination” for Cybersecurity Services Companies.

“The HUDERIA Methodology is specifically tailored to protect and promote human rights, democracy and the rule of law. It can be used by both public and private actors to help identify and address risks and impacts to human rights, democracy and the rule of law throughout the lifecycle of AI systems.

The methodology provides for the creation of a risk mitigation plan to minimise or eliminate the identified risks, protecting the public from potential harm. If an AI system used in hiring, for example, is found to be biased against certain demographic groups, the mitigation plan might involve adjusting the algorithm or implementing human oversight.

The methodology requires regular reassessments to ensure that the AI system continues operating safely and ethically as the context and technology evolve. This approach ensures that the public is protected from emerging risks throughout the AI system’s life cycle.

The HUDERIA Methodology was adopted by the Council of Europe’s Committee on Artificial Intelligence (CAI) at its 12th plenary meeting, held in Strasbourg on 26-28 November. It will be complemented in 2025 by the HUDERIA Model, which will provide supporting materials and resources, including flexible tools and scalable recommendations.”

The release of this tool follows the adoption of the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, which opened for signature on 5 September 2024. See here the text of the Framework Convention and its Explanatory Report.

“Relationship to the Framework Convention

The HUDERIA is a stand-alone, non-legally binding guidance that, as such, does not have legal effect. It is not mandatory, nor intended as an interpretive aid for the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, hereinafter referred to as “the Framework Convention”. Many existing or future frameworks, policies, guidance, standards or tools may be used to assist in conducting AI risk and impact management, including the HUDERIA.

Parties to the Framework Convention have the flexibility to use or adapt the guidance, in whole or in part, to develop new approaches to risk assessment or to use or adapt existing approaches in keeping with their applicable laws, provided that Parties fully meet their obligations under the Framework Convention, including, in particular, the baseline for risk and impact management set out in its Chapter V.”

ICT4Peace has been working on the potential impact of AI on society, democracy and human security for many years. Please find here some references to our work:

ETHICAL AND POLITICAL PERSPECTIVES ON EMERGING DIGITAL TECHNOLOGIES

HIGH-LEVEL PANEL ON DIGITAL COOPERATION REFLECTIONS AND RECOMMENDATIONS FROM THE ICT4PEACE FOUNDATION (2019)

Digital Human Security 2020: Human security in the age of AI: Securing and empowering individuals (2018)

Artificial Intelligence: Autonomous Technology (AT), Lethal Autonomous Weapons Systems (LAWS) and Peace Time Threats (2018)