In November 2024, ICT4Peace launched the groundbreaking Toolkit From Boots on the Ground to Bytes in Cyberspace, prepared by Anne-Marie Buzatu, Executive Director of ICT4Peace, in cooperation with the International Code of Conduct Association (ICoCA) and with the support of the Swiss Federal Department of Foreign Affairs.
See the full set of tools in the Toolkit here.
This toolkit provides a comprehensive guide for IT companies and Private Security Companies (PSCs) on navigating the complex landscape of Information and Communication Technologies (ICTs) and their impact on human rights. It is designed for a wide range of private sector stakeholders, including security professionals, management, human rights officers, compliance teams, and technology teams, as well as government and civil society groups.
On the occasion of the AI Action Summit and the Paris Peace Forum, ICT4Peace issued a Call on private IT and cybersecurity companies to use toolkits, such as the one developed by ICT4Peace, that help identify, mitigate, and prevent algorithmic bias and discrimination in AI-driven systems:
Tool 7: Artificial Intelligence Algorithmic Bias and Discrimination
This tool offers practical guidance on responsible AI implementation, ensuring fairness, transparency, and compliance with global ethical standards and regulations.
Key Focus Areas of the toolkit are:
- Recognizing Bias – Assess its impact on IT companies, including ethical and legal risks.
- Detection & Auditing – Use systematic evaluations to identify bias in AI systems (a minimal illustrative check follows this list).
- Bias Mitigation – Apply technical and organizational strategies to ensure fairness.
- Human Oversight – Implement accountability measures for AI decision-making.
- Regulatory Compliance – Align AI practices with international laws and ethical frameworks.
- Training & Awareness – Equip teams with knowledge and tools to detect and address bias.
- Continuous Monitoring – Use auditing frameworks to track and reduce bias over time.
- Future-Proofing AI – Stay ahead of emerging AI risks, regulations, and best practices.
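The toolkit itself does not prescribe particular software, but the kind of systematic evaluation described under Detection & Auditing and Continuous Monitoring can be illustrated with a short sketch. The Python example below is not part of the ICT4Peace toolkit; it uses hypothetical data and names to show one common check: comparing an AI system's positive-decision rate across groups defined by a protected attribute and computing the disparate impact ratio (the informal "80% rule" flags ratios below 0.8 for closer review).

```python
"""Minimal bias-audit sketch (illustrative only, not part of the ICT4Peace toolkit).

It checks whether an automated system's positive-decision rate differs across
groups defined by a protected attribute, using two widely used indicators:
  - selection rate per group (share of positive outcomes)
  - disparate impact ratio (lowest rate / highest rate)
All data and names below are hypothetical.
"""

from collections import defaultdict


def selection_rates(records, group_key="group", outcome_key="approved"):
    """Return the positive-outcome rate for each group in `records`."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += 1 if r[outcome_key] else 0
    return {g: positives[g] / totals[g] for g in totals}


def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate."""
    return min(rates.values()) / max(rates.values())


if __name__ == "__main__":
    # Hypothetical audit log of automated decisions.
    decisions = [
        {"group": "A", "approved": True},
        {"group": "A", "approved": True},
        {"group": "A", "approved": False},
        {"group": "B", "approved": True},
        {"group": "B", "approved": False},
        {"group": "B", "approved": False},
    ]
    rates = selection_rates(decisions)
    ratio = disparate_impact_ratio(rates)
    print("Selection rates:", rates)  # per-group positive-decision rates
    print(f"Disparate impact ratio: {ratio:.2f}")
    if ratio < 0.8:
        print("Potential disparity: flag for human review and deeper auditing.")
```

Run periodically over fresh decision logs, the same check supports the Continuous Monitoring focus area: a ratio drifting below an agreed threshold would trigger the human oversight and mitigation steps described above.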
By adopting these principles, private IT companies can strengthen trust, enhance operational integrity, and ensure ethical AI deployment while minimizing legal and reputational risks.