Pretoria Consultations on the need for, and feasibility of, an International AI Bill of Human Rights


By Dr Yohannes Eneyew Ayalew, Postdoctoral Fellow at #3GDR Project

On Tuesday 15 July 2025, the fourth and final in a series of closed expert consultations organised by the Accelerator Fellowship Programme took place in Pretoria. The consultation was co-sponsored by the Centre for Human Rights, Faculty of Law, University of Pretoria and the Global Center on AI Governance. From Oxford to Geneva, Harvard to Pretoria, four corners of thought were united by a single, ambitious aim: the call for an International AI Bill of Rights.

After welcome remarks and introductions by representatives of the organising institutions, the programme began with a presentation by Professor Yuval Shany, followed by a closed consultation conducted under the Chatham House Rule. Professor Shany explored the application of international human rights law (IHRL) to AI systems, drawing on national initiatives such as the White House’s 2022 Blueprint for an AI Bill of Rights (now defunct), as well as international developments shaped by the Council of Europe (CoE), the European Union (EU), and the African Union (AU). He contrasted the CoE’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, which he described as a general framework convention, with the EU’s AI Act, which provides detailed guidance on AI-related risks. While early regional instruments in Europe and Africa were primarily concerned with protecting data subjects, initiatives such as the CoE’s Convention 108+ and the AU’s Malabo Convention also sought to protect the right not to be subjected to automated decision-making, a formulation of the broader right to a human decision.

Following the opening presentation, participants raised a range of thoughtful questions and proposals concerning the potential need for a new AI Bill of Rights at the international level. One expert highlighted the erosion of human agency, particularly regarding freedom of expression, opinion, and thought, as AI technologies increasingly encroach on cognitive liberty and mental autonomy. It was emphasised that the human rights implications of AI extend far beyond individual rights, with significant implications for socio-economic and collective rights. This is particularly pertinent in regions such as Africa, where human rights are conceptualised not only in individual terms but also through a communal lens.

A recurring theme throughout the consultation was the role of non-state actors, especially major technology companies. Participants underscored the importance of ensuring their accountability in any future AI rights framework. In this context, several contributors pointed to Articles 27 and 28 of the African Charter on Human and Peoples’ Rights, which articulate a cluster of duties that could provide a basis for holding technology companies accountable as part of a broader human rights regime.

There was also considerable discussion on the appropriate level and nature of regulatory intervention required to safeguard human rights in the AI context. While some favoured a sector-specific approach, others cautioned against fragmented regulations that could produce normative inconsistencies, advocating instead for a unified global framework. Some participants even proposed a binding international instrument, whether a regulation, protocol, or treaty, arguing for a rebalancing of priorities from innovation to human rights. Others, however, expressed scepticism regarding the feasibility of such a treaty, citing longstanding political resistance from states and the UN. As a more pragmatic alternative, several contributors proposed the adoption of ‘soft law’ tools, such as interpretative declarations, resolutions, or guidelines, which, while not legally binding, could nonetheless wield moral and political influence.

Other voices in the room challenged the relevance of international legal frameworks altogether, given what they described as a deepening crisis in the international system. They questioned whether the IHRL approach to AI is currently viable, citing the weakening of democratic institutions and multilateral cooperation worldwide, as well as what some termed an ‘unholy marriage’ between tech billionaires and states. In their view, IHRL alone cannot provide a comprehensive solution to the challenges posed by AI.

On the substance of AI-related human rights, many participants stressed the importance of grounding the conversation in African philosophies, social contexts, and cultural understandings of human dignity. Concepts such as peoples’ rights and communal duties, core principles within African human rights traditions, were highlighted as essential components of any future framework. Suggested rights included:

  1. The right of access to AI technologies
  2. The right to control data used by AI systems
  3. The right to algorithmic fairness and protection from bias
  4. The right to human decision-making and interaction
  5. The right to algorithmic transparency and explanation

While some questioned whether these rights represented new protections, arguing that similar principles are already found in existing international instruments, others emphasised their unique relevance within African contexts. It was suggested that formal recognition of such rights, possibly including a new right to protection of mental integrity from manipulation by AI systems, could be a crucial step towards more inclusive and effective safeguards. These rights were described as essential tools for protecting human dignity, agency, and data justice, ensuring that technological progress does not come at the cost of fundamental values.

The day concluded with an open panel discussion titled Do We Need a New AI Bill of Human Rights? A Regional Perspective. Moderated by Dr Fola Adeleke, the panel featured Professor Yuval Shany, Professor Emma Ruttkamp-Bloem, and Professor Jake Effoduh. The panellists considered whether existing international human rights treaties are equipped to address the governance challenges posed by AI. While approaching the issue from different perspectives, all three underscored the importance of power, epistemology, human interaction, and ethics, not only as means of preventing rights violations but also as foundations for the legitimacy of human rights frameworks. These principles, they argued, should guide the design, regulation, and deployment of AI technologies, ensuring that meaningful safeguards are embedded in practice.

Across the day’s discussions, there was broad agreement on the urgency of global AI governance that centres human rights, before entrenched technological and geopolitical power structures become even harder to shift.