Consultations in Geneva on the need for, and feasibility of, an International AI Bill of Human Rights


By Professor Yuval Shany, Accelerator Fellowship Programme.

On Thursday, 15 May, we held in Geneva the second in a series of closed academic consultations organised by the Accelerator Fellowship Programme. Whereas the first event took place in March in Oxford and was co-sponsored by the Bonavero Institute of Human Rights, the second event was co-sponsored by the Geneva Academy of International Humanitarian Law and Human Rights, which also hosted the event. The participants in the consultation session included academics, UN officials, members of leading NGOs and legal practitioners.

Like the Oxford event, the consultations started with introductory remarks in which I explained the academic goal of the project – the preparation of a White Paper (policy research paper) that will evaluate the need for, feasibility of, and potential contents of an International AI Bill of Human Rights. I noted that the project is premised on the observation that existing documents in the field of AI and human rights are sometimes not well grounded in international human rights law, as they are often too general in their language or too specific in their legal requirements. Against this background, the introduction of a non-binding document that clearly sets out the manner in which existing and emerging human rights apply to AI systems could provide states, international bodies, technology companies, civil society and other stakeholders with useful guidance, even though the current likelihood of reaching broad international consensus on such matters appears to be small. At the same time, there are also serious concerns about the need to avoid inflation in standard-setting, including in the identification of new human rights.

The consultations proceeded thereafter under the Chatham House Rule. The first phase related to the utility of the project. Some participants expressed the view that existing human rights laws have the potential to provide sufficient normative guidance, but that interpreting them through litigation or legal opinions is a time-consuming effort. A “middle level” guidance document that offers specific human rights modalities for interpretation can therefore be useful. One participant expressed concern, however, about the democratic legitimacy implications of delegating to experts the task of interpreting human rights for AI, as opposed to deferring to political law-making processes and the democratic deliberations such processes can include. Another participant expressed frustration with the difficulty of translating existing human rights standards into technical standards. The shortcomings of the current state of human rights law create a risk that much of the technical standard-setting in the field will be undertaken without a meaningful human rights component. A few participants mentioned the need to ensure effective human rights risk assessment and risk mitigation. They considered the utility, in this regard, of generating more information on the possible human rights implications of AI systems. One participant also commented that some states have capacity problems, which prevent them from effectively monitoring the compliance of AI companies with human rights standards.

The next stage of the consultations revolved around the following question: how unique is the human rights challenge posed by new and emerging AI technologies – that is, are the harms produced by AI systems different in kind from those produced by other causes? One participant opined that some harms are similar – albeit harder to detect (e.g., algorithmic bias and breach of data privacy) – yet other harms are different in nature: most significantly, the inability of humans to understand AI outcomes, which produces new epistemic harms, and the inability of machines to relate to human beings at a human level, which adversely affects the human basis of important social relationships. Another participant noted the concern about human governance as unique to AI systems. Such new harms and concerns may justify the emergence of new human rights. In that connection, yet another participant commented that whereas there should be a presumption in favour of relying on existing rights – due to fears of rights inflation – that presumption should be overcome on suitable occasions. This is partly because over-extending existing rights also has practical and theoretical costs. Another participant identified the central role of companies in the realm of AI as adding another layer of complexity under existing law.

The consultations then moved on to discuss specific aspects of any possible AI bill of human rights and the prospects for its introduction. A few participants expressed the concern that compiling a closed list of human rights particularly relevant to AI systems might send the wrong message, according to which other human rights are not applicable. At the same time, one participant criticized existing documents for tending to exclude military AI from the scope of the protections afforded and welcomed a document that would also apply to military systems. As for the chances of normative progress, one participant noted the problem of corporate capture as a possible reason for the limited international appetite to introduce new rights. Another participant took the view that the reluctance to introduce new AI human rights may be linked, in the EU context, to a perception that AI regulation must be technology neutral. Hence, a risk-based approach which does not tackle the technology itself but its impacts was deemed preferable. It was also noted in this regard that there are already too many international resolutions on AI and human rights that do not cohere with one another. Hence, it would be useful to identify UN entities with a coordinating role, like the new office for digital and emerging technologies, and to interest them in this coherence-enhancing initiative.

Finally, the participants discussed some of the ways in which AI technologies can be harnessed to support the monitoring by international bodies of compliance with human rights obligations. One participant provided information on the existing data analytics activities of the OHCHR and on privacy safeguards introduced in this connection. Another participant spoke about the use of AI in predictive analysis around humanitarian catastrophes and situations of mass displacement. It was noted by several participants that AI systems could help streamline many of the human rights procedures before international bodies.

The next consultations will be held at Harvard and in Pretoria, in June and July, respectively.