Harvard Consultations on the need for, and feasibility of, an International AI Bill of Human Rights

AFP Harvard Bill of Human Rights Group

By Professor Yuval Shany, Accelerator Fellowship Programme.

On Tuesday, 10 June, we held the third in a series of closed expert consultations organized by the Accelerator Fellowship Programme. The first consultation took place in March in Oxford and was co-sponsored by the Bonavero Institute of Human Rights; the second, in May in Geneva, was co-sponsored by the Geneva Academy of International Humanitarian Law and Human Rights. This time, the meeting took place at Harvard Law School and was co-sponsored by the school’s Human Rights Program (directed by Professor Gerald Neuman).

The day’s program started with an open panel (the recording of which will soon become available online) on Artificial Intelligence - Promises and Perils for Human Rights, with Professor Neuman moderating a conversation between Professor Ifeoma Ajunwa, Professor Julie Cohen, Professor Larry Helfer and myself. Although the speakers approached the topic from a variety of angles, emphasising, in turn, dignitarian and economic considerations, the political economy and network environment in which AI systems operate, democratic legitimacy concerns, and the possible use of AI to strengthen human rights mechanisms, there was general agreement that a regulatory response to the societal and cross-societal disruption associated with AI technology, and with the private companies driving technological development, is urgently needed. What’s more, such a response must be anchored in human rights principles.

The closed consultation which followed (conducted under the Chatham House Rule) revolved around these and other themes. One participant opined that the main concern, from a human rights perspective, is the dehumanization associated with algorithmic reasoning and problem solving. While it would be useful to provide more guidance on how basic human rights principles, such as human dignity, apply to the use of AI systems, it would be difficult to tackle specific human rights problems without addressing the broader context in which AI systems are developed and deployed. Another participant similarly took the view that regulatory interventions should aim to address the sources of the problem – for example, the economic structures that facilitate the over-empowerment of tech companies and their ensuing ability to incorporate policy preferences into the technology itself.

Several participants considered the appropriate level of regulatory intervention with a view to enhancing the human rights protections it affords. While some recommended a sectoral approach, others noted that isolated sectoral regulation may result in normative clashes, so a global approach may be preferable. It was also asserted that regulation might need to be incremental – starting with the use of AI by governments or with relatively non-controversial issues (such as deepfakes). Still, others noted that the temporal window for introducing meaningful human rights standards, before the technology and the political and economic structures surrounding it become entrenched and resistant to change, may be very narrow. Moving fast is therefore of the essence.

The obstacles on the path of developing new norms in the field of AI and human rights were also discussed. Some participants highlighted the link between AI and national security, which is obstructing regulatory responses, while others referred to the current anti-regulatory political climate in the US and elsewhere, and to the limited interest of the companies themselves in effective human rights regulation. It was therefore suggested by one speaker that a human rights instrument (such as an AI bill of human rights) would play a mostly expressive role. On the other hand, one participant opined that countervailing forces – civil society, some groups of like-minded countries, and human rights institutions – could utilize new human rights standards to try to rein in excesses by big tech companies and develop bottom-up responses to them. A multi-stage approach – articulating standards, translating them for different sectors, and providing expert guidance – may also assist those actors willing and able to advance the AI and human rights discourse. And human rights have to be considered against other “stacks” of relevant regulation. Other participants also spoke in favor of resorting to flexible principles – which may be considered less demanding than other forms of regulation – to be complemented later on by more specific documents (or “living annexes”) when conditions are ripe for that.

On the substance of AI-related human rights, it was commented by one speaker that the complex and nuanced nature of many normative problems associated with AI systems – such as lack of transparency and the sourcing of training data – could complicate the direct application of human rights to them. Still, others conveyed an expectation that the human rights discourse should proceed beyond privacy and bias issues to cover other concrete challenges, such as human impersonation, artificial data, reparations and group rights. It was also remarked that some existing initiatives regarding AI and individual rights – including the 2022 White House Blueprint for an AI Bill of Rights – went, on the one hand, beyond existing law, but, on the other hand, did not go beyond what is acceptable to the AI industry. In any event, one participant remarked that new rights must be premised on actual risk assessments.

A few participants commented, in the concluding part of the consultation, that a document dealing with the criticisms against developing specific human rights standards for the use of AI systems would be a valuable addition to the existing discourse in the field, especially if it also engages with contextual factors and risks. One speaker recommended that such a document should identify, or call for the identification of, broad human rights principles, without engaging directly with institutional factors. It was commented, in this context, that even aspirational documents can have an empowering effect for users of AI technology.

The next consultation will be held in Pretoria in July 2025.