Aligning AI for Human Flourishing: a roundtable discussion on responsible AI


Written by Imogen Rivers, DPhil Scholar at the Institute for Ethics in AI

On 10 July, the Accelerator Fellowship Programme of the Institute for Ethics in AI, University of Oxford, hosted an expert roundtable on the topic of ‘Aligning AI for Human Flourishing’ at the House of Lords in London.  

The event opened a two-day series taking place in London and Oxford to celebrate the Accelerator Fellowship Programme spearheaded by the Institute’s Director of Research, Dr Caroline Green.  

Bringing together leading voices from academia, tech, industry, policy, and civil society, the roundtable explored the current state of research and practice in AI alignment.  

The experts explored three key questions:

  1. What technical challenges does AI alignment create? 
  2. What does AI alignment mean for businesses? 
  3. What are the policy and legal implications of AI alignment? 

Baroness Beeban Kidron OBE, advisor to the Institute for Ethics in AI, Commissioner on the UN Broadband Commission for Sustainable Development and expert advisor for the UN Secretary-General’s High-Level Advisory Body on Artificial Intelligence, inaugurated the discussion. 

The technical challenges of AI  

Opening the roundtable, Professor Sir Nigel Shadbolt, Principal of Jesus College, Professor of Computing Science at the University of Oxford and member of the UK government’s Council for Science and Technology, emphasised the challenge of ensuring that advanced AI systems align with human values and intentions. The rapid capabilities demonstrated by recent AI models have raised both ethical and technical concerns. Sir Nigel proposed a layered ethical approach, combining theories such as consequentialism, deontology, and contractualism. 

Dr Iason Gabriel, philosopher and senior staff research scientist at Google DeepMind, highlighted the importance of managing the ‘agentic turn’, referring to the emergence of AI systems behaving increasingly like autonomous agents. Gabriel introduced the ‘tetradic model’, advocating democratic governance involving AI agents, users, developers, and society. 

Participants highlighted deeper underlying concerns, notably the necessity of aligning AI creators rather than merely the technology itself. Several recommended holding discussions in accessible language and relatable contexts, so that ethical debates remain directly relevant to people’s everyday experiences and concerns. 

One participant highlighted: ‘We’re asking the wrong question: we shouldn’t be talking about aligning AI but aligning the creators of AI with a moral theory.’ 

The business implications of AI - ‘AI, the new printing press’  

Professor Philipp Koralus, the McCord Professor of Philosophy and AI and Director of the Human-Centered AI Lab, addressed the business implications of AI, emphasising that the future AI ecosystem would include: 

  • A small number of companies developing major general-purpose models. 
  • Many companies developing specialised mid-sized models. 
  • Numerous personalised individual models. 

Professor Koralus urged decentralisation to ensure personal alignment of AI systems, emphasising user control over personalised models. However, concerns were raised regarding privacy implications, the effectiveness of decentralisation, and the disruptive potential of numerous smaller AI start-ups. 

One participant emphasised that whatever form an AI system takes, it should be ‘your system, under your control, aligned to you.’ 

The legal and policy implications of AI  

Professor Alondra Nelson, former Acting Director of the White House Office of Science and Technology Policy, architect of the AI Bill of Rights and the Harold F. Linder Chair at the School of Social Science at the Institute for Advanced Study, discussed her experience drafting the US AI Bill of Rights, highlighting its integration into national policy and its influence on state laws. Professor Nelson stressed the necessity for extensive public consultation to rebuild trust in AI systems, advocating inclusive, democratic deliberations at the community level to inform AI policy. 

‘We need far-reaching, ongoing public consultation on AI policy, working in language which is accessible to all those we aim to serve and ensuring that the communities who are poised to be most vulnerable to AI implementation are included. We need democratic deliberation at the community level.’ 

Action points  

The event concluded with attendees identifying several action points to take forward following the roundtable: 

Accelerated actions: 

  • A workshop event on AI alignment and self-perception. 
  • A white paper on AI alignment, led by Sir Nigel Shadbolt and Dr Carissa Veliz. 
  • Greater civil society inclusion in AI alignment, with support from institutions, organisations and government.  

Long term actions: 

  • The promotion of decentralised, personalised AI systems that align with individual user values. 
  • The prioritisation of accessible language and relatable frameworks in AI ethics and policy discussions to ensure public engagement and relevance. 
  • The development of a UK-specific AI Bill of Rights through democratic, deliberative processes such as citizens’ assemblies. 
  • The establishment of international legal and political frameworks for AI alignment. 
  • The implementation of multi-layered ethical frameworks to safeguard against AI misalignment. 

The roundtable closed with a poem by Dr Joy Buolamwini, author of the bestseller Unmasking AI, founder of the Algorithmic Justice League, and fellow of the Accelerator Fellowship Programme: ‘Prompted to competition where be the guardrails now? […] [U]nstable desire remains undefeated. The fate of AI still uncompleted, responding with fear: responsible AI beware.’  

The AI alignment project will be driven forward by the Accelerator Fellowship Programme of the Institute for Ethics in AI under the leadership of Dr Caroline Green. 

If you are interested in getting involved, please contact us at aiethicscomms@philosophy.ox.ac.uk