Civic AI Conference

Image: speaker and audience at the Civic AI Summit

On 25 March, the Accelerator Fellowship Programme of the Institute for Ethics in AI, University of Oxford, hosted the Civic AI Summit at Rhodes House, Oxford. The summit was hosted by Dr Caroline Emmer De Albuquerque Green, Director of Research and Leader of the Accelerator Programme at the Institute for Ethics in AI, and Ambassador Audrey Tang, Senior Accelerator Fellow at the IEAI.

The summit, which aimed to introduce, demonstrate and co-develop the ‘Civic AI approach’ and 6-packs of care framework, brought together a diverse group of people with different levels of engagement with AI: academics in disciplines such as computer science, philosophy and law; individuals in the tech and private sector; civil society leaders and members of the public. The hosts of the summit stressed that they wanted this audience to be as broad as possible, since Civic AI is all about bringing people from different backgrounds together and finding common ground. Throughout the day, audience members and speakers were encouraged to be critical, to discuss the overlooked, and shape Civic AI together.

Three key themes were explored during the summit:

  • The nature of Civic AI
  • Civic AI in practice, and
  • Challenges for Civic AI

The Theoretical Foundations of Civic AI

Civic AI is a framework which advocates for uses of AI that serve people and strengthen communities, rather than prioritising wealth extraction. The ultimate goal is to surface common ground that facilitates joint decision making on the development and use of AI systems—to allow diverse groups to act together and develop AI that works for everyone.

Opening the first half of the summit, Dr Caroline Green examined three key premises of Civic AI: (1) we thrive in communities which care, (2) despite our differences there is common ground to be found, and (3) we need contextual solutions that work for and with people, rather than blanket, one-size-fits-all approaches.

The keynote presentation was given by Joan C. Tronto, Professor Emerita of Political Science, University of Minnesota, whose ethics of care approach is foundational to the 6-packs of care framework. The care ethics approach puts human relationships and our need for care at the forefront, where care, roughly, is an activity that we do to maintain our world and communities so that we can live in them as well as possible. Tronto examined how a theory of care could help us advance Civic AI. As she put it: “not only is caring better when it’s democratised, but democracy is better when it is caring”.

Our second keynote presentation was given by Audrey Tang. Building on Tronto’s theory of care, Tang described care as a political practice rooted in communities that choose to tend to what they love. Tang stressed that AI systems cannot serve a democratic system without care, and that, to accommodate the plurality of individual needs, we need to scale down and adopt localised systems.

What Does Civic AI Mean in Practice?

Throughout the day, several practical use cases of Civic AI were explored.

  • Fostering Democratic Deliberation

Tang explored the use of Civic AI in facilitating democratic deliberation around synthetic media in Taiwan, where AI tools called Kami were used to moderate discussion by sorting arguments and making sure no voice was drowned out. Tang explained how, rather than polarising us or deciding for us, AI can be used to foster collaboration, ask questions and start conversations: aiding the messy, luminous process of listening to those with whom we disagree.

Vitalik Buterin, a computer programmer best known for co-founding Ethereum, expanded upon this aspect of Civic AI. Buterin described how chatbots can act as mediators of viewpoints, giving communities which have valuable ideas to share with one another but different styles of speaking the tools to understand one another. Such systems provide a stark contrast to current large language models, which are homogenising and risk flattening the heterogeneity of human insight.

  • Mediating Human Connection

In the third keynote of the day, Rosalind Picard, an electrical engineer, computer scientist and Professor of Health Sciences at MIT, urged us to work towards technology that truly makes life better. One example, and a second practical application of Civic AI, is a smart wearable AI agent which alerts caregivers when the watch wearer is suspected to be having a seizure. Here, the AI agent is designed to work with people to bring about caring—mediating human connection rather than replacing it.

  • Restructuring AI Governance Processes

In a panel discussion, Karina Palyutina, a Standards Specialist at Nokia, and Zarinahh Agnew, a Research Director at the Collective Intelligence Project, explored how Civic AI approaches could change the way we govern AI systems. Palyutina examined how Civic AI could provide strategies to change the way that AI standards are made, and Agnew discussed Weval, a platform which allows users to create AI evaluations themselves, evaluating AI systems according to what matters to the individual. These tools help put Civic AI into practice because they work from the ground up, ensuring that AI systems are designed and monitored according to local culture and custom rather than being externally set.

These practical uses of Civic AI show that there is no technological barrier to implementing Civic AI. Technically speaking, Civic AI is very much possible.

Challenges Facing Civic AI

Despite this practical feasibility, discussions throughout the day raised important challenges facing the development of Civic AI.

  • Care washing

Several speakers urged caution in considering Civic AI as a form of care. Anya Daly, Senior Lecturer in Philosophy and Ethics at the University of Tasmania, discussed care washing—the idea that systems can appear to be caring without being motivated by it. As Tronto similarly urged during her keynote, we need to theoretically situate AI not as a tool but as an institution. Systems based on wealth extraction are at most apparently caring.

  • The Institutional Context

Another challenge facing the development of Civic AI is the institutional context in which existing AI systems are organised. A panel chaired by Gina Neff, Executive Director of the Minderoo Centre for Technology and Democracy at the University of Cambridge, and including Davide Crapis, AI Lead at the Ethereum Foundation; Iason Gabriel, an ethicist leading the AGI & Society team at Google DeepMind; and Jenni Tennison, an expert in AI and data, illuminated the need for mechanisms to reclaim power at the local level, for infrastructures which empower different people to express their preferences, and for pushing back on profit-focused institutional contexts.

  • Accessibility

Clenton Farquharson, a nationally recognised leader in adult social care, discussed the challenges of building systems that reflect a diverse humanity. To build Civic AI from the ground up, we need to include the voices of all of those affected by it and face the reality of pluralism. Politics must be at the core of this project.

One wonderful illustration of individuals working to address inaccessibility in AI was discussed by Tenzin Gayche, Chief AI Officer at Monlam, whose team has developed an LLM translation tool to digitise the Tibetan language and make information and knowledge about AI accessible to Tibetan speakers.

  • Human-Centric Ethics

As a final challenge, Geshe Lobsang, a Tibetan Buddhist scholar, programmer and the founder of Monlam, discussed how Buddhist philosophy can help us diagnose current limitations of AI. One challenge to keep in mind when developing Civic AI is to move away from human centrism. Caring AI should not only keep in mind the needs of people but also consider the world in relation to non-human animals and the environment.

Taking Stock

Throughout the day, speakers drew on analogies illustrating key components of the Civic AI approach. Picard gave the analogy of a house built on rock rather than sand to express the need for solid and carefully curated foundations for AI systems; Buterin analogised Civic AI to a bicycle we ride rather than a bus we sit on, to reflect the need for AI to empower humans rather than decide for us; and Tang likened Kami to a garden of gardens, rather than an independently monitored monoculture, to reflect the need for localised and plural systems. Such analogies reflect the diversity in the ways we think of and approach Civic AI, and, crucially, that we, the people, are the real superintelligence. Going forward, we should work to foster spaces in which we can speak the unspeakable, name issues, and discuss moving forward to solutions.

“I have a lot of questions about technology. But I felt powerless that I couldn't really influence this reality. So, most of the time, I just choose to be silent. But today, I got the courage from this event, from this circle that talking things out loud itself has the power, and it is valuable” – Audience Member