Insights
The Accelerator Fellowship Programme produces deep insights and innovative research that address urgent ethical challenges in AI and inform global policy and practice.
Through our work, we aim to make impactful contributions to AI regulation, industry practices, and public awareness while fostering long-term alliances essential for addressing AI’s risks and opportunities.
Topics of research
Bias in AI Systems
Bias in AI systems arises when algorithms produce unfair, discriminatory, or skewed outcomes due to biased data, flawed design, or systemic inequalities. AI bias can reinforce social prejudices, disproportionately affecting marginalised groups in areas such as hiring, policing, and healthcare. Addressing bias requires diverse and representative datasets, transparent model design, and continuous auditing to ensure fairness. Ethical AI development must prioritise accountability and inclusivity to prevent harm and build trust in AI technologies.
Dr Joy Buolamwini, founder of the Algorithmic Justice League and a computer scientist, author, artist, and AI researcher, will address this topic during her time with the Accelerator Fellowship Programme.
Practical Application of AI Ethics Frameworks
The practical application of AI ethics frameworks involves translating ethical principles—such as fairness, transparency, and accountability—into real-world AI development, deployment, and regulation. This includes creating guidelines for responsible AI use, integrating ethics into product design, and ensuring compliance with emerging policies. A key challenge is balancing innovation with ethical safeguards, ensuring AI systems align with societal values while remaining effective and scalable.
Prof Alondra Nelson, former Acting Director of the White House Office of Science and Technology Policy and the Harold F. Linder Chair at the Institute for Advanced Study, will address this topic during her time with the Accelerator Fellowship Programme.
Digital Human Rights and Implementation of the AI Bill of Rights
Digital human rights focus on protecting fundamental freedoms in the age of AI, including privacy, non-discrimination, and equitable access to technology. An International AI Bill of Rights could articulate, in human rights terms, principles such as safe and effective AI, algorithmic transparency, and human control over automated systems. Implementing these rights requires legal, technical, and policy measures that facilitate broad access to rights-supporting AI technology and prevent AI-driven harm, particularly for marginalised communities.
Prof Yuval Shany, international law expert and former Chair of the UN Human Rights Committee, will address this topic during his time with the Accelerator Fellowship Programme.
AI, Bias, and Noise
As intuitive statisticians, human beings suffer from identifiable biases, cognitive and otherwise. Human beings can also be "noisy" in the sense that their judgments show unwanted variability. As a result, public institutions, including those staffed by administrative prosecutors and adjudicators, can be biased, noisy, or both. Both bias and noise produce errors. Algorithms eliminate noise, and that is important; to the extent that they do so, they prevent unequal treatment and reduce errors. In addition, algorithms do not use mental shortcuts; they rely on statistical predictors, which means that they can counteract or even eliminate cognitive biases. At the same time, the use of algorithms by administrative agencies raises many legitimate questions and doubts. Among other things, algorithms can encode or perpetuate discrimination, perhaps because their inputs are based on discrimination, or perhaps because what they are asked to predict is infected by discrimination. But if the goal is to eliminate discrimination, properly constructed algorithms nonetheless have a great deal of promise for administrative agencies.
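The bias-versus-noise distinction can be made concrete with a small simulation. The sketch below is purely illustrative, with invented numbers rather than real data: noisy human judges share a systematic bias and add idiosyncratic variability, while a deterministic algorithm may carry the same bias but, given the same inputs, always returns the same output, so its noise (variance) is zero and its total error is smaller.

```python
import random

random.seed(0)

TRUE_VALUE = 100.0  # the correct judgment in a hypothetical scoring task

def human_judgment():
    """A noisy human judge: shared systematic bias plus idiosyncratic variation."""
    bias = 10.0                    # error shared by all judges (invented figure)
    noise = random.gauss(0, 15.0)  # unwanted judge-to-judge variability
    return TRUE_VALUE + bias + noise

def algorithm_judgment():
    """A deterministic algorithm: possibly biased, but never noisy."""
    return TRUE_VALUE + 10.0

human = [human_judgment() for _ in range(10_000)]
algo = [algorithm_judgment() for _ in range(10_000)]

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def mse(xs):
    """Mean squared error, which decomposes into bias^2 + variance (noise)."""
    return sum((x - TRUE_VALUE) ** 2 for x in xs) / len(xs)

print(f"human:     bias={mean(human) - TRUE_VALUE:6.2f}  "
      f"noise(var)={variance(human):8.2f}  MSE={mse(human):8.2f}")
print(f"algorithm: bias={mean(algo) - TRUE_VALUE:6.2f}  "
      f"noise(var)={variance(algo):8.2f}  MSE={mse(algo):8.2f}")
```

Because total error is bias squared plus variance, eliminating noise reduces error even when the algorithm inherits the same bias as the humans it replaces, which is the core of the argument above.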
Prof Cass Sunstein, founder and director of the Behavioural Economics and Public Policy Program at Harvard Law School, will address this topic during his time with the Accelerator Fellowship Programme.