AI and Creativity Workshop Write-up
Write-up by Konrad Ksiazek, a DPhil student at Balliol College.
The Workshop on AI and Creativity, co-hosted by Baroness Beeban Kidron OBE and Dr Caroline Green, Director of Research and lead of the Accelerator Fellowship Programme (AFP), took place on 18 February in Oxford. Together with thought leaders from the sector, the workshop explored the challenges that the rise of AI and digital technology poses to the future of the creative industries and creative endeavour.
The event began with a warm welcome from Dr Caroline Green. Baroness Beeban Kidron OBE then shared her introductory remarks, highlighting the urgent need for the creative sector to respond to the government's consultation on potential copyright law reform, and to promote greater understanding and awareness of the challenges the sector faces in the digital age.
The first session of the day was technical, opened by Professor Sir Nigel Shadbolt. He discussed the origins and nature of AI, its long pedigree, and its rapid rise in recent years, enabled by advances in computing power. He highlighted the growing need for high-quality training data in the ongoing development of Large Language Models (LLMs), explaining why tech companies might have incentives to take a lax approach to copyright and licensing. He also reflected on the challenges facing the humanities in the digital age and the need to continue to nurture and protect creativity. He was followed by Professor Philip Torr, who highlighted the increased risk of copyright violations associated with the training and deployment of agentic AI. He stressed the importance of transparency in AI development, discussing whether companies should be required to disclose the data on which their products are trained, and reflecting on the risks digital transformation poses to the creative sector, including its potentially transformative effects on the nature and availability of creative work. The morning session concluded with a presentation from Vocalise, facilitated by Shehani Fernando, which demonstrated the ease with which AI models can replicate a person's voice, and the risks this poses.
The second session of the day focused on gathering industry insights. Our first speaker was Laurent Gaveau, Founder of the Google Arts & Culture Lab. He reflected on his professional experience of building connections between the arts and tech sectors, and the potential for fruitful collaboration between them. He highlighted the need to protect cultural diversity in a world where AI models threaten to amplify the cultural dominance of the Western, English-speaking world, and elaborated on the need for transparency across the tech sector with regard to training data to ensure legal compliance. His contribution sparked a lively discussion spanning many subjects, which highlighted the importance of protecting the moral rights of artists, and not just their economic interests, including their ability to exercise agency over their works, and of recognising this as an important aspect of safeguarding individuals' personhood and their capacity for human flourishing. The session continued with remarks from Kit Green, a writer and performer, who shared insights from an ongoing project on pensioners' interactions with AI, highlighting art's potential to inform the public about the challenges of the digital age and to channel people's fears and concerns. It concluded with insights from Richard Noble, William Latham and Michael Newman, representing the Drawing Centre for Humans and Machines at Goldsmiths, University of London. They shared insights from their work, which develops new AI-based creative tools in collaboration with artists and investigates the human-machine interactions involved in the making of drawings.
The next speaker was Ed Newton-Rex, the Founder of Fairly Trained, an organisation which certifies LLM developers for fair training data use. He noted that people, computing and data are the three essential components required to create high-performing AI models, and that whereas tech companies happily invest their resources in attracting top talent and increasing their computing power, they currently often expect to get their training data for free. He argued that there is no need to develop AI by harvesting people's work without permission, payment or acknowledgement. He stressed that there are successful fairly trained models, which provide the proof of concept for ethical AI development. He also highlighted the possibility of revenue-sharing and stock-sharing as alternatives to licensing, and noted the importance of fairly compensating the authors of training data in view of LLMs' capacity to reduce the availability of freelance creative work.
The final session of the day was devoted to bringing legal and business insights into the conversation. First, Lucky Gunasekara, representing Miso.ai, highlighted the ubiquity of IP law violations by LLM developers, reflected on the ways technology can be used to monitor copyright infringements, and discussed the potential of intermediate models and cease-and-desist action in promoting greater legal compliance. Our next speaker was Susie Alegre, human rights lawyer and writer. She shed light on the difficulties of enforcing copyright law in Britain, given the cost of litigation and the difficulty of bringing collective action claims, and argued that this creates a serious rule of law problem. She stressed that addressing this issue matters not just for the jobs and livelihoods of artists, but also as a means of protecting our shared cultural heritage. Finally, Seb Cuthill from the News Media Association argued that major tech companies have not been paying their fair share to news outlets, highlighting the need for regulation and negotiations to ensure fair value transfer between the tech and creative industries, and noting positive case studies from Australia and Canada.
After the final session, Baroness Beeban Kidron took the floor to offer her closing thoughts. She stressed the impact of the workshop in fostering dialogue and connections, noting the themes that ran through the day: the need for greater transparency in AI development, the need to respect copyright law and resist its reform, and the fact that the advent of AI has placed the creative sector at a critical juncture. She highlighted the need to start arguing for the rights of arts and culture workers in the digital age as fundamental moral demands, not simply economic concerns. Finally, she warned against placing Britain's world-leading creative industries at risk in the name of growth and economic patriotism.