Second-order agency: on deliberately delegating authority to AI

Professor Cass Sunstein Seminar Event

Hosted by the Accelerator Fellowship Programme at the University of Oxford 

On 10 June, Professor Cass R. Sunstein discussed the topic of 'agency' in his public seminar. A recording of the seminar is available to watch on the Institute for Ethics in AI's website.

Many people prize agency; they want to make their own choices. Many people also prize second-order agency, by which they decide whether and when to exercise first-order agency. First-order agency can be an extraordinary benefit or an immense burden. When it is an extraordinary benefit, people might reject any kind of interference, or might welcome a nudge, or might seek some kind of boost designed to increase their capacities. When first-order agency is an immense burden, people might also welcome a nudge, or might make some kind of delegation (say, to AI, an employer, a doctor, an algorithm, or a regulator). These points suggest that the line between active choosing and paternalism can be illusory. When private or public institutions override people's desire not to exercise first-order agency, and thus reject people's exercise of second-order agency, they are behaving paternalistically, through a form of choice-requiring paternalism. Choice-requiring paternalism may compromise second-order agency. It might not be very nice to do that.