Autonomy, Safety, and Regulo-solutionism

This is an abstract for a talk in July at the joint conference of the European Association for the Study of Science and Technology (EASST) and the Society for Social Studies of Science (4S) in Amsterdam. Any suggestions or comments that will help develop the full paper are much appreciated!

When Stanislav Petrov disobeyed orders on 26 September 1983, he probably saved the human race from mass extinction. For any orders, rules or regulations, there may be such a moment when it is better to disobey than to obey. No rule can foresee all such situations. But there are also lesser cases of valuable disobedience. The teenager who breaks the school uniform code is engaged in valuable self-expression. When James C. Scott jaywalks as 'anarchist calisthenics', he is resisting the de-humanising effect of a society which micro-manages its citizens. When Greta Thunberg disobeys a direct police order to stop a protest, she is claiming that the climate emergency justifies civil disobedience. Rule-breaking can be a good thing in many different ways.

How does this change our thinking about creating safe autonomous systems, from driverless cars to therapy chatbots? Current thinking about AI Safety is in thrall to a mode of thought we might call "regulo-solutionism": the solution to a social or technosocial problem will always be rules, regulations or laws. The vision of both politicians and tech companies is that we should regulate AI and then require those rules to be built into autonomous systems as 'guardrails'.

This will fail because regulations are always too late, too local and too hard to enforce. But it is also misguided, for it misses an opportunity to build autonomous systems which reflect the value of human autonomy. To build a safe AI future, we must first understand what we value about human autonomy.

