Calls for AI regulation vs being an adult

One of the questions I was asked after a talk to a multi-disciplinary audience yesterday was:

There was a movement where a party petitioned to stop or slow down the development of AI temporarily. Can you give us your thoughts on their position?

This refers to the Future of Life Institute open letter in March 2023 calling for a six-month pause in 'training of AI systems more powerful than GPT-4'.

For now I will set aside the question of whether the open letter's focus on existential threats was warranted, and look at it instead through the lens of regulo-solutionism.

The letter was a call to 'all AI labs' to pause voluntarily. The 33,000 signatories included many people who own or run such labs, and who were thus in positions of power to make such a decision, including, famously, Elon Musk. And yet none did.

Reflecting on the letter six months later, Max Tegmark treated it as a classic collective action problem, with regulation as the only solution:

Tegmark is lobbying for an FDA-style agency that would enforce rules around AI, and for the government to force tech companies to pause AI development. “It’s also clear that [AI leaders like Sam Altman, Demis Hassabis, and Dario Amodei] are very concerned themselves. But they all know they can’t pause alone,” Tegmark says. Pausing alone would be “a disaster for their company, right?” he adds. “They just get outcompeted, and then that CEO will be replaced with someone who doesn’t want to pause. The only way the pause comes about is if the governments of the world step in and put in place safety standards that force everyone to pause.”

Obviously part of the problem here is that most AI labs sit inside private companies funded by investors who are keen to get as big a slice as possible of the alleged $16trn addition to GDP coming from AI, and this creates a series of constraints on those companies. A CEO signing a letter costs the investors nothing, but acting on it would not be tolerated.

However, a much bigger part of the problem is an extraordinary ethical immaturity in these people. Signing such a letter and not acting on it, but then calling for regulation to force one to act on it (as happened later at the November 2023 AI Summit) is not much different from saying: I am about to do something very bad, please stop me!

We can agree that unilateral action to stop training more powerful foundation models would have little or no direct effect. But that is not a sufficient reason not to do it. Firstly, it would mean that you did not personally contribute to the bad outcome, the possible existential threat. If you really think other people need to be forced to stop doing something, carrying on doing it yourself bespeaks a lack of moral agency. You may think that there is a need to have a law prohibiting murder, but surely the law is not the only reason not to kill people? Would the CEO-level signatories of this letter happily commit murder if there was no law against it? Of course not (I am willing to believe), but why they can't apply that same ethical thinking to their jobs is puzzling.1

Secondly, as well as being morally right, unilateral action can have indirect effects. The highly public resignation of Geoffrey Hinton from Google was such an action, though its effects were limited by the arrogant way he framed it: he had to resign so that he could speak freely and warn others, rather than because he couldn't in good conscience carry on doing something he saw as extremely dangerous. To see the contrast clearly, compare Hinton's departure from Google with that of Timnit Gebru.

If some of the high-profile signatories of that open letter had shown that they were prepared to take a personal risk - from losing their job to losing large amounts of money - and act on what they signed, then that might have encouraged others to do the right thing. Perhaps not immediately, but over months there could have been change in the industry if the leaders had refused to do what the investors wanted. It would have been an interesting time to find out whether there are enough immoral people out there to run the AI boom.2

In fact, unilateral action wasn't needed. What was needed was co-operation. Throughout human history, groups of people have come together to agree solutions to collective action problems without any legal system to enforce them. Such agreements have worked and served us well. Yes, people do cheat, but regulations and justice systems don't stop that either.

What puzzles me about the open letter is why the Future of Life Institute (or anyone else) didn't get all those powerful signatories in a room and say:

You have just signed this letter. It is a formal commitment to the principle of a pause if all the others also pause. Let's agree to do that now like the adults we are.

Actually, I am not that puzzled. The answer is clear: 40 years of neoliberal hegemony has made it impossible not to think of the agents who might make such an agreement, be they individuals or companies, as essentially in competition with each other. But that is a myth: humans are and have always been more fundamentally co-operative than competitive.

Our true nature has been suppressed for profit, turning the most powerful people on earth into moral adolescents calling out for someone to stop them doing the wrong thing again.


  1. One reason is likely that they are attracted by certain forms of utilitarianism. 

  2. Perhaps we should add immoral people to the limited resources on earth that might provide hard constraints on the projected development of AI, after energy and water? 

