(Super)intelligence and power

There is a lot of discussion at the moment about superintelligent AI as an existential threat to humanity, and calls for regulation.

Now I don't want to suggest that there shouldn't be some form of precautionary regulation in this space, however hard it is to imagine effective global policing of a technology which is harder to detect than nuclear weapon manufacture. Nor do I want to dwell on why regulation may in fact be counter-productive (it doesn't change the motivations of bad actors, merely their risk profile). What I want to reflect on are the assumptions underlying the very idea that superintelligence is a threat. In particular, those trying to explain the threat tend to move seamlessly between talk of intelligence and talk of power.

Even the very thoughtful - and cynical about #AIHype - Guardian journalist Alex Hern recently said:1

If you have something that is extremely powerful, that is smarter than any one person, that can destroy humanity.

Now for such claims to make sense, 'power' has to mean 'power over people', the sort of power that governments, corporations and leaders of such powerful organisations have. The sort of power that Altman and Musk and even Hinton have. It cannot just mean 'computing power'. Any world leader who controls a nuclear arsenal has the sort of power which creates an existential threat to humanity. But equally, corporations which have the power to externalise the environmental costs of their business, to manipulate discussions about climate change, and to coerce politicians through bribes (aka donations) and threats (of economic harms), have the power to be existential threats to humanity.

So for a computer - an AI - to be an existential threat to humanity, it needs more than computing power, it needs political power in that broad sense. The power to influence the structures, including economies, which ultimately determine what happens to less powerful people.

Why would we think a 'superintelligent' AI would have political power?

The first thing to ask is who is telling us that superintelligence is an existential threat. They are all high achievers, undoubtedly possessed of significant intellectual abilities in their fields of expertise, and in positions of significant power. I suspect they would be inclined to describe those abilities as the possession of 'very high intelligence', without realising that in doing so they presuppose the coherence of a non-domain-specific property of intelligence or 'IQ', and are consequently prone to Dunning-Kruger effects. Furthermore, they show little intellectual humility and appear to attribute their personal success, and their consequent positions of power, to this 'very high intelligence', discounting the significance of wealth, privilege, patronage and luck. In other words, they explain their own lives and those of their friends under the comfortable assumption that it was their intelligence which gave them their power.

However, if we look at the distribution of power in society, the correlation with intelligence is weak. Sometimes very smart people do end up in positions of power, though usually because of some contingent feature of their situation rather than their intelligence itself. More often, people achieve power not through intelligence but through opportunism, risk-taking, ruthlessness and ambition. Those are not features of human intelligence, and we have no reason to think they will be features of superintelligence.

How would superintelligence respond to its own lack of power?

One often-mentioned scenario for existential threat is that a superintelligent AI will realise that the humans who switched it on could also switch it off, so it starts thinking that self-preservation requires controlling humans somehow. It regards the solution to its own weakness as the pursuit of more and more power. Now that feels like a pattern of thought more closely associated with mass gun ownership than with intelligence. Plenty of the smartest humans prefer to trust that others won't harm them - even if they are sometimes wrong - rather than to seek power over anyone who poses a threat. And plenty of stupid people respond to their own weakness by becoming bullies.

To go back to where I started, I have no problem with precautionary regulation. I doubt it will be effective, and suspect it may even undermine itself by creating perverse incentives for bad actors to behave worse, but it may have good effects in terms of securing a widespread global consensus on caution, as it has with nuclear weapons.

It is not superintelligence itself which is the existential risk. It is reasonably smart AI created by people who live in denial about the true sources of power in society, and thus end up creating machines which embody the bad personality traits of greed, ambition, and ruthlessness. What we should fear is not artificial intelligence but artificial Ayn Randism.2


  1. Guardian Today in Focus podcast (7 June 2023) 

  2. Please don't call it 'Objectivism' - that word has a precise technical sense and it is nothing to do with the fantasy of human nature propounded by Ayn Rand. The use of that label is classic Orwellian 'doublethink'. If they can be made coherent enough to merit a label, her views are likely closer to Subjectivism. 

