We are better than this

Drawing on Morozov's famous 2013 definition of tech-solutionism, we can define an equally invidious and problematic way of thinking about the challenges AI presents to humanity:

Regulo-solutionism is taking complex social phenomena to be:
“neatly defined problems with definite, regulatory solutions or as transparent and self-evident processes that can be easily optimized—if only the right policies are in place!”

Sometimes I think regulo-solutionism is connected to nostalgia for the confident belief in divine providence or omnipotent parents. It certainly assumes a powerful law enforcer, more powerful than the organisations and social forces which are creating the problems being regulated against.

There is a very good example of this thinking, and of the blindness to more human, less coercive alternative solutions, in this op-ed by Daniel Kehlmann: Not yet panicking about AI? You should be – there’s little time left to rein it in.

The analysis is good:

The [humbling of humanity] by AI is subtler but just as profound [as that by Copernicus, Darwin and Freud]: we have demonstrated that for intellectual activities we considered deeply human, we are not needed; these can be automated on a statistical basis, the “idle talk”, to use Heidegger’s term, literally gets by without us and sounds reasonable, witty, superficial and sympathetic – and only then do we truly understand that it has always been like this: most of the time, we communicate on autopilot.
...
Of course, there is still what Daniel Kahneman calls “System 2”, genuine intellectual work, the creative production of original insights and truly original works that probably no AI can take from us even in the future. But in the realm of “System 1”, where we spend most of our days and where many not-so-first-class cultural products are created, it looks completely different.

Why does Kehlmann think this is worth panicking about? Because of the extent of our "System 1" lives:

The entertainment product of the future: virtual people who know us well, share life with us, encourage us when we’re sad, laugh at our jokes or tell us jokes we can laugh at, always on our side against the cruel world out there, always available, always dismissible, without their own desires, without needs, without the effort that comes with maintaining relationships with real people. And if you now shake your heads in disgust, ask yourselves if you are really honest with yourselves, whether you wouldn’t also like to have someone who takes annoying calls for you, books flights, writes emails that really sound like you wrote them, and besides that, discusses your life with you, why your aunt is so mean to you and what you could do to reconcile with your offended cousin. Even I, warning against this technology, would like to use it.

And once we are all using it, the owners of this technology will seek to monetise it in the same ways they have monetised social media and the internet more broadly: by selling our attention to the highest bidders.

We can agree that this is a dystopian future best avoided. And regulation will play a part in avoiding it, if enough sufficiently powerful governments get moving quickly enough. But regulation of technology with global reach is always too late, too local and too hard to enforce.

However, Kehlmann's own account suggests the importance of a different approach. To see the possibility, consider the history of a different attempt to regulate: discrimination on grounds of gender.

Countries like the UK have very powerful - on paper - anti-discrimination laws and yet decades later we still see misogyny and the structural injustices of patriarchal power, such as glass ceilings and gender pay gaps. Regulation was necessary but not sufficient to bring about the required changes. What we know is that it needs supplementation by education (of men primarily) and empowerment (of women primarily): we need to help people change to be better versions of themselves.

So back to genAI and Kehlmann's fear that it will completely fill the space occupied by human-human "System 1" communication and interaction. How do we stop this?

Regulation would have to do one of two things. It could control access to this technology, which is unlikely to be effective: we have struggled to police the manufacture of nuclear missiles, and those require installations visible from space. Or it could ban a certain business model, what Shoshana Zuboff calls the 'commodification of humans', whereby the technology is funded by manipulating users for the benefit of those who will pay for their attention. But it is too late to do that, given that the technology is already being funded by the venture capital model of investment - which presupposes that monetisation will be achieved by having large numbers of users and in ways that don't put those users off (e.g. by charging).

We must not stop trying to produce regulation which will mitigate some of the worst effects, but nor must we stop at legislation. This is the lesson from anti-discrimination: rules alone won't break up the patriarchy.

Let's go back to Kehlmann's list of System 1 human-human interactions which will be replaced by genAI:

  • people who know us well,
  • share life with us,
  • encourage us when we’re sad,
  • laugh at our jokes,
  • tell us jokes we can laugh at,
  • always on our side against the cruel world out there,
  • always available,
  • always dismissible,
  • without their own desires,
  • without needs,
  • without the effort that comes with maintaining relationships with real people

Notice how the list slowly moves from things we are sure it is good for us to value to things we are sure it is bad for us to value, via some about which we begin to think "only in some contexts". One could write a similarly shifting list starting with good things in a relationship and ending with incel-like misogyny; from a relationship of equals to using others as a mere means to one’s own ends.

Suppose that, instead of genAI, we were contemplating a scenario where actual human beings behaved in all the ways on Kehlmann's list - some version of the patriarchal fantasy of the Geisha or 1950s housewife. What we would say in such a situation is that a person who meets the conditions on the bottom half of the list is not really giving us what we want and need in the top half of the list. Their sharing, encouragement, laughter and support are all inauthentic. They are using us in much the same way that we are using them. They are not actually meeting our needs for human relations but exploiting our weaknesses for self-deception.

The same is true of genAI, which also exploits our weakness for "convenience" (e.g. it "takes annoying calls for you, books flights, writes emails ..."). And now we see a different way of casting the problem of an AI-fuelled dystopia and the possible solutions: find ways to make people stronger.

Sometimes it is nice to have your weaknesses pandered to, but when you know that is happening, the accompanying guilty feeling is significant: you are better than this.

As well as regulation, we need to educate and empower the future users of genAI to know they are better than that, not to fall for the trap but instead to find and value their own strengths. Bringing out the best in people is a slow, difficult process fraught with setbacks and frustrations. Ask any teacher. But it is worthwhile and is the only hope for the future of humanity. Ask any teacher.1


  1. To be hopeful in bad times is not just foolishly romantic. It is based on the fact that human history is a history not only of cruelty, but also compassion, sacrifice, courage, kindness. What we choose to emphasise in this complex history will determine our lives. If we see only the worst, it destroys our capacity to do something. If we remember those times and places – and there are so many – where people have behaved magnificently, this gives us the energy to act, and at least the possibility of sending this spinning top of a world in a different direction. And if we do act, in however small a way, we don’t have to wait for some grand utopian future. The future is an infinite succession of presents, and to live now as we think human beings should live, in defiance of all that is bad around us, is itself a marvellous victory. - Howard Zinn

