Minds and theories of mind
May 22, 2024
Back when I was a nipper of a grad student, one of my supervisors and another grad student edited a journal special issue of 'state of the art' papers called Folk Psychology: The Theory of Mind Debate. While this wasn't directly relevant to my own research (on self-knowledge), it was close enough for me to follow the debate.
So when this headline popped up on Monday, I was fascinated:
AI Outperforms Humans in Theory of Mind Tests: Large language models convincingly mimic the understanding of mental states
The debate that Davies and Stone were corralling was about how humans manage to predict each other's behaviour - that ability is referred to as 'folk psychology', on a par with 'folk physics', which allows us to predict the behaviour of inanimate objects.1 There were, and still are, two rival accounts: either we have some sort of innate theory or model which generates predictions (the theory-theory), or we have an ability to imagine ourselves into someone else's situation and observe what we would do (the simulation-theory).
Now my first reaction to the research reported by IEEE and published in Nature Human Behaviour was: so now we know there is a third option! My second reaction was to reflect that we always knew there was a third option, but discounted it as an answer to how humans did folk psychology: the Giant Lookup Table.
This was rejected for a combination of two reasons. One was a version of the classic Chomskyan poverty-of-the-stimulus argument: humans just didn't have access to enough data to build such a lookup table. The other was an argument presented by Christopher Peacocke in 1983, in the appendix to his book Sense and Content, which persuaded us that a being whose behaviour was driven by a Giant Lookup Table would not in fact share our cognitive capacities, even if we couldn't tell the difference.
Now LLMs are so impressive in what they can achieve precisely because they are trained on enough data to address the poverty-of-the-stimulus argument.2 They also go beyond mere Lookup Tables by being able to extrapolate (= guess) to new cases with a fair degree of alignment with human judgement about those cases,3 which brings us to the question of how they do that. Both theory-theory and simulation-theory explain the ability to extrapolate to new cases. We can be fairly confident that current LLMs do neither of these (and I am prepared to go further and argue that merely scaling up the current approach will never achieve either of them), so there does seem to be a practicable third alternative in the 'theory of mind debate', though one we can be pretty sure humans don't use.
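To make that contrast concrete, here is a deliberately toy sketch of my own (it has nothing to do with how the models in the paper actually work, and the situations, features, and actions are invented for illustration): a literal lookup table can answer only for situations it has already seen, while even a crude frequency-based predictor assigns probabilities to actions in situations it has never encountered as a whole.

```python
from collections import Counter, defaultdict

# A literal Giant Lookup Table: it can only answer for situations it has
# seen verbatim, and has nothing to say about any new one.
lookup_table = {
    ("rain", "no umbrella"): "stays indoors",
    ("rain", "umbrella"): "walks to work",
}

def lookup_predict(situation):
    return lookup_table.get(situation)  # None for any unseen situation

# A crude frequency-based predictor: count how often each action follows
# each individual feature in the "reading", then combine those counts to
# assign probabilities to actions even in situations never seen as a whole.
training_data = [
    (("rain", "no umbrella"), "stays indoors"),
    (("rain", "umbrella"), "walks to work"),
    (("sun", "umbrella"), "walks to work"),
]

feature_action_counts = defaultdict(Counter)
for features, action in training_data:
    for feature in features:
        feature_action_counts[feature][action] += 1

def frequency_predict(situation):
    scores = Counter()
    for feature in situation:
        scores.update(feature_action_counts[feature])
    total = sum(scores.values())
    return {action: count / total for action, count in scores.items()}

print(lookup_predict(("sun", "no umbrella")))     # None: outside the table
print(frequency_predict(("sun", "no umbrella")))  # {'walks to work': 0.5, 'stays indoors': 0.5}
```

Nothing in the frequency-based predictor involves a theory of beliefs and desires, or imagining oneself into the situation; it extrapolates from statistics alone, which is the sense in which a third alternative is at least practicable.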
Which takes us back to 1983 and Peacocke's argument: cognitive capacities consist not just in being able to behave in certain ways, but in having that behaviour caused in the right way. (Aside for philosophers: if you don't like Peacocke's version of this causal argument, Davidson provides a less reductive alternative.) One way to think about this is to anthropomorphise what LLMs are doing by way of thought experiment:4
Imagine a person with no theory of mind. Let them read masses of fiction and history, then ask them to predict human behaviour in certain contexts. Rule out that they do this by constructing an explanatory theory of beliefs and desires (theory-theory), or that they are able to imagine themselves into the context and observe what they would do 'offline' (simulation-theory). Instead, suppose they see humans as indeterministic systems and, on the basis of their reading (training), assign probabilities to various courses of action.5 There is no reason to doubt that this approach could give them accuracy similar to yours and mine.
It seems to me that one conclusion of this thought experiment is obvious: the person in it has not bootstrapped themselves into a theory of mind. They have no ability to understand other human beings, merely to predict them. And while the theory of mind debate was often dramatised in terms of our ability to predict human behaviour - and tests for theory of mind were operationalised in those terms - it was really about understanding. Yet again, LLMs show us that many of our tests for cognitive capacities are merely tracking proxies.
1. Of course, folk psychology isn't perfect, just like folk physics. The point is that both work remarkably well in most situations we actually encounter. ↩
2. Well, start to address it. The International Scientific Report on the Safety of Advanced AI: Interim Report (17 May 2024) notes "by 2030 shortages of accessible online high-quality text data could slow the rate at which models can be productively scaled" (p. 24). Of course, we may run out of energy and water for the data centres well before then. ↩
3. I noted above that folk psychology is not a perfect predictor. What would be really impressive would be if LLMs correctly predicted what humans will do, not merely what humans will predict that other humans will do. ↩
4. All thought experiments simplify, and one clear simplification here is that I assume LLMs achieve their performance with pre-training alone. Of course they don't, which makes what they do achieve even less interesting. ↩
5. This would be on the basis of frequencies, and must be how any ML-based AI works. There is an interesting debate over whether this approach to understanding probability is correct, but it almost certainly works where the system being predicted is not in fact stochastic, like a human being. ↩