Artificial what? Intelligence as reason-giving vs problem-solving
November 19, 2024 · 1,142 words
Philosophy isn't big on consensus, but there is widespread agreement, implicit rather than articulated, on one claim:
Problem-solving ability is not the same as intelligence
The standard definition of intelligence in AI and cognitive psychology derives from Allen Newell and Herbert Simon, and was popularised by Steven Pinker in his influential 1997 book How the Mind Works: ‘intelligence is the ability to attain goals in the face of obstacles’ (62).
This is simply false. Intelligence does (sometimes) help a creature attain goals in the face of obstacles, and we often assess intelligence by looking at how creatures respond to obstacles placed in the path to their goals (though as a way of assessing human intelligence this seems to have a strong bias towards certain social and educational backgrounds, including STEM disciplines). But the definition doesn’t actually tell us anything about what intelligence is, just what it sometimes does.
So at best it is an operationalisation of the concept of intelligence for certain experimental purposes, though, as I will explain, it may not even be a very good one of those.
The philosophical conception of intelligence is significantly different:
Intelligence is sensitivity to reasons.
A creature is intelligent on this conception insofar as it makes decisions about what to do and what to believe on the basis of reasons which it recognises as justifying those decisions.
When an intelligent creature does something, we can always ask ‘Why?’ and expect an answer which gives their reasons, i.e. what they took to justify their decision, why they thought that was the right decision to make in the context: reasons which refer to good and bad features of the action and its consequences.
If you are able to recognise which reasons for doing something are good reasons - i.e. justify the decision - and which are not good reasons, then you may well be able to solve quite a lot of problems, to attain your goals in the face of obstacles.
But for that you would also need something else: the ability to discover the good reasons. You can only recognise that something is a good reason to do such-and-such if you are presented with it as a possible reason. This is why some humans are better at solving problems than others, and why even those who are good at solving problems in one area may be bad at solving them in another. Problem-solving by humans requires intelligence plus domain-specific knowledge and imagination.
So intelligence is not sufficient for problem-solving, but is it necessary? Must a creature that is good at solving problems (in some domain) be intelligent?
My view is that what is currently being presented as ‘Artificial Intelligence’ provides counterexamples to the necessity of intelligence for problem-solving. Lots of current AIs are very good at solving problems, from playing games to folding proteins to summarising texts, but none have sensitivity to reasons.
This matters when we look at decisions where the good answer, the one which displays intelligence, cannot be conceived in terms of attaining goals or solving problems.
Consider AlphaGo. It is better than any human at attaining the goal of winning at Go in the face of obstacles (at least, obstacles in the form of moves by the other player). But moves in a game can be evaluated in other ways than just whether they increase the chance of winning. Suppose the players are ill-matched. Then the better player might make a generous move: one that opens up a good move for the weaker player.
We cannot evaluate whether a given move was generous or not without considering the reasons for making it. If the player had simply not noticed the good move it was opening up for their opponent, or had thought this was the move which maximised their chance of winning, then it was not generous. Only a creature which is sensitive to reasons can make a generous move. AlphaGo cannot because, despite being very good at winning at Go, it isn’t intelligent: it isn’t sensitive to reasons.
When we move away from games, there are plenty of applications of AI where goal-attainment is all that matters and sensitivity to reasons is a distraction. AlphaGo’s cousin AlphaFold is a good example. But the distinction between problem-solving and sensitivity to reasons is important in many use cases for AI, especially generative AI.
Let’s consider the infamous trolley problems for autonomous vehicles: stay on the road and kill five, or swerve onto the pavement and kill one? Given that the goal of an AV is to get to its destination without harming any pedestrians, it now faces an insurmountable obstacle to that goal. It has to fail.
So if we think of intelligence as attaining goals in the face of obstacles, we would need to have a secondary goal in order to have a problem for intelligence to solve here. And how we specify that goal ‘solves’ the trolley problem: e.g. ‘minimise harm’ gives one answer but ‘respect vehicle-free pedestrian spaces’ gives another.
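To make that concrete, here is a deliberately toy sketch in Python. The scenario, names and numbers are all invented for illustration and have nothing to do with any real AV planner; the point is only that the same optimising machinery picks opposite actions depending on which secondary goal we hand it.

```python
# Hypothetical illustration only: two candidate "secondary goals" for an AV
# facing the stylised dilemma above. All names and numbers are invented.

outcomes = {
    "stay_on_road":         {"pedestrians_harmed": 5, "enters_pedestrian_space": False},
    "swerve_onto_pavement": {"pedestrians_harmed": 1, "enters_pedestrian_space": True},
}

def minimise_harm(o):
    # Secondary goal 1: fewer people harmed is better.
    return -o["pedestrians_harmed"]

def respect_pedestrian_spaces(o):
    # Secondary goal 2: never trade away vehicle-free pedestrian spaces.
    return 0 if o["enters_pedestrian_space"] else 1

for goal in (minimise_harm, respect_pedestrian_spaces):
    best = max(outcomes, key=lambda action: goal(outcomes[action]))
    print(goal.__name__, "->", best)

# minimise_harm -> swerve_onto_pavement
# respect_pedestrian_spaces -> stay_on_road
```

Nothing in that loop deliberates. Everything contentious has already been settled by whoever wrote down the objective.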
The choice between those two secondary goals requires sensitivity to reasons: it requires evaluating different reasons for action and deciding to act on the one that is the best reason. Crucially, that isn’t something we can do by predicting what most people would do, because ‘other people would do it’ is never a good moral reason. It may be a good prudential reason - no one will blame you - but we know it has been used to justify terrible moral outrages throughout human history. Since generative AI just predicts human responses, it can at best avoid violating social norms.
Like being generous in a game of Go, making the right moral decision is not just a matter of outcomes but also reasons. Which reasons you take to be good reasons for your decision are crucial in determining whether it was a morally good decision.
What we value most in human intelligence is not that it makes us reasonably good at solving some problems some of the time - though that has been pretty important to us for millennia - but that it allows us to do things for the right reasons.
The conflation of intelligence with problem-solving is leading to dangerous developments in AI.
The artificial problem-solvers that we have built are just that and nothing more. We mustn’t take their ability to solve problems as evidence of intelligence.
And we must resist the temptation to employ them in domains where what we actually want is sensitivity to reasons rather than problem-solving. This is already happening with the use of AI in criminal justice, staff recruitment, research evaluation, teaching and psychotherapy.
By misunderstanding intelligence, we erroneously reconceive these tasks in terms of problem-solving and overlook the fact that what really matters to good decision-making in these cases is deciding for the right reasons.