The problem of second-order bias
April 3, 2024
We are all now familiar with the problem of bias in various algorithms (now called AIs), and with the fact that these algorithms pick up, reflect, and sometimes amplify bias in their data - data which often reflect biases in society.
Might there not also be a similar problem of bias in how we - as a society - think about the uses of AI, one that reflects existing biases about different types of human knowledge and understanding?
There is a dominant narrative that the AI revolution is inevitable, that AI will soon pervade our lives as thoroughly as the internet has.[1] Some people are enthusiastically embracing this future and finding more and more uses for AI. Others - myself included - are more sceptical, seeing AI as producing only a poor simulacrum of what they value in human achievement, and doubting that more and more, faster and faster, is always better and better.
The societal narrative of the inevitability of AI and its potential for good uses creates a bias in favour of the first group, giving them ever more power and influence. To some extent this is a combination of a preference for positivity and the strength of the neo-liberal hegemony, with its commitment to infinite growth. But something else is also going on when you consider the disciplinary backgrounds and expertise that seem to track this divide: the humanities and social sciences are predominantly sceptical, while the STEM subjects - especially those with very practical outcomes, like biomedicine and product development - are predominantly excited by the potential. More results! Faster!
These different disciplines focus on different aspects of human intellectual achievement: the latter on solving problems and getting things done; the former on deep understanding and - let's call it by its name - wisdom.
Thus the discussion of the potential of AI - the search for good uses, driven by an overwhelming belief that this future is inevitable - amplifies and reinforces the existing social bias towards the practical, problem-solving, results-driven forms of knowledge and against the reflective, discursive, personal forms of understanding. This is the problem of second-order or meta-bias: the drive to be positive and to find uses for AI will further reduce the value placed on the humanities and social sciences.
Every time we describe people who play with AI tools (toys?), who experiment with them to see what they can do, as 'innovative' or 'early adopters', we stigmatise those who don't. Perhaps they don't for reasons worth stigmatising, but perhaps they are simply unimpressed. Perhaps they think not 'How cool is that?' but 'Given that I will need to use all my critical skills and knowledge to decide whether I can trust the output, why not cut out the middleman and do it myself?' And that attitude may reflect what they seek to achieve: they want understanding, not 'solutions'.[2] By failing to recognise this difference in how we talk about AI take-up, we fail to address the existing biases in our social attitudes to different types of knowledge.
[1] In 2021 the UN estimated that 2.9 billion people had never been online, had never used the internet. It is hard to believe that AI will inevitably pervade everyone's life - more likely only the lives of those who are already rich and privileged. It may be the epitome of a 'first world problem'.

[2] Another way of putting it, which I learned from Vincent Ginis: AI is worth using for problems whose solutions are hard to generate but easy to verify (e.g. protein folding). So in areas of human intellectual endeavour where answers are both hard to generate and hard to verify - like philosophy! - it is largely irrelevant.