Responsible AI in SHAPE research
March 4, 2026•725 words
This is the text of my 5 minute opening pitch in a debate about whether responsible AI use in SHAPE (Social Sciences, Humanities, and the Arts for People and the Economy) research is possible. The final vote was 17:16 against AI in research.
Unfortunately, I don't disagree with Oonagh. However, I do want to talk to those in the audience who were thinking: But what about my research? I don't do anything that will achieve the goals Oonagh has talked about.
You may have spent a research day last week in ways similar to mine: sitting with a postgraduate researcher looking at 17th century texts, trying to unpick the sources for some intriguing remarks by Berkeley - yes, him - on Confucius.
What does that have to do with AI? How can my expertise in e.g. early modern philosophy contribute to this societal challenge? It can't, but our personal choices can.
I want to talk about:
SHAPE research where a specific kind of AI is used to replace activities otherwise carried out by humans exercising what we all now call 'cognitive skills', but which would be better called 'academic expertise and judgement'.
The single most important difference between AI systems is whether they predict humans or not. Some predict the behaviour of human populations (e.g. traffic management), some predict the behaviour of individual humans (credit scoring), and others predict how a human (which human?) would respond to a prompt, question, or request.
In particular, this is achieved in current commercial technologies by models with billions of parameters, trained on massive datasets and requiring hyperscale data centres and fast compute for inference.
10 General Reasons not to use these consumer AIs
1. It replaces human value creation with machines - labour with capital - reducing knowledge creation to output creation.
2. The efficiency in both time and money is an illusion.
3. The energy demands are destroying the natural environment.
4. They will also undermine the physical infrastructure of our society (the national grid, water systems).
5. The technology relies on labour exploitation - to mine rare earths and to train models.
6. It thereby reinvigorates the Global North's colonial value extraction of the past four centuries.
7. It concentrates even more wealth and power in a few companies of vast scale, able to defy nation states.
8. Those companies ensure that these tools are aligned with their values (and yes, they are rich in systematic value-alignment).
9. Those values include unregulated free-market capitalism, militarisation and massive systemic income inequality.
10. All of these have terrible health and well-being effects for the 99%.
5 Specific Reasons not to use it in SHAPE Research
1. Counter-productivity ... If email is so much more efficient than handwritten notes in pigeonholes, why does it take up so much time and cost so much money? Do we spend less time commuting because of cars?
2. Homogenisation ... AI tools can only produce the most likely 'next token', and while that may seem novel or impressive the first few times, at scale it just creates sameness. Those who don't use it will stand out.
3. Authorship ... More than just the formal stuff in our Research Integrity policies. SHAPE research is vocational. Our words are the outward form of our intellectual life. That takes more than accepting a suggestion.
4. Dependency ... The more our ability merely to meet expectations depends on a technology provided by companies that do not have academic interests at heart (or perhaps even understand them, their founders mainly being university dropouts), the more we lose our autonomy.
5. The next generation ... Where will they come from if we don't have a pipeline?
What should we do?
SHAPE researchers should be the people who shape (!) the future, not passive recipients of a vision dreamt up in Silicon Valley.
Let me be clear. We shouldn't reject the technology of ML/DL, language models, NLU/NLP and the like. We should reject a specific project: the 'bigger is better' theory, exemplified by ChatGPT, Gemini, Copilot and the rest, which aims to build entirely general-purpose tools capable of doing anything a person (or some idealised aggregate of all people) can.
SHAPE researchers, and the British Academy in particular, need to stand behind the Royal Academy of Engineering's call for 'frugal computing' - or, as we might put it:
Cautious, critical, morally principled development and deployment of AI