LLMs, better writing, and cognitive diversity
November 25, 2023 • 546 words
I have previously commented that it is easy to detect a student using an LLM to generate text in an essay because LLMs write better English: grammatical, with sophisticated syntax and clear structure. Elon Musk said something similar in his interview with Rishi Sunak.
We need to challenge this idea, not because it is false but because it reveals that we accept a norm of 'better writing' which perhaps we should reject. One way of seeing this is to note that the literary style of LLMs is that of neurotypical, socially elite humans.
Some people have commented that this creates a 'levelling up' opportunity for the neurodiverse and socially disadvantaged. If by that they mean that people could use LLMs to 'improve' their writing, then perhaps they have missed a genuine levelling up opportunity.
A historical comparison
For most of the 20th century, broadcast media - radio and television - required presenters to speak in a certain style: with a specific accent and vocabulary, and with grammar approximating that of written English. This was thought to lend authority and seriousness because it was the 'natural' speaking style of people who had received elite educations and occupied positions of power in society. As such, the style of speech was a proxy marker for a level of education and epistemic authority.
However, reforms to the education system, in particular the massive expansion of Higher Education, resulted in a generation of talented broadcast media presenters who did not speak like that. Despite initial resistance from the forces of reaction, it soon became normal to hear a variety of speaking styles. Here we saw a genuine levelling up where ability trumped background.
But suppose that current generative AIs had existed in the 1990s: might someone have suggested that we could 'level up' by using them to 'improve' the speech of talented presenters from the 'wrong' background? It doesn't take much thought to realise what a socially and ethically bad use of AI that would have been.
Better writing
These sophisticated LLMs have brought us to a tipping point in how we think about knowledge and understanding, and the issue of 'improving writing' is a lens that focuses our thinking. As a society, we could go one of three ways:
1. Encourage people to use LLMs to conform their writing to pre-existing norms of good writing.
2. Get LLMs to produce more diverse forms of writing.
3. Accept that grammaticality, syntax, articulateness, and structure are only proxy markers for genuine knowledge and understanding.
With speech we did the equivalent of 2), and that resulted in 3). I hope we can agree that 3) is a desirable end state. It seems unlikely, however, that 2) is a good theory of change for reaching it. For a start, the private companies that own these LLMs have no interest in positive social change and are happy to reinforce existing power structures in society (structures from which their owners already benefit). But there is also the problem that - unlike broadcast media - LLMs produce content in response to individual user requests, and part of that request can be a literary style. So until users are tolerant of diverse writing styles, they will be put off by output in a style other than the one they requested or other than an expected default.