# Is there really an analogy between genAI and the spreadsheet?

March 15, 2024 • 530 words

The ever-interesting Tim Harford has written a blog post about learning from past game-changing technologies: What the birth of the spreadsheet teaches us about generative AI. Neither he nor I are specialists in the sociology of science and technology, so this feels a bit like two bald men arguing over a comb, but I do think he has missed some really important disanalogies, and by doing so, bought into #AIHype.

He concludes:

> When a tool is ubiquitous, and convenient, we kludge our way through without really understanding what the tool is doing or why. And that, as a parallel for generative AI, is alarmingly on the nose.

Well, yes, but there is a difference. Consider his examples of spreadsheet 'kludge':

> They are endlessly misused by people who are not accountants and are not using the careful error-checking protocols built into accountancy for centuries. Famous economists using Excel simply failed to select the right cells for analysis. An investment bank used the wrong formula in a risk calculation, accidentally doubling the level of allowable risk-taking. Biologists have been typing the names of genes, only to have Excel autocorrect those names into dates.

In all these cases, the errors come from not knowing the clear and explicit rules built into how spreadsheets work. That knowledge is there to be had, and if applied correctly, the spreadsheet will produce the intended result reliably, consistently and repeatably. **That** is why spreadsheets are so useful!

When you type '=SUM' into a cell in Excel, it will 'autocomplete' the formula for you. But it shows you which formula it has chosen, so you can spot when it isn't what you intended. If instead the autocomplete of '=SUM' produced not a formula but a bare number, with no way (apart from getting your calculator out!) of checking which cells had been summed, that function would lead to disaster. But that is exactly where we are now with genAI.

Perhaps the project of 'Explainable AI' (XAI) will solve this problem, but I very much doubt it, because that is not how genAI works. If you ask it to answer a maths problem, it cannot do it the way a spreadsheet or a human does, by **calculating** the answer. It has no mathematical functions. Rather, it works out the statistically most likely answer by looking at lots of solved maths problems. With enough training on enough (correct!) problems, it may get the answer right as often as a fallible human being with a pocket calculator, but **it is never getting it right for the right reason**. Spreadsheets, in contrast, are so useful precisely because when they get it right, they get it right for the right reason: they have correctly **calculated** the answer.
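The distinction can be caricatured in a few lines of Python. This is only an illustrative sketch, not a description of how any real model works: the `predict` function, the toy training data, and the function names are all invented for the purpose of the contrast.

```python
# Toy contrast: *calculating* an answer versus *predicting* one.
from collections import Counter

def calculate(a, b):
    # The spreadsheet-style approach: apply the arithmetic rule directly.
    # Right answer, for the right reason, every time.
    return a + b

def predict(a, b, training_examples):
    # A caricature of statistical prediction: return the most common
    # answer seen for this problem in the training data, right or wrong.
    answers = [ans for (x, y, ans) in training_examples if (x, y) == (a, b)]
    if not answers:
        return None  # never seen this problem: no rule to fall back on
    return Counter(answers).most_common(1)[0][0]

# Invented training data, including one wrong example.
training = [(2, 2, 4), (2, 2, 4), (2, 2, 5), (3, 4, 7)]

print(calculate(2, 2))            # 4, by calculation
print(predict(2, 2, training))    # 4, but only because 4 was the most frequent answer seen
print(predict(10, 17, training))  # None: nothing to imitate, no rule to apply
```

Even when `predict` agrees with `calculate`, it does so by imitation, not by arithmetic, which is the whole point of the argument above.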

I am going to shout this: **PREDICTION IS NOT - AND NEVER CAN BE - CALCULATION**

What we can learn from the 'birth of the spreadsheet' is that genAI isn't doing what it appears to be doing, and if we fall for the hype and let it replace thinking and working things out, we will have given up on the search for truth and become a species that treats every decision as a bet.