July 12, 2021•531 words
The term "AI" is one of those words so overused by corporate marketing departments that all the meaning has been worn out of it. As used today, it describes things which certainly do not match its original meaning (machines which could think, or at least demonstrate signs of intelligence). These are not machines which can think; they are machines which can crudely, repeatedly, and quickly crunch large amounts of data, extract millions (or more) of conditional probabilities, and use those to predict what is likely to happen in the future. Effective? Yes. Thinking? No.
It's not exactly a coincidence that this has happened. I don't have numbers to hand, but I'm certain that a product labelled an "AI-powered car saving lives" sells better than a "mathematical mystery tour of conditional probabilities in a box on wheels responsible for not killing you;" that the "AI-enhanced camera" is more popular than the "over-exposed camera that will mash up your photos in otherwise interesting ways;" and, perhaps more sinister, that "AI-powered border safe-keeping technology" is more palatable to most than "a bunch of cameras and barbed wire intended to keep refugees out."
"AI" has become an anodyne term that does more to obscure than to illuminate and explain. This should probably be the first observation that any "critical AI" scholar makes: at least some consideration of what the term means, where it came from, and what it is used for. Blindly accepting the term "AI" as given, and using it as the basis for analysis of modern data processing and storage systems, benefits those who use the term to obscure what they are doing, or (worse) as a euphemism for the horrible consequences that their software systems can bear.
As such, the notion of the "critical AI" scholar feels like an oxymoron. Describing mass data collection and processing as "AI" is problematic if you want to carry out "critical" analysis. If machine systems are "intelligent," this suggests that their output need not be subjected to the same level of scrutiny as we apply to other industrial systems (planes, chemical refineries, nuclear reactors, and so on). If we repackage what people presently consider ethically repulsive as something new, suggesting that we are on the frontier of some brave new world brought into being by technology – which is usually not true, since even where the current techniques are new, what they do is in effect what computer systems have always been intended to do: process large volumes of information in order to derive usable output – then that is effectively a form of deception. And conflating data processing with what the term "AI" used to mean – a meaning since rebranded as "AGI" (artificial general intelligence) – is fundamentally not critical analysis, because it neglects the fact that language has meaning and shapes how we think!
In its present usage, the term "AI" conflates different ideas and elides meaning. It carries unspoken assumptions. Surely the purpose of scholarship should be to make things clearer and to improve our understanding; accepting the term "AI" without considering the ramifications of doing so is antithetical to that purpose.