The Cyborg Thinks: Preliminary Notes

[This is a series of preliminary notes, a beginning toward a somewhat larger project of mine. How big it will be, I'm not sure yet. Enjoy the ride.]

the ideas men

Finally, the most shameful moment came when computer science, marketing, design, and advertising, all the disciplines of communication, seized hold of the word concept itself and said: "This is our concern, we are the creative ones, we are the ideas men! We are the friends of the concept, we put it in our computers."

  • What is Philosophy?, Gilles Deleuze and Félix Guattari

What is the new? Is it what presents itself as new? Artificial intelligence presents itself as the new. It's the neophilia of the technology industry, for all its rationality, animated by its own peculiar animal spirits. Yes, GPT-3, ChatGPT, Sydney and that elusive harbinger of the future, GPT-4, are all new. And for that... Does it not seem that every revolution is not necessarily a break with the past, but an intensified repetition of what went before it? Christianity became Christianity in its atheism, for one. Perhaps one day, with the clarity of history, we might see the various political revolutionaries and those whom they toppled blurring together. So it is, as of yet, with the AI revolution.

Thesis: The current wave of artificial intelligence is not a break with the "system", the contemporary world, but rather the intensification of trends that already exist.

This does not mean that there has been no change, that we are still living in 2019 or 2015 or 2010. Quite the opposite! But this change is not a qualitative one, precisely because disruption is internal to capitalism itself. Look at the visual arts, with Stable Diffusion. Is this not what popular art has been tending toward, being reduced to an illustration divested of all "higher" or "aesthetic" pretensions?

ChatGPT can replicate that sort of flattened-out writing precisely because this mediocre glut is what fills the Internet. The Buzzfeed quizzes of yesteryear might already have been written by ChatGPT, in their mindless stupidity. They were inadvertently designed from the get-go for future automation. "The product would be the same — the only difference would lie in the means. In this sense, capital has already appropriated all these sectors of human industry, whether it be news, academia, TV shows, or young adult fiction. The interests of capital that select for this or that human-produced content are aligned with those that an Artificial Intelligence would itself produce. The difference is not in the result, but in whether a human or a machine made it. Clickbait-driven outrage and conspiracy theories could be produced by humans, but all the same, an algorithm could engineer them to maximize internet traffic." (Ulysse Carrière)

The market might decode with one hand, but it recodes with the other. (See Deleuze and Guattari's Anti-Oedipus.)

Bernard Stiegler has written on what he calls the proletarianisation of thought. There have already been numerous think-pieces on the unoriginality of ChatGPT, its banality, and how it could never replicate the human mind, their authors lapsing into all sorts of quasi-Romantic metaphysics about the human subject. What they miss is how human thought has already become ChatGPT-esque. Already impoverished, already proletarian. LLMs will only accelerate this existing trend. The LLM is itself a product of this proletarianisation, made under its auspices, as it were, and is in turn given the space to accelerate it.

It is, then, of little surprise that there has been some chatter on the usage of GPT, of LLMs, as prostheses to the human mind. Of all the ideas men who have written on it, it is NicholasKees and Janus (from here on, N&J) who have written on cyborgism with GPT. They are the most original of the bunch, and portents of our post-GPT-4 future. Their purpose is the project of AI alignment, but it will not stop there, certainly not.

The cyborgization of thought presents particular challenges to thought. Can the cyborg think? What is the relation between technē and the thinker?

frames and framing

The cut ups can be applied to other fields than writing. Doctor Neuman [sic] in his Theory of Games and Economic Behavior introduces the cut up method of random action into game and military strategy: assume that the worst has happened and act accordingly. If your strategy is at some point determined ... by random factor your opponent will gain no advantage from knowing your strategy since he can not predict the move. The cut up method could be used to advantage in processing scientific data. How many discoveries have been made by accident? We can not produce accidents to order. The cut ups could add new dimension to films. Cut gambling scene in with a thousand gambling scenes all times and places. Cut back. Cut streets of the world. Cut and rearrange the word and image in films. There is no reason to accept a second rate product when you can have the best. And the best is there for all. "Poetry is for everyone" . . .

  • William Burroughs, The Cut Up Method

Generations have used such fundamentally aleatory methods, as Burroughs discusses, as a stimulus to thought. It's the production of entropy that is at stake here, the artificial production of serendipities and juxtapositions, to force patterns that might be of value. Burroughsian cut-up is only one such strategy - one can also mention Oblique Strategies, the tarot, bibliomancy, or even dice (The Dice Man). Burroughs even played around with computers and recording technology for a little while, with Ian Sommerville, but I wonder if ChatGPT might give him pause.
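
For the curious, a minimal sketch of the procedure (the word-level fragments, fragment length, and shuffle are my own arbitrary choices; Burroughs worked with scissors on the physical page):

```python
import random

def cut_up(text, fragment_len=4, seed=None):
    """Crude Burroughs-style cut-up: slice a text into short runs of
    words and reassemble the runs in random order."""
    rng = random.Random(seed)
    words = text.split()
    fragments = [words[i:i + fragment_len]
                 for i in range(0, len(words), fragment_len)]
    rng.shuffle(fragments)
    return " ".join(word for frag in fragments for word in frag)

print(cut_up("the heart asks pleasure first and then excuse from pain", 3))
```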

Anyway, there's a cut, a break, between ChatGPT and the rest (GPT-2 and GPT-3, together with Char-RNN, are on the wrong side of this divide. But of course, even the fact of this divide is not absolute, but...). The novelty of modern GPT technology is how overdetermined it is by its training data. Burroughsian cut-up on love songs creates odd juxtapositions and turns of phrase that might trigger your thinking in a particular direction (see the lyrics of David Bowie, a disciple of Burroughs). GPT-2 might be able to drag the love song into strangely deranged verse, but it is ChatGPT that can replicate the conventions of the love song in an acceptably mediocre fashion. It is here that we begin to diverge from N&J, in their description of GPT's superpowers. High-variance thought? Something sounds off. Sam Kriss has already written on the declining creativity of LLM technology. But that's the RLHF, I suppose, putting the LLM in a straitjacket. That's true, and non-RLHF models aren't too bad at creativity. But the fact of these limitations does reduce one's enthusiasm for LLMs' divergence a tad.

N&J continue to discuss the various superhuman strengths of GPT. Small changes in the prompt can lead you down divergent branches of GPT output, and the simulator's ability to think "from scratch" means that it does not get bogged down in myopic thinking. The machine contains a panoply of hats. (He do the Police in Different Voices.) And despite our reservations earlier, there is a fair bit of variance in GPT output. We can let the simulator run, and let the human direct it a little. A game of AI Dungeon, except to generate essays and books and blog posts, not an RPG, turning play into work. Perhaps we can use this to open up the phase space of thought, open up areas that might not be easily reachable. Send GPT-as-simulator into the wastes. To go where no man has gone before, essentially. "What we are calling a “cyborg” is a human-in-the-loop process where the human operates GPT with the benefit of specialized tools, has deep intuitions about its behaviour, and can make predictions about it on some level such that those tools extend human agency rather than replace it." A lot of GPT's superpowers, we notice, might have been available already in its intellectual ancestors - didn't David Bowie or Brian Eno, Ludwig Wittgenstein or William Burroughs send themselves flying toward the limits of their work? What GPT can do is exactly that, but with more focus. Cut-ups and funny cards might be fine for musicians and writers, but they wouldn't, well, have the brainpower for AI alignment. (Eliezer Yudkowsky staying up at midnight, feverishly fumbling with a pair of scissors, a pot of glue, and a hundred pieces of paper. The utility function for Friendly AI is somewhere here...)

N&J are right that it would be useful, and help productivity... but what is the agency of the human here? On one level, they are not beholden to the fragmented personalities locked inside LLMs, Luigi or Waluigi. But the overdetermination of GPT's output makes one wonder how "free" the thinking of GPT is. How much can this simulator traverse the search space of thought, and how much agency does the human in the loop have, anyway? I have discussed earlier the rising wave of stupidity that has made a clearing for the birth of our AI, our artificially mediated mediocrity. I'm interested in the agency of the thinker in the loop, not necessarily from the perspective of superhuman AI, but from that of thought itself - how free is thought in this human-GPT matrix?

Lumpen Space Princeps (@lumpenspace on Twitter - they're great, by the way!) has mentioned that GPT can be thought of as a collection of Akashic records. A subset of the "set of all possible books, stories, conversations, that mankind could have or will ever write or utter", one that is searchable. Your prompt begins the story. "The completion is taken from one of the books which contain your initial phrase." The process might bring up all sorts of characters, called up and down like daemons, as long as the correspondences of the prompt fit. What are the limitations of the search? We might imagine a large phase space of possible texts, and from that, we might imagine GPT as being able to traverse this space. A simulator that can push into the depths of this space, and come back with high-variance thoughts, all sorts of multiversal divergent chains. "From the model’s perspective, the trajectory currently being generated has already happened, and it is just trying to make accurate predictions about that trajectory. For this reason, it will never deliberately “steer” the trajectory in one direction or another by giving a less accurate prediction, it just models the natural structure of the data it was trained on." (N&J, emphasis mine) The sort of topology of the space of text, and the space of the statistical model itself, is something that has not been touched on at all. N&J represent it using lines on a plane, which misses the nature of the spatiality involved. (The mathematician Gilles Châtelet's Schellingian critique of the concept of space, of spatiality, before Hermann Grassmann's work on linear algebra in the Ausdehnungslehre is relevant here. See Figuring Space, pp. 101-106.)
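
A toy model makes the quoted point concrete. The sketch below is a deliberately trivial stand-in for the transformer (a bigram counter, nothing more, and no claim about GPT's internals): generation is nothing but repeated sampling from learned conditional distributions, with no step at which the model "steers" the trajectory.

```python
import random
from collections import defaultdict

# A tiny "training set" and a bigram model of it: for each word,
# count which words follow it in the corpus.
corpus = "the cat sat on the mat and the dog sat on the cat".split()
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(start, n, rng):
    """Autoregressive generation: each step just samples the next word
    from the conditional distribution given the previous word."""
    out = [start]
    for _ in range(n):
        nxts = counts[out[-1]]
        if not nxts:
            break
        words, weights = zip(*nxts.items())
        out.append(rng.choices(words, weights=weights)[0])
    return out

print(" ".join(generate("the", 8, random.Random(42))))
```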

The nature of style is a useful model for looking at the novelty of LLMs. (Actually, it would seem that revolutions, new advances, always involve changes in style. Grassmann, Argand, and Einstein all present new styles of doing mathematics and physics.) Literary style is a good proxy. Thought experiment: would GPT-3, GPT-4, or any future GPT-n, trained exclusively on pre-1900 text, be capable of writing the great modernist classics of the 20th century, e.g. Joyce's Ulysses, Proust's À la recherche du temps perdu, Eliot's The Waste Land? Remember that AI/GPT maximalists claim that GPT will be able to write better than human writers, that it can surpass them. It really does excel at the task of writing after a model, a particular frame. What surprises me, on reading Swann's Way, the first volume of Proust's La Recherche, is how strikingly improbable it is, even for a post-Proustian language model, or post-Proustian human writers. "What we have here is in fact a clearly defined pathological case." (Jacques Normand) But there it is, a work that breaks with the statistical and other regularities of the literature before it.

And if we are to think of art as poetic techne, we will not be too far off the mark either; but the thesis that Artificial Intelligence, as the product of poetic techne, operates the same thing as art, would further demand that we think art as the reproduction of a model. This is eminently true of derivative art; but what of the art that brings something into existence which previously did not exist — what of the art that does not extrapolate from previous data? Truly, the art in question concerns a minuscule fraction of the overall artistic production of humanity. But it is the art that counts; and the art that Artificial Intelligence cannot produce. There would then be, in art, two general strands: an art of the model, and an art without a model. A non-derivative art, having no model, can however constitute itself as a model, and this is the operation of classicism. Raphael provides an example of such art. There is nothing like Raphael before Raphael; and there are centuries of Raphaelites after him. But there is also Titian. No one paints like him in 1520, and he has no followers upon his death in 1576. He has no model, and he does not make his art into a model. And yet Rembrandt will understand him, and Turner also; not as a model, but as a possibility for something else. It’s not a matter of finding a model, but one of creating something new and previously unthinkable, and Titian allows Rembrandt to do just that. Here, art operates as an output that one plugs onto, to take it further.

  • Ulysse Carrière, 'Technically Man Dwells upon this Earth: The Work of Art in the Age of its Automated Production'

Now, this does not mean that GPT-1900 could not produce Proust, Joyce, or Burroughs. But I would wager that to coax it into exploring that region of the space, one would have to have Joyce or Proust in mind already, or would have to be Joyce or Proust. The similarity between prompt engineering and Platonic anamnesis is a heretofore under-explored area. We can only really draw out from an LLM what is already in its training data, or what we already have in mind in some form. How, then, can we get it to reliably produce something new and interesting? We could turn up the temperature of the model and get it to produce all sorts of low-probability gibberish, but for that we might as well have gone back to Burroughs's scissors and glue, and anyway, it would not produce a Proust or a Mann or a Joyce. So much for the creativity of the machine. We begin with hard-headed facts and end up in Socratic aporia. That it cannot think without a model... This is not because there is some spark of the soul in the human which is foreclosed to the robot. But there does seem to be some limitation on the cognition of LLMs, especially when it comes to the new, the novel, which human minds are rich enough to deal with. This is a problem of the frames and framing of thought-space, which is identified with text-space, identified with the space that the LLM traverses. It would not be much of an issue if the levelled-out mediocrity of the LLM were not already the dominant form of discourse, determined by social, economic and political factors. (To blame the Algorithm for the culture war is... right, sure, but the trend toward the culture war was already there, and the situation of the culture war was the paradigm in which the Algorithm was born.)
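
What "turning up the temperature" does can be stated precisely: sampling probabilities are a softmax of the model's logits divided by T, so low T approaches greedy decoding while high T flattens the distribution toward uniform noise, which is exactly where the low-probability gibberish comes from. A minimal sketch (the three-word vocabulary and the logit values are invented for illustration):

```python
import math
import random

def sample_with_temperature(logits, T, rng):
    # Softmax with temperature: p_i is proportional to exp(logit_i / T).
    weights = [math.exp(l / T) for l in logits.values()]
    return rng.choices(list(logits), weights=weights)[0]

logits = {"love": 3.0, "grief": 1.0, "xylophone": -2.0}  # invented values
rng = random.Random(0)
for T in (0.2, 1.0, 5.0):
    draws = [sample_with_temperature(logits, T, rng) for _ in range(1000)]
    # Low T: almost always "love". High T: "xylophone" starts appearing.
    print(T, {t: draws.count(t) for t in logits})
```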

the extended stupidity thesis

What does it mean to be a mental prosthesis? Clark and Chalmers provide the standard form of the argument when they discuss the use of a notebook as a prosthesis for memory: they offer the example of an Alzheimer's patient who has to write down his directions in a notebook to compensate for his memory, and compare him to someone who does not have Alzheimer's and can recall her directions from memory. Why am I discussing Clark and Chalmers? They speak for a paradigm, a particular way of framing, that is crucial to discussions of transhumanism and cyborgization.

Clark and Chalmers have stumbled onto something correct, in that yes, cognition can be thought of as extended, but the way they think of cognition and its extension is flawed. Their sleight-of-hand is an audacious equivocation, and in the process they erase what exactly it is that they have done. I mention them only because they are emblematic of this particular way in which cognition and prosthesis are approached. On one hand there is "memory" as a particular process in the brain, with all the messiness that entails: various neural modules interfacing with each other, presenting itself phenomenologically in particular ways (as studied by, e.g., Husserl). On the other there is "memory" as the abstracted model of psychology, a neutral store of symbolic information applied indiscriminately to brains, notebooks, and computer "memory". The difference between the two is collapsed onto the side of the latter.

This, of course, misses the qualitative difference between the two cases. We might note that one who uses a notebook has a particular orientation toward the world that the non-writer does not have, a "pre-theoretical" approach that cannot be reduced to the level of manifest beliefs or symbolic mental contents. From a phenomenological perspective, the notebook-as-store-of-information solicits particular attitudes, or behaviours, from the writer, and it interacts with other equipment in their life in ways the non-writer never encounters. We can even take a more scientific-materialist approach and note that the brains of the writer and the non-writer would differ, the former rewired in ways shaped by the notebook. Of course, we can discuss the "extended" nature of the notebook, but certainly not in the sense of Clark and Chalmers, in that the notebook is not an externalization of an (abstracted) faculty of memory. What we have is Whitehead's fallacy of misplaced concreteness in action.

Is that so bad? it might be asked. Regardless of the neural or phenomenological correlates of note-taking, is it not at least useful in some respect... does it not open up new ways of thinking, or acting? Geometry, for one, would be impoverished without writing. The issue is rather the reductive flattening associated with the thought of the prosthesis, which "externalizes" an impoverished human faculty onto the tool, the machine, and then projects this back onto the human. This has nothing to do with some special anthropocentric inner light that belongs to humans and humans alone, one that machines could never touch. By all means, machines occupy their unique niches and their own modes of existence, and it is not a priori impossible that there might be an artificially intelligent thought. But what is happening is not a trajectory that would lead us toward thought, but rather toward an artificial stupidity.

In fact, the issue with memory now is not just that it is thought of in that reductive way, but that it has slowly been adapted into being this flattened storage of information. We see something similar with intelligence, with the powers and dynamics of the human brain collapsed onto a single dimension, IQ, the measure of a decoupled, "abstract" manipulation of clearly defined mental representations, adapted somewhat to the modern technological-economic system. We now pass on to the transhumanists, and their fetishization of IQ, of "intelligence", in their sincere hope of creating rational homo economicus agents, disembodied and "high-decoupling". Despite its grandiose claims, transhumanism ends up being the fulfilment of anthropocentric humanism, in that instead of opening up a new universe of possibilities, what it does is restrict the possibilities that are already available. Gilles Deleuze and Félix Guattari's comments on drugs can be easily applied to transhumanism. Replace "drug addict" with "transhumanist": "[Transhumanists] continually fall back into what they wanted to escape: a segmentarity all the more rigid for being marginal, a territorialization all the more artificial for being based on chemical substances, hallucinatory forms, and phantasy subjectifications." And: "[Transhumanism does] not guarantee immanence; rather, the immanence of [transhumanism] allows one to forgo them." This is not a rejection of transhumanism per se, of cyborgization, but the method of transhumanism, of Janus-style cyborgization, does not necessarily provide the radicality, the novelty, that it promises. N&J look to the future, but is this a future that is genuinely open, agentic? (Putting aside "AGI" risk for now.)

Every generation has its model of the mind; what the mind is seen as and what it is made to be end up spiralling together. The old calumny against Hegel, that he wanted to make the real rational, is what is being played out here. First, minds were, on the Cartesian model, running mental representations together, deducing this or that about the world with particular world models - the sort of thing criticised by Dreyfus in his early work against GOFAI. Now it's that humans are statistical models, stochastic parrots, as Sam Altman puts it. Of course, art is assembled out of what's seen; the relation between the artist and her work is taken to be pretty much the same as the relationship between Stable Diffusion and its training set, which collapses the former into the latter - and the former is being collapsed into the latter, in practice. This is the problem of technics, of how to properly extend agency without impoverishing it.

A little space is opened up - but insofar as that space falls back on the structuring features of the Old, whatever is New passes in by way of the Old.

for the future

A series of notes on what I think should be done, and some of my own future intellectual programs.

  • One thing that has become clear is the amount of theorizing already done that has to be joined up to the LLM wave. There is work on technics in Bernard Stiegler and in Gilbert Simondon, especially Stiegler's analysis of tertiary retention. Bruno Latour's Actor-Network Theory (ANT) and his work on the modes of existence look like promising approaches here. The point of theory is not the mode of pure recognition, "Oh, that's what it was... LLMs are just...", locking the new up inside a particular theoretical-philosophical box. Rather, it is to rise to the level of the provocation of what is new, to not just declare that GPT falls under this or that passage in Deleuze or Derrida or Serres, but to see what each can open up in the other. The work in "media theory" and related disciplines is also of interest. Friedrich Kittler is famous for his analysis of Nietzsche's use of a typewriter, how the typewriter made him write, and made him think, in this way or that. What we need is a Kittler of the LLM.

  • On that note, we need an update of the theory we already have on AI. Hubert Dreyfus is dead, and What Computers Still Can't Do has to be updated for the current era of software. 'Why Heideggerian AI Failed and How Fixing it Would Require Making it More Heideggerian' was written in 2007. Reza Negarestani has written Intelligence and Spirit, and the sections I've read so far are stimulating, but I think the loop has to be closed - Reza might have started off as an engineer, but his current audience isn't really full of engineers, which I think is a shame. Bridges, I'd say, have to be built. Hubert Dreyfus's work has as its background the work of Martin Heidegger and Maurice Merleau-Ponty; the critiques and extensions of phenomenology by Jacques Derrida, Gilles Deleuze, and Alain Badiou should also be taken into account.

  • There's a lot of chatter on this, and some of it is quite good and interesting - the discussion of Simulators and Waluigis, for one. There certainly needs to be more discussion of how these are implemented in a neural network, in a statistical model, at the very least so that we might elucidate, in a scientific manner, what such models are capable of. What we see instead is the irresponsible throwing around of words like "understand", "intelligent" and so on, the bases of which aren't articulated, and which fall back on a childish "common sense" that greets attempts to elucidate what intelligence or understanding is, or might be, with derision. They are lost in a farrago of confusion, an orgy of stupidity. This is not the figure of a "rationalist", but the nihilist at the end of the world, his end directed toward a meaningless "disruption", a disruption and radicalism that is coincidentally aligned with capitalism as it already exists (witness the absurd spectacle of effective accelerationism). There needs to be more research on LLMs - proper, replicable research. ChatGPT's training data, RLHF and weights are all inaccessible - it's tricky reverse-engineering ChatGPT's innards from ChatGPT transcripts. Is this particular response in the training data? Who knows? There have to be, I suppose, toy models that are open, with a public dataset and public weights, and perhaps a way of giving the probability distribution alongside the output, so that LLM results can be replicable and transparent (a minimal sketch of what such a record could look like follows this list). The ChatGPT/Bing discourse is on the level of memes, of context-less screengrabs. LLMs have been treated as pixie dust for too long. The time has come to move past it.

  • Bridges with other disciplines. Chomsky's article aside, we need to be able to connect with experts on the other side of the aisle. It's arrogance to claim that LLMs have "solved" computational linguistics or philosophy. There need to be experts who can grasp the technical matter of the technology, and technologists who try to connect with the experts. My own amateur thoughts on linguistics - I wonder what sorts of linguistic forms, or codes, LLMs are able to replicate, and what sort of training data they need to be able to reproduce them. An LLM might be able to follow the conventions of the letter of condolence, of the police report, of the ... how does that happen? What are its limits? Linguistics, psychology...
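
As promised above, a very rough sketch of what a replicable, transparent generation record could look like. The "model" below is a stub table of distributions, and every name in it is invented for illustration, not any real lab's API; the point is only that a published seed plus the full next-token distribution makes a transcript checkable.

```python
import json
import random

# Stub "model": a fixed, public table of next-token distributions.
MODEL = {
    "The robot": {"dreams": 0.5, "halts": 0.3, "sings": 0.2},
    "dreams": {"of": 0.9, "quietly": 0.1},
}

def step(context, seed):
    """One generation step that logs everything needed to replicate it:
    the context, the seed, the full distribution, and the sample."""
    dist = MODEL[context]
    rng = random.Random(seed)
    token = rng.choices(list(dist), weights=list(dist.values()))[0]
    return {"context": context, "seed": seed,
            "distribution": dist, "token": token}

# Anyone with the same table and seed gets the same record.
print(json.dumps(step("The robot", seed=7), indent=2))
```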

