Mo Bitar


Notes to self. Working on Standard Notes, a simple and private notes app.

What happens when an AI learns to read?

There's something old-fashioned about trying to predict the future. I get a little uneasy when someone says "if it's like this now, imagine what it'll be like 10 years from now!" I feel as though the future is being robbed. A modern person attempting to predict the future conjures fantasies and prophecies as quaint as those of a first-century prophet. Although I too can't help but let my mind run with seemingly autonomous calculations that extrapolate a future value from a present value, I find the habit not respectful enough of the complexity of the human system. And were I so keen at this skill anyhow, I'd have made a fortune in the markets.

Predictions of the future are so prevalent as to be quickly forgotten and overrun by their never-ending onslaught. By one interpretation, the thousand newspapers that encompass the likes of the New York Times are precisely in the business of interpreting present values and assuming their future state. That's why I feel a sense of wariness when I encounter confident statements. I'd prefer articles contain more question marks than periods, as that would better reflect the true factual nature of any complex situation. Sure-of-themselves statements and predictions feel like one of those shady websites that, rather than asking my permission to install new software, begins a download the moment the page loads. It feels dirty.

The most prevalent issue on which we let our minds run unbounded is AI. Can you imagine how smart algorithms will be if they're this smart now? Ah, the human and their unrelenting thirst for exponential growth. Of course, we have no reason to be anything other than optimistic. Just look at how quickly we went from brick-sized satellite phones to edgeless "retina" displays. So sure, one way to interpret this would be that we'll have actual retina implants in twenty years if we continue at this rate.

But what of the respect for limits? For miscalculations? For failure, bankruptcy, and politics? What of the respect for the complexity of biological organisms? I could just as easily imagine a future in which we come to realize that perhaps machines are not as capable of self-learning as we thought. We've been riding on the cool assumption that computers can do things faster than humans can, so if an AI learns to read and understand what it reads, then it can theoretically read all the books ever written in a single second, and boom—there goes the singularity.

But when have we ever been right about predicting the future? What if the human algorithm turns out to be a slow one, with no physical capacity for performance increase? Yes, a computer can do things a trillion times a second. But in that time it calculates nothing more impressive than the location of an item in a database, or the weight of a neural node. A single Google search consumes 0.3 Wh of electricity. I saw an Alexa commercial recently where a lady wakes up from her sleep in the middle of the night after hearing a startling sound, and wastes no time in asking her intelligent AI assistant "Alexa, what the fuck time is it?" Nice. Surely, no fewer than a billion calculations must have occurred for Alexa to give this helpless human the time. Less than a second of computation time, sure, but still, at least some 300ms.

So what does this technology at scale really look like? An AI that one day snaps into consciousness and assumes all human knowledge in a fraction of a second? Or more like a cryptocurrency network that must balance computational complexity with convenience and accessibility? If I had to let my mind wander, I'd assume the future plays us all, and takes on some shocking twist of revealing some human-brain speed limit for computations in any medium. We'll build an AI so advanced that it can read and understand with unprecedented accuracy, but it will still take two days—a full 48 hours' worth—of computation time to read a full book, faring no better than a high school student, and alas, postponing the human fetish for looming singularities.

It took Elon Musk billions of dollars and several years of attempting to build car-making robots before admitting that humans are underrated and adopting an updated stance involving more human collaboration in the process. And yet if you do find yourself in one of those Teslas and happen to turn on Autopilot going 80mph on the highway, the folks at Tesla like to remind you: never take your hands off the wheel.
