What Happened When Computers Learned How to Read

Computers love to read. And it isn’t just fiction before going to bed. They read greedily: all literature, all of the time—novels, encyclopedias, academic articles, private messages, advertisements, love letters, news stories, hate speech, and crime reports—everything written and transmitted, no matter how insignificant.

This ingested printed matter contains the messiness of human wisdom and emotion—the information and disinformation, fact and metaphor. While we were building railroads, fighting wars, and buying shoes online, the machine child went to school.


Literary computers scribble everywhere now in the background, powering search engines, recommendation systems, and customer service chatbots. They flag offensive content on social networks and delete spam from our inboxes. At the hospital, they help convert patient-doctor conversations into insurance billing codes. Sometimes, they alert law enforcement to potential terrorist plots and predict (poorly) the threat of violence on social media. Legal professionals use them to hide or discover evidence of corporate fraud. Students are writing their next school paper with the aid of a smart word processor, capable not just of completing sentences, but of generating entire essays on any topic.

In the industrial age, automation came for the shoemaker and the factory-line worker. Today, it has come for the writer, the professor, the physician, and the attorney. All human activity now passes through a computational pipeline—even the sanitation worker transforms effluence into data. Like it or not, we have all become subject to automation. To survive intact, we must also learn to become, in part, software engineers.


If any of the above comes as a surprise to you, my job, I feel, is mostly done. Curiosity piqued, you will now start noticing literary robots everywhere and join me in pondering their origins. Those not surprised perhaps believe (erroneously!) that these siliconites have learned to chat only recently, somewhere in the fields of computer science or software engineering. I am here to tell you that machines have been getting smarter in this way for centuries, long before computers, advancing on far more arcane grounds, like rhetoric, linguistics, hermeneutics, literary theory, semiotics, and philology.

So that we may hear them speak—to read and understand a vast library of machine texts—I want to introduce several essential ideas underpinning the ordinary magic of literary computers. Hidden deep within the circuitry of everyday devices—yes, even “smart” light bulbs and refrigerators—we will find tiny poems that have yet to name their genre. In this sense, these computers are replete not just with instrumental capacity (to keep food cold or to give light) but with potential for creativity and collaboration.

It’s tempting to ask existential questions about the nature of artificially intelligent things: “How smart are they?” “Do they really ‘think’ or ‘understand’ our language?” “Will they ever—have they already—become sentient?”

Such questions are impossible to answer (in the way asked), because the very categories of consciousness derive from human experience. To understand alien life forms, we must think in alien ways. And rather than argue about definitions (“Are they smart or not?”), we can begin to describe the ways in which the meaning of intelligence continues to evolve.

Not long ago, one way of appearing smart involved memorizing a bunch of obscure facts—to become a walking encyclopedia. Today, that way of knowing seems like a waste of precious mental space. Vast online databases make effective search habits more important than rote memorization. Intelligence changes. The puzzle of its essence cannot therefore be assembled from sharp, binary attributes, laid out always and everywhere in the same way: “Can machines think: yes or no?” Rather, we can start putting the pieces together contextually, at specific times and places, and from the view of an evolving, shared capacity: “How do they think?” and “How do we think with them?” and “How does that change the meaning of thinking?”

In answering the “how” questions, we can discover a strange sort of a twinned history, spanning the arts and sciences. Humans have been thinking in this way—with and through machines—for centuries, just as they have been thinking with and through us. The mind, hand, and tool move at once, in unison. But the way we train minds, hands, or tools treats them almost like entirely separate appendages, located in different buildings, in unrelated fields on a university campus. Such an educational model isolates ends from means and means from ends, disempowering its publics. Instead, I would like to imagine an alternative, more integrated curriculum, offered to poets and engineers alike—bound, eventually, for a machine reader as part of another training corpus.

Next time you pick up a “smart” device, like a book or a phone, pause mid-use to reflect on your body posture. You are watching a video or writing an email perhaps. The mind moves, requiring mental prowess like perception and interpretation. But the hand moves, too, animating body in concert with technology. Pay attention to the posture of the intellect—the way your head tilts, the movement of individual fingers, pressing buttons or pointing in particular ways. Feel the glass of the screen, the roughness of paper. Leaf and swipe. Such physical rituals—incantations manifesting thought, body, and tool—bring forth the artifice of intellect. The whole thing is “it.” And that’s already kind of the point: Thinking happens in the mind, by the hand, with a tool—and, by extension, with the help of others. Thought moves by mental powers, alongside the physical, the instrumental, and the social.

What separates natural from artificial forces in that chain? Does natural intelligence end where I think something to myself, silently, alone? How about using a notebook or calling a friend for advice? What about going to the library or consulting an encyclopedia? Or in conversation with a machine? None of the boundaries seem convincing. Intelligence demands artifice. Webster’s dictionary defines intelligence as the “skilled use of reason.” “Artifice” itself stems from the Latin “ars,” signifying skilled work, and “facere,” meaning “to make.” In other words, artificial intelligence just means “reason + skill.” There are no hard boundaries here—only synergy, between the human mind and its extensions.

What about smart objects? First thing in the morning, I stretch and, at the same time, reach for my phone: to check my schedule, read the news, and bask in the faint glow of kudos, hearts, and likes from various social apps. How did I get into this position? I ask alongside the beetle from Kafka’s The Metamorphosis. Who taught me to move like this?

It wasn’t planned, really. Nor are we actually beetles living in our natural habitat, an ancient forest floor. Our intimate rituals morph organically in response to a changing environment. We dwell inside crafted spaces, containing the designs for purposeful living. The room says, “Eat here, sleep there”; the bed, “Lie on me this way”; the screen, “Hold me like this.” Smart objects further change in response to our inputs. To do that, they must be able to communicate: to contain a layer of written instructions. Somewhere in the linkage between the tap of my finger and the responding on-screen pixel, an algorithm has registered my preference for a morning routine. I am the input and output: the tools evolve as they transform me in return. And so, I go back to bed.

Excerpted from Literary Theory for Robots: How Computers Learned to Write. Copyright 2024 by Dennis Yi Tenen. Used with permission of the publisher, W.W. Norton & Company, Inc. All rights reserved.
