Will the ironies that plague the demise of print never end? Just as neuroscience arrives to explain how the brain evolved our reading and writing abilities, which took their furthest leap forward with the advent of Gutenberg’s press, the once-stable relationship between discrete book and private reader is being recast by new digital text platforms: Web page, eBook, and iPhone.
What’s more, publishing on paper, linear thinking, literary hierarchies, metanarrative legitimacy, not to mention the humanist claims of literacy and democracy, all are being remade. Only five hundred years into movable type and the Enlightenment/Romantic/Modern culture it begat—and suddenly we are blinded by how brief our dwelling in the kingdom of print will be.
One way to situate this monumental change is to understand reading from the brain’s perspective. Two current books stand out: Maryanne Wolf’s Proust and the Squid: The Story and Science of the Reading Brain and Stanislas Dehaene’s Reading in the Brain: The New Science of How We Read. According to Dehaene, director of the Cognitive Neuroimaging Unit in Saclay, France, our brains were built to read, but each person has to learn how. Reading is not innate; it’s cognitive.
What’s more, humans have just begun to read and write. “The invention of reading,” he says, “is far too recent for our genome to have adapted to it.” Wolf, a professor of child development at Tufts University, notes that the brain integrates a series of component functions—geared for visual, auditory, and associative activity—that “light up,” or purposefully integrate, during the act of reading. A brain reading is a collage of active substrates and multi-level neuronal fireworks.
Dehaene, who is more philosophically attuned than Wolf—though Wolf is the more elegant author—unravels the chief mystery: how our brains combined spoken language with written symbols and evolved the inscription of words “to fit” our cerebral cortex. “How did humans discover,” he asks, “that their visual cortex could be turned into a text comprehension device?”
You can almost hear the answer in the question. Five thousand years ago, our brains began to shape inscription with expressive constructions like cuneiform, hieroglyphics, and Chinese ideograms. With us as toolmakers, the brain regularized these inventions, eventually universalizing them into alphabets. All this was done to (a) simplify eye-text recognition; (b) coordinate the brain’s many paths of perception, all highly participatory; and (c) push us to write, or to contemplate ideas more conscientiously for ourselves and, in the process, develop original thought.
The brain has not yet programmed reading into the species because both reading and the species are still in their childhood, in largely “modifiable” states. As our culture invents new platforms with which to read and process information, the brain, Dehaene notes, will adapt its “ancient neuronal circuits” to “new cultural objects, selected because they are useful to humans and stable enough to proliferate from brain to brain.” The brain absorbs the iPad and its protocols in a matter of weeks or months. Happily for Apple, no change in the genome is required.
Brain and text make a chummy pair. When we read words we know, we get them in a flash. Adding a new word to an invariable syntax (poetry is often an exception) may slow us down, but we adapt: We say a new word aloud, look it up, or understand it in context. By contrast, our senses (the lizard or pre-literate brain) are innate. Every generation does not “learn to smell,” but every generation does learn to read—from scratch. Or, as Noam Chomsky puts it, we “grow” into language.
There is also evidence that reading evolved to engender physical/mental pleasure in the brain. It embraced the letter’s symbolic nature, which over time humans simplified from pictographs to letters, making reading comprehensible and economical. Our twenty-six-letter phonetic alphabet (start date: Greece, 750 B.C.E.) allows for millions of combinations, though its basic representations are mere “fragments of sound and meaning.”
The alphabet’s mosaic cast reminds us that reading is experiential, an interplay: From the fragments of text-sound we encounter, we “defragment,” that is, reorganize and recombine, ideas, images, and emotions. Every time the brain connects the visual cortex to other regions where text stirs sense and memory, neuronal activity doubles or triples. Reading brings new thought, and new thought, it is believed, heightens consciousness and expands the mind. That’s why the bookish are “brainy.”
Wolf’s and Dehaene’s studies offer three core postulates about the brain and language. First, the brain’s organizational efficiency determined what we would read. Dehaene describes how script is composed of simple two-, three-, and four-lined shapes, which symbolize sound and image (S is snake-like and sounds itself in shhh or shiver). Alphabets are highly adaptive; they are easy to process and recall because letters are few, their sounds limited and distinct. Dehaene calls the brain’s specialized region that recognizes writing the “letterbox.” The letterbox, located in the left hemisphere, or “the seat of language,” organizes and disperses the bits of what we read to the temporal and frontal lobes where sound and meaning are encoded. There the text is deciphered, which—if it wasn’t written by Jacques Derrida—happens in one-fifth of a second.
Second, the brain integrated several brain functions to facilitate high levels of comprehension. An fMRI scan of a reader reveals visual, auditory, syntactic, and semantic areas alive with activity. Text recognition is electric, energetic, holistic, whether the content is “She sells seashells down by the seashore” or Wittgenstein’s gem, “Philosophy is a battle against the bewitchment of our intelligence by means of language.” Words and concepts must make sense, and sense, as Dehaene puts it, “requires multiple cerebral systems to agree on an unambiguous interpretation of [the] visual input.” The fact that English contains “bear” as noun and verb, as animal and burden, is proof that the brain is encoded to receive, parse, and solve language conundrums.
Third, reading and writing combine with personal memory in the brain to produce the most lasting effect: meaning. Here’s a topic neuroscientists have only begun to explore. Meaning involves collaboration between text and reader, between reader and memory. I admire Wolf’s assertion that “the secret at the heart of reading [is] the time it frees for the brain” to develop deep thoughts; but in her study I see neither proof (how might this be tested?) nor adequate discussion that deep thought originates from the exchange between reading and contemplation. I do agree reading intensifies our engagement with the world. As a nonfiction writer, I bounce from reading others’ texts to creating my own. This bilateral movement is the essence of literacy. I can’t imagine one without the other.
But meaning takes time; its voyages are global. Like the brain’s plasticity, meaning is highly modifiable. Poems, paintings, films, music—all strike us differently as individuals and at different times in our lives. Meaning crosses and integrates brain functions, stimulating uncertainties language loves to field but brain science seems ill-equipped to study: How might science account for wit, irony, black humor, postmodernism in the brain? Such tropes bypass the limbic system, the pathways of emotion and memory, and register confusion. But on second thought, the tropes pass through the limbic system and steam the pots of ambiguity. In “The Idea of Order in Key West,” Wallace Stevens’ recondite language sidles its way through our doubt, our curiosity, our love of wordplay and revelation, a course as much emotional as intellectual.
To date, neuroscientists have scanned the brains only of text/book readers, those raised on stable forms like the Bible or the short story. How will new text-digitizing devices, the big cloud of hypertext, and the “field” display of Web pages and phone screens reinvent reading? Will screens alter content, redirect meaning? Will they, despite allowing more people to read, make reading (and us) superficial? And if we don’t read as immersively as we did before, does that mean we’ll be less “brainy”?
This is the noisy debate we get from Nicholas Carr and Kevin Kelly, a pair of cultural and lexical polemicists. Carr’s overview, The Shallows: What the Internet Is Doing to Our Brains, a book about the Net and its distractions, is rife with worry: “Over the last few years I’ve had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory.” That something is his reading composure, dislocated by the Net just in the last three years.
Online today, Carr can’t sit still—the Internet’s disruptiveness begets suffering. “The deep reading that used to come naturally has become a struggle.” Ditto for most everyone online, he says. As a knowledge base, print has fallen into fourth place, behind TV, computer, and radio. Carr cites colleagues who have abandoned books and long articles entirely in favor of blips of text, or text sculpted into acoustic and visual space. Some believe this a boon: one former magazine writer says the Net has made him smarter: “More connections to documents, artifacts, and people mean more external influences on my thinking and thus on my writing.”
Reading a Web page is unlike reading print. Call the former “digital immersion,” which drops linearity in favor of diversion as readers scan and scroll over one page and skip to other pages. Such users are online hunters, not rapt readers. Studies show that only one-sixth of the text on a Web page is actually read. Carr says the Net’s cognitive meddling and rewiring are making us stupider, and his scientific evidence is impressive. (Though he doesn’t seem any stupider for writing his book, despite being a good example of an online hunter.)
The Shallows’ major flaw is that Carr does not distinguish types of readers: The devourer of books is the only type offered. There’s little—from any of these books—about the different goals readers have when they approach text and hypertext; indeed, many digital enthusiasts are becalmed by the Web’s total access. I, for one, am glad that the dizzying microfiche reader has gone the way of the dodo.
Here are three easily deflatable assumptions about the Reader: that a person becomes a “deep” reader only via “high” literature; that an individual’s online and offline reading are measurably equivalent; and that textual immersion leads to intelligence. The knowingness of athletes, jazz musicians, and birders is wildly different from one discipline to another—and none relies much on the skill of reading.
For Kelly, a founding editor of Wired, interacting with screens (already some 4.5 billion are up) will, like the printing press, change the way we read and write for the better. The “interconnected cool, thin displays” of screens have “launched an epidemic of writing that continues to swell.” Since Web pages privilege text—still the dominant form of communication, in part because text is easy to store and requires comparatively little bandwidth—they are now proliferating at the rate of several million a day. Screens are “very visual, merging words with moving images.” Though our eyes are taxed, our bodies are engaged. Fingers, hands, voice, brain, all are enlivened by the sensorium of a finely hypertexted page.
Building on the bi-directionality of the Internet, screens will soon “follow our eyes” and attach us to “where we gaze.” The screen and its links will absorb us more—and will absorb more of us—as we scroll, glance, read. What’s more, Kelly states, “books were good at developing a contemplative mind. Screens encourage more utilitarian thinking.” If contemplation recalls deep reading, utilitarian reading is driven, like a Geiger counter, to find the glowing chunks. New modes of reading mean our minds are indexing info in real time. The benefit for Kelly: “In books we find a revealed truth; on the screen we assemble our own truth from pieces.”
As yet we don’t know the cognitive differences between a concentrating reader of the page and an indexing reader of the screen. It may be that the traditional written/oral narrative evolved as a structure, which the brain selected for and directed back at its users, in essence, to test the brain’s linear processing ability across multiple cognitive levels. Such forms are not innate; they are the brain’s evolved strategy for stabilizing attention, and publishers’ accustomed way of delivering economically viable structures to audiences.
Carr and Kelly agree on one thing that the academics Wolf and Dehaene mostly avoid: Reading has civilized us. But I’m not so sure I agree with them, especially when I recall those Brahms-loving guards at Auschwitz. Better put, I think our wiring allows just enough plasticity to ensure brain, as opposed to human, progress.
“When a new cultural invention finds its neuronal niche,” Dehaene writes, “it can multiply rapidly and invade an entire human group.” As for civilization, minds will believe anything they read or hear or see because variability, our evolutionary necessity to be altered, is innate; on the other hand, our ability to detect the deceptions of the worst of people is—how shall I put this?—less innate. It seems our so-called civilizing traits are both learned and unlearned.
A final thought: If the brain adapted the inscription of words on tablets into a reading system that we learned with exponential quickness and ever-deepening comprehension, then why won’t the brain adapt the frenzy of the Net into a sensory system that we will learn with equally exponential quickness and ever-deepening comprehension?