Most of the questions that engage media researchers and popular observers of the media focus only on one dimension of our media environment: the content of media messages. Typical concerns center on how people (often children) react to what they are exposed to through various media; how institutional, economic, and political factors influence what is and is not conveyed through media; whether media messages accurately reflect various dimensions of reality; how different audiences interpret the same content differently; and so on. These are all very significant concerns, but content issues do not exhaust the universe of questions that could and should be asked about the media.
Joshua Meyrowitz, “Medium Theory”
Joshua Braun: Welcome to Media, Technology & Culture. This is the last in our series of installments on what makes new media new. That clip comes from Professor Joshua Meyrowitz's essay "Medium Theory." And it reminds us that to truly get a grip on how media technologies intersect with culture, we have to consider more than just the content they carry.
This is a big project. One that’s going to stretch beyond the current series of installments. So far, we’ve explored how elements of the 1960s counterculture shaped the language we use to talk about contemporary digital technologies. And we’ve looked at similarities in the ways people experience and react to emerging forms of media that seem to exist over much larger time scales, and across a surprising array of media technologies. It’s a phenomenon that suggests that at least some of our reactions have less to do with the specifics of each particular technology than with the deeper hopes and concerns of our culture popping up time and again, the way themes from Shakespeare or Gilgamesh do.
So, how do we make sense of new media? How can we guard against our temptation to assume, our implicit sense, even, that everything in our experience of today’s emerging digital media is brand new and unprecedented? And how do we do that while also appreciating the things that really are new or unique to our current cultural context and moment in history? Here’s Meyrowitz again:
A handful of scholars—mostly from fields other than communications, sociology and psychology—have tried to call attention to the potential influences of communication technologies in addition to and apart from the content they convey. I use the singular “medium theory” to describe this research tradition in order to differentiate it from most other “media theory.” Medium theory focuses on the particular characteristics of each individual medium or of each particular type of media. Broadly speaking, medium theorists ask: What are the relatively fixed features of each means of communicating and how do these features make the medium physically, psychologically, and socially different from other media and from face-to-face interaction?
Joshua Meyrowitz, “Medium Theory”
Meyrowitz's notion of medium theory offers one proposed solution to our problem of evaluating what's new about new media: pay attention to the particular combinations of features that make them up. One hint that this is culturally significant is the way new technological products get marketed and reviewed.
Paul du Gay and his coauthors in the book Doing Cultural Studies, for example, point to the way in which the Sony Walkman, upon its introduction in the late 1970s, was sometimes described as a very small and more mobile version of the stereo tape decks people listened to in their homes. In other words, people understood the portable music player as a mashup of features from technologies they were already familiar with. It was like the tape deck they already used, but with the additional qualities of being small and easy to carry with you on the go, not unlike the pocket transistor radios that had been popular since the 1950s. And as du Gay and his coauthors point out, we almost always come to grips with new devices, technologies, and pieces of culture by comparing them to what went before, as mashups of, or departures from, the features of devices we already know. A new device is like Gadget A, but with a few aspects of Gadget B, but it also has feature C which is all new.
In fact, in writing this, I had a nice reflective moment when I realized that some of the folks listening may not be old enough to remember what the original Walkman was. I instinctively wanted to explain it as being a portable music player that was sort of like an iPod, only it played cassette tapes, not digital files. Which is totally du Gay's point here. Whether we're looking forward and figuring out the latest gadget in terms of what went before, or looking backward and calling the Walkman the iPod of the 80s, or talking about the telegraph as the Victorian Internet, we're lost without comparisons. Spend a few minutes reading about the Walkman's introduction, incidentally, and you'll get a tremendous feeling of déjà vu regarding the iPod. The selling points for the original Walkman are nearly identical. Things like the ability to make your own track mixes, take your music with you on the go, and oh, check out these nifty new headphones we designed for it. Apple clearly took more than a few pages from Sony's playbook in introducing the iPod.
And it’s not like this is even unique to media technologies. Back in the 1990s and early 2000s when people were hyping the hydrogen fuel cell, for example, I can remember reading articles about how technology firms were struggling to make the price and packaging of fuel cells similar to that of gasoline-powered generators. Not just so that they might be more affordable to businesses and homeowners, but so they’d be able to market the things. Engineers knew that if they had to explain what a fuel cell was, they’d have a hard time selling any. But if you could offer it to consumers as a deluxe model generator you could jack into an existing market, the same way Apple did when people began dumping their Walkmans for the new iPod.
One point that emerges in this discussion is that the people and companies that make gadgets have to concern themselves with a lot more than gadgetry. It’s never enough to worry about whether a tool works from a technical standpoint. You also have to think about how people, from end users to government regulators, understand the tool. And that includes everything from how much it costs to whether people will want to be seen with it on a date. The stereotype of engineers who make brilliant technical decisions that turn out to be nightmares for their users is fun, but often misleading. Engineers and designers succeed by balancing many different interests and managing trade-offs that go beyond the strictly technical ones. This impossibility of separating social and technical concerns is what sociologists and historians are talking about when they use the wonky term “sociotechnical systems,” an idea we’ll explore more fully in the coming weeks.
For now, it’s enough to remember the point that media technologies and devices, any technologies and devices for that matter, are cultural products. New gadgets demand comparisons with what went before, and for this reason they play on and with our expectations. To better see what I mean, let’s compare this to our experience of media content.
Take an agreed-upon cultural form like movies, for example. Filmmakers, film critics, and film scholars all love to talk about movie genres. The western, the horror movie, the rom-com, the spy thriller. In the over a century we've been watching movies, both filmmakers and audiences have grown very accustomed to the tropes of different sorts of films. The duel between gunfighters and the fistfight aboard a moving train are so well-established in Westerns, for example, that many movies use the same sequence of shots to set up the action, knowing that all it takes is a few quick visual cues for the audience to catch on to what is happening. This allows the director to ratchet up the suspense, and to mess with our heads.
We as audiences have gotten so good at reading the visual shorthand of movie genres, at thinking three steps ahead based on recognizable shots and plot devices, that for decades already, much of filmmaking has been less about establishing these expectations and more about tinkering with them, either by selectively violating established aspects of a genre, or by mixing and matching between different genres altogether. The movie Skyfall, for example, isn’t a Western at all. It even begins in Istanbul, which isn’t exactly cowboy central. But none of that stops us from realizing at the moment James Bond ends up on top of the train that we’re about to see a high-speed fistfight with the bad guy.
When we watch a movie, in other words, we understand it based on comparisons with all the movies we’ve ever seen before. Some of this we do consciously, like when we get an explicit reference or a joke in one movie that relies on our knowledge of another film. But a lot of these comparisons and contrasts we process almost by reflex. We get antsy when the teenager in the horror movie decides to check out the noise in the basement. And if we stop to think about it, we know it’s because this leads to a bad outcome in so many other movies we’ve seen.
But while we’re watching, we may be so caught up in the moment that we’re not quite aware of all the associations we’re drawing on a less-conscious level. All this is, I’d contend, not so different from what’s going on in our experiences of new technologies. When we first encountered the Walkman or the iPod, we may or may not have stopped to consciously compare it to similar devices of the past. But, to understand what we were looking at, we instinctively drew connections to gadgets we knew.
In software design, there's even a whole interface design strategy called skeuomorphism that relies on these sorts of connections, purposefully creating new technologies that draw on your associations with existing gadgets, and particularly with real-world objects. Think, for example, about your computer's "desktop" and "trash can." Or the icon for the mail program on your phone that resembles an envelope or a postage stamp. Think, too, about more subtle allusions to familiar objects. Like the windows on your computer screen that cast shadows on one another like a stack of papers, allowing you to easily see which one is on top. Or the way the ereader app on a tablet turns to the next page when you flip the device to one side as you would the page of a book. Think about the fact that ebooks have pages at all. We're constantly relying on a finely-developed sense of what's alike and what's different to find our way around the technological world.
These comparisons often seem unconscious, intuitive. But they rely on a huge backlog of experience, much as our ability to follow film narratives does. It’s a sense that engineers, designers, and marketers are constantly trying to tap into in order to make devices intuitive while at the same time conveying a sense of newness, just as a filmmaker manipulates and occasionally violates our expectations of different movie genres to create suspense, surprise, or humor.
Much of what I’ve described so far about the logic of comparison and reuse comes pretty close to an idea from media scholars Jay David Bolter and Richard Grusin. They painstakingly document the way in which each successive cultural medium draws on forms of media that went before. So, for example, contemporary 3D video games borrow from visual styles developed for television and film, which themselves borrowed styles from painting. Video games create lifelike photorealistic graphics using techniques like linear perspective that were first developed in Renaissance painting, and reworked in successive mediums from the printing press all the way down to Grand Theft Auto and Minecraft. Newspaper websites mimic the layout and feel of print newspapers. And web designers have borrowed all sorts of graphic design conventions from printed materials, some of which date back to stuff manufactured on early printing presses in the 1400s. Things which themselves borrowed design ideas from hand-copied manuscripts. Meanwhile, contemporary television news and printed books are beginning to make use of graphics inspired by web design.
All of which is to say that both content creators and audiences figure out what to do with and how to make sense of a medium by comparing and contrasting it with other media they know, and by drawing from the repertoire of techniques and skills used in those other media. This sort of constant remixing, what Bolter and Grusin call "remediation," fits in well with the ideas we visited earlier from du Gay's book Doing Cultural Studies. But it's not quite the same thing. As you may have noted from my description, remediation is mostly about the content of emerging media forms, not the technologies underlying them.
And if there's one criticism I've heard of Bolter and Grusin's main book on the subject, it's that it focuses carefully on the similarities that pop up between content in different media, between the look of USA Today's printed edition and its website, for example, but pays not nearly as much attention to the processes by which those similarities came to be. In a lot of cases designers and developers, or the managers overseeing their work, may simply have been so steeped in existing media formats that they recreated aspects of them in a new medium without even really thinking about it.
But we can also assume, and other scholars have demonstrated in many cases, that at some point a lot of work went into creating the technologies that could in the first place faithfully recreate older techniques. We don’t just have technologies for putting text online, in other words, we have technologies that put text online in a way that resembles the morning paper. Developers don’t just create game engines, they make game engines that mimic Renaissance painting. All of which again hammers home the point that media technologies are, in themselves, cultural products, not just conduits for content that expresses cultural ideas.
So, to sum up our discussion so far, it’s important to think about media technologies, not just media content. We need to find a way to tease apart what’s new, interesting, and culturally significant about contemporary media technologies, without falling into the trap of assuming that they’re different from everything that came before. And, paying attention to the particular features of different mediums and devices, how similar functionality is progressively developed and mashed up in new combinations, seems like a pretty promising approach.
But there are also a lot of different directions you can go with this idea. You can focus, for instance, on how particular features of technologies interact with the behavior of individual people. To give one example, Jeff Hancock, Jennifer Thom-Santelli, and Thompson Ritchie, a group of social psychologists and communication researchers, did a study in which they looked at a list of particular features of communication technologies that people use to interact with one another. These included whether the technology was used to communicate in real-time or not, whether it typically kept a record of the conversation, and whether it required you to be in the same room with the person you were talking with. For example, a phone conversation happens in real-time, but an email exchange usually doesn’t. Instant messaging and email conversations may leave a record of what was said, but phone calls typically don’t. And most media technologies differ from face-to-face communication in that they let you converse with people who aren’t physically near you.
The researchers compared this list of features to the conditions under which people most often tell lies. For instance, people typically lie more often when they’re conversing in real-time, because awkward situations crop up more spontaneously and they have to be resolved more quickly. Like when someone suddenly asks you if you’d like to get a cup of coffee tomorrow, or whether you like his new jacket. It’s no surprise that people also lie more often when their conversations aren’t being recorded, since they’re less likely to be held accountable. And it’s also a lot easier to be deceptive when you’re not in the same room with somebody. Telling your parents you’re reading your physics textbook when you’re actually looking at Yik Yak, or saying you’re on your way somewhere when you’re actually just getting into the car, are both things that might work over the phone or by text message, but not in person.
And what Hancock and his fellow researchers found was that if you added up the number of deception-friendly features a particular media technology had, you got a pretty good picture of how often it was used to tell lies. Phone conversations happen in real-time, at a distance, and leave no record of the conversation. And people lied on the phone more often than over any other medium the researchers studied. Email exchanges happen slowly and keep a record of what's said, and people lied least over email. Meanwhile, face-to-face conversation and instant messaging fell somewhere in between these two extremes.
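If it helps to see the logic of this feature-counting approach laid out, here's a toy sketch. This is not the researchers' actual statistical analysis, and the feature names and data structure are my own illustration; it just shows how tallying a medium's deception-friendly features (real-time, recordless, at a distance) produces the ordering described above.

```python
# Toy illustration (not Hancock et al.'s actual analysis): score each
# medium by counting its deception-friendly features -- synchronous
# (real-time), recordless (leaves no transcript), and distributed
# (parties aren't physically co-present) -- then rank media by score.
MEDIA_FEATURES = {
    "phone":        {"synchronous": True,  "recordless": True,  "distributed": True},
    "face-to-face": {"synchronous": True,  "recordless": True,  "distributed": False},
    "instant msg":  {"synchronous": True,  "recordless": False, "distributed": True},
    "email":        {"synchronous": False, "recordless": False, "distributed": True},
}

def deception_score(features):
    """Count how many deception-friendly features a medium has."""
    return sum(features.values())

def rank_media(media=MEDIA_FEATURES):
    """Return media sorted from most to least deception-friendly."""
    return sorted(media, key=lambda m: deception_score(media[m]), reverse=True)
```

Here the phone scores three out of three and email only one, so the phone ranks first and email last, which matches the pattern of lying the researchers reported.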
You might think of a study like this one by Hancock and his colleagues as sitting at one end of the spectrum when it comes to comparing the features of different media technologies and how those intersect with our social world. We could call this the micro end, in which researchers are looking at the use of technologies by individuals engaging in discrete conversations. They might repeat these studies many times to gather enough data to make predictions, but they’re still ultimately interested in the psychology of individuals and small groups.
At the other end of the spectrum, the macro end so to speak, are researchers whose philosophy might better be summarized as “go big or go home.” These folks are the group Meyrowitz is largely referring to when he talks about medium theorists. They include past scholars like Harold Innis, Marshall McLuhan, and Walter Ong, but also contemporary scholars like Meyrowitz himself, who are interested in what effects the various features of particular media technologies have on large groups, whole societies, even the course of history.
To give a prominent example, prior to the invention of the written word, interactions between people and circulation of information were confined to what could be shared in face-to-face communication. This meant it was hard to organize a group of people much larger than your immediate social circle. Because unless everyone knew and could keep track of everyone else, things would begin to go badly. So people lived in little villages that were small enough both geographically and in terms of population to keep themselves going with only face-to-face interactions. To the extent that these groups had anything like a library of information, it had to be kept in the form of oral history. Which meant that people spent a lot of time and effort memorizing things and reciting them for other people to memorize, so that records of particular events would live on.
Writing, and particularly writing on surfaces like papyrus or waxed tablets that were easy to cart around, made it possible for records to be kept without the huge mental labor of memorization, and for messages to pass between people at a distance. Both of which, according to medium theorists, changed the very nature of society. They allowed people to connect with one another in social networks that extended beyond their immediate surroundings, and to organize social activities at a scale that would have been unimaginable beforehand.
Then eventually, goes the argument, you get the printing press, which makes written materials more accessible and hence even more valuable as a form of communication and organization. At least for the folks who knew how to read and write. Which tended to be mostly upper and middle class folks. Among those with access to the printed word, reading fostered greater individuality. Whereas in an oral society any knowledge you have beyond your own direct experience was dictated by what the group you were a part of knew, now, through reading, you had access to information and social contacts different from those of your neighbors.
Printing also marks a mode of addressing others that’s very different from what you had in oral societies. Mass-produced pamphlets and books, while they might be the work of a single person, were intended to be circulated to and read by many. And unlike the hand-written letters exchanged before the printing press, most of which were passed between people who knew one another, printed materials could reach an audience without total reliance on people’s networks of social contacts. Sure, you might borrow a book from a friend or read an article pointed out to you by someone you worked with, but you’d also read books and newspapers you picked up on your own.
Eventually, electronic mass media like the radio and television come on the scene. Which according to medium theorists put society into a weird collective mental space. They were media that seemed like older styles of oral communication in that they had many of the features we associate with gossip and face-to-face interaction. But at the same time, the "community" that the President or the anchor of the evening news addresses on live TV is much larger than a little village. In this sense, they're mass media, not unlike books and newspapers before them.
What's more, State of the Union addresses and evening newscasts are scripted. They're underpinned by the written word in a way that's also more similar to the preparation of a book than to the sorts of exchanges you might have witnessed in an ancient oral culture. The odd way in which these technologies simultaneously evoke conflicting associations with oral society on the one hand, and older forms of mass media on the other, is what Marshall McLuhan originally meant when he talked about the global village.
So even though radio and then television, with their announcers and talking heads, may have been dominant forms of media for much of the last century, this wasn't quite the resurgence of oral society. For this reason, literary scholar Walter Ong called the forms of rhetoric ushered in by technologies like radio, TV, and the tape deck "secondary orality."
Finally, medium theorists have had to grapple with our newest new media, the Internet and social media, for example. One of the things that’s been remarked on frequently about this environment is the way in which messages once again spread by passing from person to person, rather than from the sorts of single, centralized sources that characterize the heyday of mass circulation print newspapers, or the broadcast networks, for example. None of which is to say that older forms of mass media have gone away, or that we don’t encounter a lot of their content online.
But digital media tools also allow ordinary people to create a lot of the stuff that’s spreading from person to person in this way, from fan fiction to YouTube remixes. More than a few people have characterized the way people generate and spread content collaboratively using digital tools as a return to the way news, information, and culture were created and spread before the rise of the book and other mass media. And in fact, given how briefly mass media have been around in the grand scheme of things, in comparison to the longer arc of human history, more than one media historian has remarked that if anything social media may be a return to the norm. It was, it turns out, the age of mass media that was the weird exception.
One of the more popular names for this notion comes from professors Lars Ole Sauerberg and Thomas Pettitt, who came up with the memorable phrase "the Gutenberg Parenthesis." As Pettitt describes it,
…as in a sentence. We have been through our sentence. The sentence which is the history of the media has been interrupted by the age of print, by a printing, a book phase. And that insofar as we are leaving that book phase, we are going back. We are going back to the situation before that. Without any implications that the period in between was a waste of time or going in the wrong direction, or misguided. It's not parenthesis in any pejorative sense. It's like in a sentence— If you're speaking a sentence or writing a sentence, you interrupt for a while with a second thought to add to your first thought. You then resume the first thought at the end of the parenthesis, and the sentence goes on. But that sentence will be irrevocably changed by what has happened.
Thomas Pettitt, “The Gutenberg Parenthesis” at 19:28
Tom Standage, in his book Writing on the Wall, has also pushed a popularized version of the Gutenberg Parenthesis. And you’ll find similar ideas in writing by a range of scholars and journalists, from law professor Lawrence Lessig to tech writer Nicholas Carr.
Of course, this notion that we can divide all of human history into roughly three periods, an oral society epoch, followed by the printing press and its mass media descendants, and finally by an era of digital media that reversed many of the changes wrought by the mass media… Well, it all sounds like a gross oversimplification.
For example, we've already seen that some mass media, like radio, started out in a relatively participatory fashion, not unlike what we associate with the Internet and social media today. And for their part, Sauerberg and Pettitt both suggest that their idea of a Gutenberg Parenthesis was intended to be provocative, meant not as an entirely nuanced explanation but as a way of shaking people who grew up with books, radio, television, and summer blockbusters out of their comfortable assumptions about what was old and what was new.
Another solution to this problem of how to break us out of problematic assumptions about what’s old and what’s new was posed by communication historian Ben Peters, who proposes an idea he dubs “renewable media.” Like Tim Wu, who we encountered in the last installment, Peters sees a pattern to the way new media emerge and evolve over history, though he avoids painting it in terms of an inevitable cycle that rotates from invention to commercialization to monopoly and back to the invention of the next thing. According to Peters,
New media can be understood as emerging communication and information technologies undergoing a historical process of contestation, negotiation, and institutionalization.
Benjamin Peters, “And Lead Us Not into Thinking the New is New: A Bibliographic Case for New Media History”, p18 [via Peters’ website]
For Peters, similar to Wu, things start with invention and move toward commercialization, which ultimately leads to media technologies becoming mundane, taken-for-granted channels and gadgets. The wallpaper of our existence, so to speak. But there are some important differences between Wu’s and Peters’ arguments. There are a lot of distinctions we could make, in fact, but here are a couple.
First, Peters notes that invention often really doesn’t look like much. In the moment, things we later regard as important new media often seem like predictable improvements to older technologies. Radio, to give a now-familiar example, was originally thought of as a way to make a telegraph without stringing wires. And later, when it became possible to transmit voices, it was at times spoken of as a sort of wireless party line telephone.
Likewise some of the key networking technologies that led to the Internet, while impressively inventive, were first conceived of mostly as ways to let more than one programmer work on a mainframe computer at the same time. But perhaps most interestingly, Peters points out that a particular medium can be new more than once.
Each medium may have a few basic ideas that take many forms.
Benjamin Peters, “And Lead Us Not into Thinking the New is New: A Bibliographic Case for New Media History”, p22 [via Peters’ website]
The telegraph provides a nice example of what he’s getting at here. There’ve been lots of instances over history of schemes for sending a message instantaneously over a distance. Smoke signals, for example, may not rely on electricity, but like Morse Code they’re a series of pulses (in this case puffs of smoke) with a meaning agreed upon by a sender and a receiver.
And, as with the electric telegraph, if you wanted to send your message over a longer distance, you could extend the range of the system by chaining together multiple senders and receivers. In ancient China, for example, a series of relay stations along the Great Wall famously used smoke signals to pass messages over hundreds of miles, a feat that could reportedly be managed in the span of just a few hours.
And smoke signals are just one example of a variety of systems, sometimes called optical telegraphs, a category that also includes other forms of code and sign language transmitted visually between senders and receivers separated by great distances. Specialized flags, hand gestures, lanterns, and torches have all been used in similar ways.
In the 18th century, for example, Napoleon used a system of mechanically operated arms called semaphores to transmit messages between a chain of senders and receivers stationed in towers. These so-called "semaphore lines" stretched for over thirty-one hundred miles. And while messages didn't typically need to be transmitted across this entire expanse, you could, say, get a message from Paris to the French border in a matter of three, maybe four hours.
And if you want to move forward in history from the electric telegraph, that’s interesting, too. Engineers eventually moved away from using combinations of long and short electrical pulses to represent each letter of the alphabet as it was transmitted over the wire. Instead, they used a similar but distinct system in which letters were represented by electrical pulses and pauses, moments when no electrical pulse was being sent. This transition from Morse’s brief and long pulses, called dots and dashes, to pulses and pauses of equal length, which came to be known as marks and spaces, made the telegraph easier to operate with automated equipment like keyboards.
Marks and spaces are also easily represented with ones and zeros. And so the system of codes developed for the telegraph was adapted once again to allow computers to represent letters of the alphabet and work with input from a keyboard. And today, as historian Carolyn Marvin points out, when we send any sort of text over the Internet, or fire off a text message to a friend, our computers and phones, and the networks of technologies that connect them, are essentially just acting as really fast automated telegraphs.
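To make that lineage concrete, here's a minimal sketch of the principle at work. The helper names are mine, and modern systems layer far more on top of this, but the core idea is exactly as described: each character gets an agreed-upon, fixed-length pattern of on/off signals. The patterns shown are standard ASCII.

```python
# A letter travels as a fixed-length pattern of marks (1) and spaces (0).
# Modern text encodings like ASCII work on the same principle as the
# telegraph's mark/space codes: every character has an agreed-upon
# pattern of equal-length on/off signals.
def to_marks_and_spaces(text):
    """Encode each character as an 8-bit pattern of '1's and '0's (ASCII)."""
    return [format(ord(ch), "08b") for ch in text]

def from_marks_and_spaces(patterns):
    """Decode 8-bit mark/space patterns back into text."""
    return "".join(chr(int(bits, 2)) for bits in patterns)
```

So `to_marks_and_spaces("HI")` yields `['01001000', '01001001']`: two agreed-upon pulse patterns that any receiver sharing the code, whether a teleprinter or a phone on a cellular network, can turn back into letters.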
Each of these technological systems that was such a big deal in its time, smoke signals, the semaphore line, the telegraph, the computer, and now the Internet, turns out to be a version of the same idea mashed up with the latest forms of automation. This is what Peters means when he says that media technologies are not new, but renewable. Each successive wave of a particular technological idea is a response to historical conditions, to the needs of the moment. And each will bear the stamp of the particular social and political context in which it occurs, whether it's Napoleon's French empire, or the wake of the 1960s counterculture. Once again, we can see that media technologies are cultural products. Each time a media technology is renewed, there'll be skirmishes and debates over whether and how each should be developed, used, commercialized, and regulated. And these debates will get settled a little bit differently, sometimes a lot differently, in each case.
Figuring out how to tease apart the social, political, and historical context surrounding particular media technologies, and whether certain technologies are somehow political in their own right are subjects we’ll be turning our attention to over the next couple weeks. For now it’s enough to admit that despite the patterns we can find in history, getting a handle on the new media of our own time is tricky, what with the need to pay attention to all the social complexity. Which is why Peters gives us one other definition for new media.
New media are media we do not yet know how to talk about.
Benjamin Peters, “And Lead Us Not into Thinking the New is New: A Bibliographic Case for New Media History”, p22 [via Peters’ website]
Hopefully, we’re learning.
Thanks for listening. This installment included a clip of a lecture by Thomas Pettitt from MIT’s Comparative Media Studies Program, and drew heavily on the scholarship of Joshua Meyrowitz, Ben Peters, Paul du Gay, Stuart Hall, Linda Janes, Anders Koed Madsen, Hugh MacKay, Keith Negus, Jeffrey Hancock, Jennifer Thom-Santelli, Thompson Ritchie, Jay David Bolter, and Richard Grusin. And I’d like to extend a special thanks to Joshua Meyrowitz and Ben Peters for reading passages from their essays just for us.
As always you can find a complete bibliography for this installment, including music credits, on our course website.
The original recording of this lecture is available at Culture Digitally.