PRAISE FOR HOW TO SEE THE WORLD
“Traveling to the moon and back, across continents and between eras, Mirzoeff nimbly and effectively narrates the visual regimes that regulate what we see, how we see, and what remains completely hidden from view. Indispensable reading for the twenty-first century.”
—Jack Halberstam, Professor of American Studies and Ethnicity, Gender Studies, and Comparative Literature, University of Southern California, and author of The Queer Art of Failure
“This book will transform the way you see the world—and how you want to change it. Eloquently written, it offers new insights on everything from selfies as a new global art form to impressionist art as evidence of global climate change. It is simply brilliant.”
—Wendy Hui Kyong Chun, Professor and Chair, Modern Culture and Media, Brown University
“Nicholas Mirzoeff connects images and practices of looking to political struggles, up to and including the survival of the planet. In clear, powerful prose, Mirzoeff shows how visuality is at the very center of struggles for social justice, at moments of both domination and resistance. An essential introduction for anyone who wants to understand visuality today.”
—Jonathan Sterne, James McGill Professor in Culture and Technology, McGill University, and author of MP3: The Meaning of a Format
“A lucid and accessible introduction to how images shape our lives and effect change, political and social . . . [How to See the World] offers numerous insights into ‘reading’ the significance of images in the world today and is filled with intriguing, insightful nuggets. A challenging and provocative inquiry into how we see the world.”
—Kirkus Reviews
“Beautifully written, with a broad sweep of examples that speak to the power of images and encourage us to see and think in new ways, How to See the World is the go-to book for scholars and students in fields ranging from political science and anthropology to art history.”
—Suzanne Blier, Allen Whitehill Clowes Chair of Fine Arts and of African and African American Studies, Harvard University
Also by Nicholas Mirzoeff:
An Introduction to Visual Culture
The Visual Culture Reader
The Right to Look
Copyright © 2016 by Nicholas Mirzoeff.
Published by Basic Books,
A Member of the Perseus Books Group
First published in the United Kingdom by Pelican Books, an imprint of Penguin Books. Penguin Books is part of the Penguin Random House group of companies whose addresses can be found at global.penguinrandomhouse.com.
All rights reserved. Printed in the United States of America. No part of this book may be reproduced in any manner whatsoever without written permission except in the case of brief quotations embodied in critical articles and reviews. For information, contact Basic Books, 250 West 57th Street, New York, NY 10107.
Books published by Basic Books are available at special discounts for bulk purchases in the United States by corporations, institutions, and other organizations. For more information, please contact the Special Markets Department at the Perseus Books Group, 2300 Chestnut Street, Suite 200, Philadelphia, PA 19103, or call (800) 810-4145, ext. 5000, or e-mail special.markets@perseusbooks.com.
Designed by Jack Lenzo
Library of Congress Control Number: 2015956107
ISBN: 978-0-465-09601-5 (e-book)
For Kathleen, as ever
CONTENTS
Introduction How to See the World
Chapter 1 How to See Yourself
Chapter 2 How We Think About Seeing
Chapter 3 The World of War
Chapter 4 The World on Screen
Chapter 5 World Cities, City Worlds
Chapter 6 The Changing World
Chapter 7 Changing the World
Afterword Visual Activism
Further Reading
Acknowledgments
List of Illustrations
Notes
Index
INTRODUCTION
HOW TO SEE THE WORLD
Figure 1. NASA, Blue Marble
In 1972, the astronaut Jack Schmitt took a picture of Earth from the Apollo 17 spacecraft; it is now believed to be the most reproduced photograph ever. Because it showed the spherical globe dominated by blue oceans, with intervening green landmasses and swirling clouds, the image came to be known as Blue Marble.
The photograph powerfully depicted the planet as a whole, and from space: no human activity or presence was visible. It appeared on almost every newspaper front page around the world.
In the photograph, Earth fills the picture almost to the edge of the frame. It dominates the picture and overwhelms our senses. Since the spacecraft had the sun behind it, the photograph was unique in showing the planet fully illuminated. The Earth seems at once immense and knowable. Taught to recognize the outline of the continents, viewers could now see how these apparently abstract shapes formed a lived and living whole. The photograph mixed the known and the new in a visual format that made it comprehensible and beautiful.
At the time it was published, many people believed that seeing Blue Marble changed their lives. The poet Archibald MacLeish recalled that for the first time people saw the Earth as a whole, “whole and round and beautiful and small.” Some found spiritual and environmental lessons in viewing the planet as if from the place of a god. Writer Robert Poole called Blue Marble “a photographic manifesto for global justice” (Wuebbles 2012). It inspired utopian thoughts of a world government, perhaps even a single global language, epitomized by its use on the front cover of The Whole Earth Catalog, the classic book of the counterculture. Above all, it seemed to show that the world was a single, unified place. As Apollo astronaut Russell (“Rusty”) Schweickart put it, the image conveys
the thing is a whole, the Earth is a whole, and it’s so beautiful. You wish you could take a person in each hand, one from each side in the various conflicts, and say, “Look. Look at it from this perspective. Look at that. What’s important?”
No human has seen that perspective in person since the photograph was taken, yet most of us feel we know how the Earth looks because of Blue Marble.
That unified world, visible from one spot, often seems out of reach now. In the more than forty years since Blue Marble, the world has changed dramatically in four key registers. Today, the world is young, urban, wired, and hot. Each of these indicators has passed a crucial threshold since 2008. In that year, for the first time in history, more people lived in cities than in the countryside. Consider the emerging world power Brazil. In 1960, only a third of its people lived in cities. By 1972, when Blue Marble was taken, the urban population had already passed 50 percent. Today, 85 percent of Brazilians live in cities, no fewer than 166 million people.
Most of them are young, which is the next indicator. By 2011, more than half the world’s population was under thirty; 62 percent of Brazilians are twenty-nine or younger. More than half of India’s 1.2 billion people are under twenty-five, and a similar young majority exists in China. Two-thirds of South Africa’s population is under thirty-five. According to the Kaiser Family Foundation, 52 percent of the 18 million people in Niger are under fifteen, and in most of sub-Saharan Africa, over 40 percent of the population is under fifteen. The populations of North America, Western Europe, and Japan may be aging, but the global pattern is clear.
The third threshold is connectivity. In 2015, 45 percent of the world’s population had some kind of access to the Internet, up 806 percent since 2000. It’s not just Europe and America that are connected: 45 percent of those with Internet access are in Asia. Nonetheless, the major regions that lack connection are sub-Saharan Africa (other than South Africa) and the Indian subcontinent, creating a digital divide on a global level. By the end of 2014, an estimated 3 billion people were online. By the end of the decade, Google envisages 5 billion people on the Internet. This is not just another form of mass media. It is the first universal medium.
One of the most notable uses of the global network is to create, send, and view images of all kinds, from photographs to video, comics, art, and animation. The numbers are astonishing: three hundred hours of YouTube video are uploaded every minute. Six billion hours of video are watched every month on the site, one hour for every person on earth. The 18–34 age group watches more YouTube than cable television. (And remember that YouTube was only created in 2005.) Every two minutes, Americans alone take more photographs than were made in the entire nineteenth century. As early as 1930, an estimated 1 billion photographs were being taken every year worldwide. Fifty years later, it was about 25 billion a year, still taken on film. By 2012, we were taking 380 billion photographs a year, nearly all digital. One trillion photographs were taken in 2014. Since there were some 3.5 trillion photographs in existence in 2011, the global photography archive grew by roughly 25 percent in 2014 alone. In 2011, there had also been 1 trillion visits to YouTube. Like it or not, the emerging global society is visual. All these photographs and videos are our way of trying to see the world. We feel compelled to make images of it and share them with others as a key part of our effort to understand the changing world around us and our place within it.
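That 25 percent figure is a rough calculation, not an exact count. As a back-of-envelope check, assuming (the figures above do not say precisely) that the archive had grown from 3.5 trillion in 2011 to roughly 4 trillion by the start of 2014:

\[
\frac{1\ \text{trillion photographs taken in 2014}}{\approx 4\ \text{trillion accumulated by then}} \approx 0.25,\ \text{or about 25 percent.}
\]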
The planet itself is changing before our eyes. In 2013, carbon dioxide passed the signature threshold of four hundred parts per million in the atmosphere for the first time since the Pliocene epoch, about 3 to 5 million years ago. Although we cannot see the gas, it has set in motion catastrophic change. With more carbon dioxide, warm air holds more water vapor. As the ice caps melt, there is more water in the ocean. As the oceans warm, there is more energy for a storm system to draw on, producing storm after “unprecedented” storm. If a hurricane or earthquake creates what scientists call a high sea-level event, like a storm surge or tsunami, the effects are dramatically multiplied. Record-setting floods have followed around the world from Bangkok to London and New York, even as other areas—from Australia to Brazil, California, and equatorial Africa—suffer unprecedented drought. The world today is physically different from the one we see in Blue Marble, and it is changing fast.
For all the new visual material, it is often hard to be sure what we are seeing when we look at today’s world. None of these changes are settled or stable. It seems as if we live in a time of permanent revolution. If we put together these factors of growing, networked cities, a majority youthful population, and a changing climate, what we get is a formula for change. Sure enough, people worldwide are actively trying to change the systems that represent us in all senses, from the artistic and visual to the political. This book seeks to understand the changing world, to help them and all those trying to make sense of what they see.
To get an impression of the distance we have come since Blue Marble, consider two photographs from space taken in 2012. The Japanese astronaut Aki Hoshide took his own picture in space. Ignoring the spectacle of Earth, space, and moon, Hoshide turned the camera on himself, creating the ultimate selfie, or self-taken self-portrait.
Figure 2. Hoshide, Untitled, selfie
Ironically, any trace of his appearance or personality disappears in this image as his reflective visor shows us only what he is looking at—the International Space Station, and below it, the Earth. Where Blue Marble showed us the planet, Hoshide wants us to see just him. It is nonetheless an undeniably compelling image. By echoing the daily practice of the selfie, the camera and the picture make space real and imaginable to us in an even more direct way than Blue Marble, but with none of the social impact of the earlier image. The astronaut is invisible and unknowable in his own self-portrait. There is, it seems, more to seeing than being in the place to see.
In that same year, 2012, NASA created a new version of Blue Marble. The new photograph was actually a composite assembled from a series of digital images produced by a satellite. From the satellite’s orbit, approximately 580 miles above the surface, the full view of the planet is not in fact visible. You have to go over 7,000 miles away before the entire globe can be seen. The resulting color-corrected “photograph,” adjusted to show the United States rather than Africa, is now one of the most accessed images on the digital photo archive Flickr, with over 5 million downloads.
We can “recognize” the Earth from Blue Marble, but only the three-man crew of Apollo 17 has ever actually seen this view, with the Earth fully illuminated, and no one has seen it since 1972. The 2012 Blue Marble is made to seem as if it was taken from one place in space, but it was not. It is accurate in each detail, yet false in that it gives the illusion of having been taken from a specific place at one moment in time. Such “tiled rendering” is a standard means of constructing digital imagery. It is a good metaphor for how the world is visualized today. We assemble a world from pieces, assuming that what we see is both coherent and equivalent to reality. Until we discover it is not.
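The geometry behind those distances can be sketched with a back-of-envelope calculation, assuming a spherical Earth of radius R of about 3,959 miles (a figure not given in the text). The fraction f of the surface visible from altitude h is

\[
f = \frac{h}{2(R+h)}, \qquad \frac{580}{2(3{,}959+580)} \approx 0.06,
\]

so the satellite sees only about 6 percent of the globe at any moment, and even from infinitely far away f only approaches one half. And because the half-angle θ of the visible disc satisfies sin θ = R/(R+h), at about 7,000 miles the whole Earth subtends roughly 42 degrees, small enough to fit within an ordinary camera lens’s field of view.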
Figure 3. NASA, Blue Marble 2012
A striking demonstration of how what seems to be a solid whole is actually a composite of assembled pieces came with the 2008 financial crash. What mainstream economists and governments alike had asserted to be a perfectly calculated global financial market collapsed without warning. It turned out that the system was so finely leveraged that a relatively small number of people who were unable to keep up with their mortgages set in motion a rolling catastrophe. The very connectedness of the global financial market made it impossible to contain what would once have been a local misfortune. The crisis showed that it is one world now, like it or not.
At the same time, “one world” does not mean it is equally available to all. Moving country for personal or political reasons is often very difficult, and partly depends on your passport. A British-passport holder can visit 167 countries without a visa. An Iranian passport, however, gets you into only 46 countries. Money, on the other hand, can move wherever it wants at the click of a keyboard. Prior to 1979, it was illegal for Chinese citizens to even possess foreign currency. Today China dominates global trade. There is globalization in theory, which is smooth and easy. And there is the uneven, difficult, and time-consuming experience of globalization in practice. The ads and the politicians tell us there is a single global system now, at least for financial affairs. Our daily lives tell us otherwise.
VISUAL CULTURE
This book is designed to help you see the much-changed and changing world. It is a guide to the visual culture we live in. Like history, visual culture is both the name of the academic field and that of its object of study. Visual culture involves the things that we see, the mental model we all have of how to see, and what we can do as a result. That is why we call it visual culture: a culture of the visual. A visual culture is not simply the total amount of what has been made to be seen, such as paintings or films. A visual culture is the relation between what is visible and the names that we give to what is seen. It also involves what is invisible or kept out of sight. In short, we don’t simply see what there is to see and call it a visual culture. Rather, we assemble a worldview that is consistent with what we know and have already experienced. There are institutions that try to shape that view, enforcing what the French philosopher Jacques Rancière calls “the police version of history,” meaning that we are told to move on, there’s nothing to see here (2001). Only, of course, there is something to see; we just usually choose to let the authorities deal with the situation. If it is a traffic accident, that may be appropriate. If it is a question of how we see history as a whole, then surely we should be looking.
The concept of visual culture as a specific area of study first began to circulate at a previous moment of vital change in the way we see the world. Around 1990, the end of the Cold War that had divided the globe into two zones, more or less invisible to each other, coincided with the rise of what was called postmodernism. The postmodern changed modern skyscrapers from austere rectangular blocks into the playful towers, full of kitsch and pastiche features, that now dominate skylines worldwide. Cities looked very different. A new identity politics formed around questions of gender, sexuality, and race, leading people to see themselves differently. This politics had less confidence in the global certainties of the Cold War period and began to doubt the possibility of a better future. In 1977, at a time of social and economic crisis in Britain, the Sex Pistols had pithily summarized the mood as “No Future.” These changes were accelerated by the beginnings of the era of personal computing, which transformed the mysterious world of cybernetics, as computer operations had been known, into a space for individual exploration, named “cyberspace” in 1984 by the science fiction writer William Gibson. Visual culture burst onto the academic scene at that time, mixing feminist and political criticism of high art with the study of popular culture and the new digital image.
Today there is a new worldview being produced by people making, watching, and circulating images in quantities and ways that could never have been anticipated in 1990. Visual culture is now the study of how to understand change in a world too enormous to see but vital to imagine. A vast new range of books, courses, degrees, exhibitions, and even museums all propose to examine this emerging transformation. The difference between the concept of visual culture in 1990 and the one we have today is the difference between seeing something in a specific viewing space, such as a museum or a cinema, and seeing it in the image-dominated network society. In 1990, you had to go to a cinema to see films (except reruns on TV), to an art gallery to see art, or to someone’s house to see their photographs. Now, of course, we do all that online, whenever we choose to. Networks have redistributed and expanded the viewing space, while often contracting the size of the screen on which images are viewed and degrading their quality. Visual culture today is the key manifestation in everyday life of what the sociologist Manuel Castells calls “the network society,” a way of social life that takes its shape from electronic information networks (1996). It is not just that networks give us access to images: the image relates to networked life on- and offline and to the ways we think about and experience those relations.
Simply put, the question at stake for visual culture is, then, how to see the world. More precisely, it involves how to see the world in a time of dynamic change and vastly expanded quantities of imagery, implying many different points of view. The world we live in now is not the same as it was just five years ago. Of course, this has always been true to some extent. But more has changed and changed more quickly than ever and, because of the global network society, change in one location now matters everywhere.
Rather than try to summarize the immense quantity of visual information available, this book offers a toolkit for thinking about visual culture. Its way of seeing the world centers on the following ideas:
• All media are social media. We use them to depict ourselves to others.
• Seeing is actually a system of sensory feedback from the whole body, not just the eyes.
• Visualizing, by contrast, uses airborne technology to depict the world as a space for war.
• Our bodies are now extensions of data networks, clicking, linking, and taking selfies.
• We render what we see and understand on screens that go everywhere with us.
• This understanding is the result of a mixture of seeing and learning not to see.
• Visual culture is something we engage in as an active way to create change, not just a way to see what is happening.
While the present day is the focus, much of this book is nonetheless historical, as it traces the roots of visual culture today, both as a field of study and a fact of everyday life. The emphasis is no longer on the medium or the message, with apologies to Marshall McLuhan (1964). Instead, the emphasis is on creating and exploring new archives of visual materials, mapping them to discover connections between what is visual and the culture as a whole, and realizing that what we are learning to see above all is change on the global scale.
The book begins by looking at the evolution of the self-portrait into the omnipresent selfie. The selfie is the first visual product of the new networked, urban global youth culture. Because the selfie draws on the history of the self-portrait, it will also allow us to explore the creation of the academic discipline of visual culture that emerged around 1990. How we see ourselves leads to the question of how we see, and the remarkable insights of neuroscience (Chapter 2). Human vision now seems like the multifaceted feedback loop that visual artists and visual culture scholars have long assumed it to be. Seeing is not believing. It is something we do, a kind of performance. What this performance is to everyday life, visualizing is to war (Chapter 3). Battlefields were visualized first in the mind’s eye of the general and then from the air by balloons, aircraft, satellites, and now drones. These views of the world are not experienced directly but on screens. So Chapter 4 looks at two examples of the creation of networked worlds: first, the view from the train and the creation of motion pictures; second, today’s ubiquitous networked digital screens. Those screens appear to offer unlimited freedom but present carefully controlled and filtered views of the world.
The key places in these networks are the global cities, where most of us now live (Chapter 5). In these immense, dense spaces, we learn how to see—and also not to see potentially disturbing sights—as a condition for daily survival. Global cities have grown up around the remains of the imperial and divided Cold War cities that preceded them. They are spaces of erasure, ghosts, and fakes. The creation of the global city world has come at tremendous cost. Now we have to learn how to see the changing natural world (Chapter 6). Or more exactly, we have to become aware of how humans have turned the planet into one enormous human artifact, the largest work of art ever made or ever possible.
At the same time, the global city has also become rebellious, the site of permanent unrest (Chapter 7). Here the youthful majority in cities use their connections to claim new ways to represent themselves on social media that are transforming what politics means, from the city revolts in the developing world, such as those in Cairo, Kiev, and Hong Kong, to the separatist movements in the developed world, from Scotland to Catalonia. Do we live in cities? Or regions? Or nations? Or power blocs like the European Union? How do we see the place where we live in the world?
THE TIME OF CHANGE
Though the transformations of the present may appear unprecedented, there have been many similar periods of dramatic change in the visible world before. The nineteenth century was famously described by the film theorist Jean-Louis Comolli as a “frenzy of the visible” because of the invention of photography, film, the X-ray, and many other now-forgotten visual technologies in the period (1980). The development of maps, microscopes, telescopes, and other devices made the seventeenth century another era of visual discovery in Europe. And so we could continue back to the first cosmographic representation of the world on a clay tablet from 2500 B.C.E. But the transformation of the visual image since the rise of personal computing and the Internet is different in terms of sheer quantity, geographic extent, and its convergence on the digital.
If we look in a longer historical perspective, we can perceive the extraordinary pace of change. The Lumière brothers first projected moving images to a paying public in France in 1895. A little more than a century later, the moving image has become astonishingly widespread and easily available. The first video cameras available for personal use appeared only in 1985. They were heavy, shoulder-borne devices, not well suited to casual use. It was not until the invention of digital videotape in 1995 that home video became a practical possibility. Editing remained an expensive and difficult proposition until the introduction of programs like Apple’s iMovie in 2000. And now you can shoot and edit HD video on your phone and post it to the Internet. Above and beyond personal possession, far more people can see and share all this material via the Internet, the first truly global medium. More people still have access to television, but hardly anyone has influence over what is shown on television, and fewer still can place their own work on TV. By the end of the decade, the Internet will change how we look at everything, including how we see the world.
To understand the difference, we can compare the distribution and circulation of printed matter. In 2011, according to UNESCO, over 2.2 million books were published. The last European thought to have read every printed book then available was the sixteenth-century humanist Erasmus (1466–1536). Over the long lifetime of print, many other means of getting published have emerged, from the letter to the editor to self-produced pamphlets and photocopied documents. The book has still remained the format most likely to convince and impress. However, book publishing is open only to authors who can convince editors to produce their work. Now the Internet allows everyone with a connection to disseminate their writing in ways that are not visibly different from those used by formal book publishers. The global success of E. L. James’s self-published novel Fifty Shades of Grey, which has sold over 125 million copies in its Random House reprint, would not have been imaginable even a decade ago. The transformation of visual images, especially moving images, has been still faster and more extensive.
The change at hand is not simply one of quantity but of kind. All the “images,” whether moving or still, that appear in the new archives are variants of digital information. Technically, they are not images at all, but rendered results of computation. As digital scholar Wendy Hui Kyong Chun puts it, “when the computer does let us ‘see’ what we cannot normally see, or even when it acts like a transparent medium through video chat, it does not simply relay what is on the other side: it computes” (2011). When an ultrasound scanner measures the inside of a person’s body using sound waves, the machine computes the result in digital format and renders it as what we take to be an image. But it is only a computation. A modern camera still makes a shutter sound when you press the button, but the mirror that used to move, making that noise, is no longer there. The digital camera references the analog film camera without being the same. In many cases, what we can “see” in the image, we could never see with our own eyes. What we see in a photograph like the 2012 Blue Marble is a computation, itself created by “tiling” different images that were further processed to generate color and contrast. It is a way to see the world enabled by machines.
Analog photographs were certainly also manipulated, whether by editing or darkroom-derived techniques. Nonetheless, there was some form of light source, impacting a light-sensitive surface that we can work out from the resulting photograph. A digital image is a computed rendition of digital input, derived from the camera’s sensor. So, it is much easier and faster to alter the result, especially now that programs such as Instagram will create effects at a single click. Some of these effects imitate specific formats, like black-and-white film or Polaroid. Others mimic skilled techniques that would have been used in the darkroom when developing film.
Early in the digital era, some were concerned that we would not be able to tell whether digital images had been manipulated or not. It turns out that at both amateur and professional levels, it is often not that hard to detect. For example, most magazine readers now assume that all photographs of models and celebrities have been adjusted. Readers operate a flexible zone of viewing, in which it is accepted that a photograph can be altered but not changed so much that it’s absurd. Some advertising campaigns now even celebrate their use of “real” models, knowing that we understand ordinary advertising photographs are manipulated. At the technical level, a skilled user can tell not only if an image has been manipulated, but how and with what program. In early 2013, a star college American football player named Manti Te’o was found to have circulated a story about the death of a girlfriend who turned out not to exist. Once web users were alerted to this possibility, it took less than twenty-four hours for them to reverse-search the photograph he had circulated and discover it was not the woman he claimed. There are websites devoted to reverse image searches now. Previously, it would have taken a detective days or weeks to do what can now be done in seconds with a few clicks.
At the time of the Apollo 17 mission in 1972, the British art historian John Berger made a brilliant television series and an accompanying book for the BBC called Ways of Seeing. The immense success of both projects put the concept of the image into popular circulation. Berger defined the image as “a sight which has been recreated or reproduced” (1973). He flattened the hierarchy of the arts by making a painting or sculpture equivalent in this sense to a photograph or an advertisement. Berger’s insight was central to the formation of the concept of visual culture. An influential definition of visual culture in the 1990s was simply “a history of images” (Bryson, Holly, and Moxey 1994). Berger had himself been taking a cue from the German critic Walter Benjamin, whose famous 1936 essay “The Work of Art in the Age of Mechanical Reproduction” had just been translated into English (1968). Benjamin argued that photography destroyed the idea of the unique image because—at least in theory—infinite and identical copies of any photograph could now be made and distributed. By 1936, this was already old news, because photography was almost a century old. However, new techniques for the mass reproduction of high-quality photographs in magazines and books, as well as the rise of the talkies, or films with sound, convinced Benjamin that a new era was at hand.
With the astonishing rise of digital images and imaging, it surely seems that we are experiencing another such moment. The “image” is now created, or more precisely, computed, independently of any sight that might precede it. We continue to call what we see pictures or images, but they are qualitatively different from their predecessors. An analog photograph is a print created from a negative, every molecule of which has reacted to light. Even the highest-resolution digital photograph is a sampling of what hits the sensor rendered into computer language and computed into something we can see.
Furthermore, the Internet is the first truly collective medium, a media commons, if you like. It makes no sense to think of the web as a purely individual resource. You might paint and not show anyone the results. If you put something online, you want people to engage with it. Digital commentator Clay Shirky has borrowed a phrase from the novelist James Joyce to capture the result: “Here comes everybody” (Shirky 2008). The point here is not simply the scale of the digital commons, impressive though that is. It is certainly not always the quality of the results, which are highly variable. It is the open nature of the experiment.
And that is why, despite the endless junk, the Internet matters. There is a new “us” on the Internet, and using the Internet, that is different from any “us” that print culture or media culture has seen before. The political scientist Benedict Anderson described the “imagined communities” created by print culture, whereby the readers of a specific newspaper would come to feel they had something in common (1991). Above all, Anderson stressed how nations came into being as these imagined communities, with powerful and important results. Trying to understand the imaged and imagined communities created by global forms of experience is similarly central to visual culture. The new communities that are emerging on- and offline are not always nations, although they are often nationalist. From the new feminisms to the idea of the 99%, people are reimagining how they belong and what that looks like.
What all moments of visual culture have in common is that the “image” gives a visible form to time and thereby to change. In the eighteenth century, natural historians investigating fossils and sedimentary rocks made the startling discovery that the Earth was far older than the six thousand years of the biblical account (Rudwick 2005). Naturalists began to calculate how many thousands and millions of years were involved. Geologists now refer to this as deep time, a time whose scale is vast by comparison with a brief human life span but is not infinite. From this perspective, it makes sense that one of the first photographs ever taken by Louis Daguerre in 1839 depicted fossils.
Figure 4. Daguerre, Untitled (Shells and Fossils)
Of course, the fossils remained conveniently still for the camera. More important, they were crucial to nineteenth-century debates in natural history, following the French scientist Georges Cuvier’s insight that fossils revealed past extinctions (1808). Fossils became central to the long drama concerning the age of the Earth that culminated in Charles Darwin’s Origin of Species (1859). Was the planet, as certain Christian authorities insisted, only six thousand years old? Or did fossils show that it was many millions of years old? A photograph is defined by the length of time the light-sensitive medium, whether film or a digital sensor, is exposed to light. As soon as the shutter closes, that instant is past time. The brief exposure of Daguerre’s shutter contrasted dramatically with the millennia of geological time and revealed the new human power to save specific instants of time.
Soon, the demands of the new industrial economy forced a second change to time. Time had usually been set locally in relation to the sun, meaning that cities or towns a few miles apart kept slightly different times. The difference did not matter until it became necessary to calculate how trains would cover long distances to a timetable. The “absolute” time that we still use, designated in highly specific time zones, was created to make such calibrations of time and space possible.
In 1840, the Great Western Railway in England was the first to apply this standardized time. A few years later, the painter J. M. W. Turner gave dramatic visual form to the changes in his stunning 1844 canvas Rain, Steam and Speed: The Great Western Railway. The train rushes toward us, although our viewpoint seems to be suspended in midair. The new train, crossing a modern bridge, had changed the experience of time and speed for the first time since the domestication of the horse. It seems to emerge from the swirling rain as if from primeval creation, an earlier subject in Turner’s work. A frightened hare running across the tracks (hard to see in reproduction) symbolizes the overtaken forms of natural speed. Overtaken, too, was painting as the most advanced form of modern visual representation. For all Turner’s brilliance, his painting took weeks to make. A photograph can change the world in seconds.
Figure 5. Turner, Rain, Steam and Speed
Figure 6. Kilburn, Chartists at Kennington Common
Just a few years later, in 1848, a remarkable daguerreotype of the Chartist meeting on Kennington Common, London, was taken by William Kilburn. The Chartists demanded a new form of political representation, in which every man (not yet woman) over twenty-one could vote and anyone could be a member of Parliament, regardless of personal wealth. They wanted annual Parliaments to reduce the possibilities of corruption. The rally was called to mark their delivery of a petition to Parliament with what they claimed were 5 million signatures endorsing these goals. Less than a decade after Daguerre’s fossils, the industrial world had transformed the organization and representation of time and space by means of the new time zones and photography. These changes created a desire for a different system of political representation, a subject perfectly suited to the new visual medium.
We are in another such moment of transformation. Events can be seen as they happen via the Internet, from a dense variety of amateur and professional perspectives, in blogs, magazines, newspapers, and social media, using still and moving images of all kinds. The gain in information is offset by the digitally enabled 24/7 work environment for professionals worldwide, while the Chinese workers who produce the digital equipment that makes the new work regimen possible are themselves expected to work eleven-hour days, plus overtime if required, with an average of one day off a month. The long struggle to limit the working day has been soundly defeated. Time-based media are newly ascendant, creating millions upon millions of slices of time, which we call photographs or videos, in what seem to be ever-shrinking formats like the six-second-long Vine. The obsession with time-based media, from photography in the nineteenth century to today’s ubiquitous still- and moving-image cameras, is an attempt to capture change itself.
In 2010, the artist Christian Marclay made an extraordinary installation called The Clock. It was a twenty-four-hour montage of film clips, each telling or showing the time, so that The Clock was itself a chronometer. The very fact that it was possible to make such an immense montage of clips about time indicated how far modern visual media are time-based. We date a painting to the specific year it was finished, but it is impossible to tell how long it took to paint. A photograph was always of one instant, which may or may not be known precisely. Today, digital media are always time-stamped as part of their metadata, even if that time is not visibly recorded in the image. At least for now, in the ever-changing present that is the hallmark of urban global spaces, it seems that we use time-based media as a way of both recording and relieving our anxiety over time itself.
In all this speeding up, from the introduction of the railways to the Internet, we have burned in a matter of two centuries, and especially in the past thirty-five years, the remnants of millions of years of organic matter that had become fossil fuel. This vaporizing of millennia has now caused the undoing of the hitherto infinitely slow rhythm of deep time itself. What once took centuries, even millennia, happens in a single human lifetime. As the ice caps melt, gases that were frozen hundreds of thousands of years ago are released into the atmosphere. You could say that time travel is as simple as breathing these days, at least at the molecular level. The entire planetary system, from the rocks to the highest atmosphere, is out of joint and will remain so for longer than the whole of human history to date, even if we stop all emissions tomorrow.
Where does all this lead? It is too early to tell. When the printing press was invented, it was not possible to imagine from the first publications how mass literacy would change the world. In the past two centuries, the elite military skill of visualizing, which imagined how battlefields that were too large to see with the naked eye “looked,” has been transformed into the visual culture of hundreds of millions. It is confusing, anarchic, liberating, and worrying all at once. In the chapters that follow, How to See the World will suggest how we can organize and make sense of these changes to our visual world. We will see what is on the rise, what is falling back, and what is being strongly contested. Unlike the Apollo astronauts, we will have our feet firmly on the ground. But there is more to see than they could have imagined.
CHAPTER 1
HOW TO SEE YOURSELF
In 2013, the Oxford English Dictionary announced that its word of the year was selfie, which it defined as “a photograph that one has taken of oneself, typically one taken with a smartphone or webcam and uploaded to a social media website.” Apparently, the word was used 17,000 percent more often between October 2012 and October 2013 than during the previous year, due in part to the popularity of the mobile photo-sharing site Instagram. In 2013, 184 million pictures were tagged as selfies on Instagram alone. The selfie is a striking example of how once elite pursuits have become a global visual culture. At one time, self-portraits were the preserve of a highly skilled few. Now anyone with a camera phone can make one.
The selfie resonates not because it is new, but because it expresses, develops, expands, and intensifies the long history of the self-portrait. The self-portrait showed others the status of the person depicted. In this sense, what we have come to call our own “image,” the interface between the way we think we look and the way others see us, is the first and fundamental object of global visual culture. The selfie depicts the drama of our own daily performance of ourselves in tension with inner emotions that may or may not be expressed as we wish. At each stage of the self-portrait’s expansion, more and more people have been able to depict themselves. Today’s young, urban, networked majority has reworked the history of the self-portrait to make the selfie into the first visual signature of the new era.
For most of the modern era, the possibility of seeing an image of oneself was limited to the wealthy and the powerful. The invention of photography in 1839 soon led to the development of cheap photographic formats that placed the portrait and the self-portrait within the reach of most working people in industrialized nations. In 2013, these two histories converged. At the memorial service for Nelson Mandela on December 10 that year, the Danish prime minister Helle Thorning-Schmidt took a selfie that included US president Barack Obama and UK prime minister David Cameron.
Figure 7. Thorning-Schmidt, Obama, and Cameron taking a selfie
While some commentators questioned the propriety of the moment, it marked a departure from the lifeless posed official photograph and a new investment in a popular format. The photograph of the selfie being taken was reprinted worldwide, although the selfie itself was not released to the media. Only a few weeks later, the world’s best-known actors converged around Ellen DeGeneres at the 2014 Academy Awards for a selfie, taken by Bradley Cooper, that became the most popular tweet to date. The selfie is a fusion of the self-image, the self-portrait of the artist as a hero, and the machine image of modern art, functioning as a digital performance. It has created a new way to think of the history of visual culture as that of the self-portrait.
THE IMPERIAL SELF
These intersections—of the self-portrait, the machine image, and the digital—have their sources in the history of art, which we can follow. The Spanish painter Diego Velázquez’s masterpiece Las Meninas (1656) linked the aura of majesty to that of the self-portrait. The painting is a set of visual puns, plays, and performances that revolve around the self-portrait of the artist.
As we look at the painting, Velázquez stands to our left-hand side, holding his brushes. The canvas he is working on blocks our view. In the foreground we see the Maids of the title, the curtseying women, who are the attendants of the little girl in white. She is a princess, known as the Infanta, daughter of Philip IV of Spain. At once we notice that almost everyone in the painting is looking at someone or something, which appears to be located at the viewer’s vantage point. As we look back into the painting, we see two figures in a frame on the wall behind the main group. The frame is much brighter than the other gloomy paintings hanging on the wall, and we conclude that it must be a mirror. In fact, is it not reflecting the people everyone is looking at? And these are no ordinary people. They are the king and queen, which is why everyone seems frozen to the spot.
Figure 8. Velázquez, Las Meninas
In a famous analysis of the painting in his book The Order of Things, the French philosopher Michel Foucault described it as depicting not just what could be seen within it but the very means of ordering and representing a society ([1966] 1970). The subject of the portrait is the way it is possible to depict living things in a hierarchy depending on the presence of the king, ranging from the dog at the front to the dwarf court jester, the ladies-in-waiting and other nobility, the painter, and the royal presence itself. Foucault’s approach in turn helped inspire what was called the “new art history,” and later, the concept of visual culture. Foucault showed that the place everyone is looking at is the center because the king is there, noting:
the triple function it fulfils in relation to the picture. For in it there occurs an exact superimposition of the model’s gaze as it is being painted, of the spectator’s as he contemplates the painting, and of the painter’s as he is composing his picture. 1
The mirror reflects the models that the painted painter is working on. It also makes visible by implication the place from which the real Velázquez worked. And it is the same place where we now stand to look at the finished painting. Foucault observes:
That space where the king and his wife hold sway belongs equally well to the artist and to the spectator: in the depths of the mirror there could also appear—there ought to appear—the anonymous face of the passerby and that of Velázquez. 2
So, the “mirror” does not obey the laws of optics so much as it does the laws of majesty, like the painting itself. The seventeenth century was a period in which monarchs around Europe claimed the power of absolutism. That is to say, they were more than just people. Kings were God’s representatives on Earth, symbolized by their being anointed like a priest during the coronation ceremony. Combining secular and spiritual power, the absolutist monarchs claimed overwhelming power that was centered in their very person.
How, then, should the king be shown to convey a sense of this power? Not every individual who happened to become a king or queen was impressive. Even the most powerful have their moments of weakness, illness, and decline. Against the fallible individual person of the king, European royalty devised the concept known as the body of the king, which we can call majesty. Majesty does not sleep, get ill, or grow old. It is visualized, not seen. Any action that diminished majesty was a crime called lèse-majesté, the violation of majesty, which could be severely punished. It even became a criminal offence to crumple up a piece of paper bearing the king’s name. Physical attacks on the monarch were met with truly spectacular punishments because each was a double attack: on the person of the king or queen and on the institution of majesty.
Las Meninas is invested throughout with this power, making the image of the king at least the equal, and in some ways the superior, of the king himself. It also makes a set of claims for the power of the artist by association. As we have seen, the “mirror” is not optically accurate. Art historians like Joel Snyder have shown that the arrangement of perspective in the painting does not in fact converge on the mirror, but on the arm of the man standing in the open doorway to its right as we look at it (1985). Although the scene appears to show a mirror reflecting the king, it actually shows the mirror reflecting Velázquez’s painting of King Philip. It is possible that Velázquez’s perspective was not so precise, or that he wanted to create a visual trap for his audience. Whatever you believe, the “mirror” shows something that the spectator would not usually be able to see: either the painting the artist is working on, as Snyder argues, or the king and queen standing in front of it, as Foucault believed.
So, the mirror misrepresents, but it also shows a world of possibility. Las Meninas makes a tremendous claim for the power of the artist, both literally and metaphorically. The remarkable skill of the piece makes it clear that the painter is capable of accomplishments others are not. Only twenty years earlier, Velázquez had to pay the same kind of tax on his art that shoemakers did on their shoes. Here Velázquez claims the power of majesty for art by association and by depiction. He also put a red cross on his costume, indicating his claim to the status of nobility, before he could actually claim to be a noble in real life. Today, when it is common to see paintings sell for millions, even hundreds of millions, the elite status of the artist is taken for granted. It is, in fact, a relatively new and unusual idea that arose first in the imperial nations of the modern world.
Las Meninas plays with what we can see and what we cannot. It keeps out of sight the source of the Spanish monarchy’s power and authority, namely its empire in the Americas. Louis XIV (1638–1715), the absolutist king of France who married the older half-sister of the Infanta seen in Las Meninas, had an obsidian mirror in his Cabinet of Wonders, said to have been plundered from Moctezuma II himself, the Aztec emperor (ruled 1502–20). Obsidian is a material formed from cooled lava that is both black and reflective. The Mexican artist Pedro Lasch, who has worked with the black mirror, emphasizes that “in pre-Columbian America, as in many other cultures, black mirrors were commonly used for divination. . . . The Aztecs directly associated obsidian with Tezcatlipoca, the deadly god of war, sorcery, and sexual transgression.” 3 If the European mirror image was a place of power, its American equivalent added violence, sexual ambivalence, and storytelling to the imperial mix.
Figure 9. Lasch, Liquid Abstraction
In both the pre-encounter Americas and in medieval Europe, the mirror was a place of divination, where fortunes were told and where contact could be made with the dead and other spirits. In short, the mirror is a visual bridge between past, present, and future.
The imperial portrait in the absolutist era (1600–1800) was, then, never just one image. The portrait of the individual who happened to be king also depicted the majesty of the king, or the power of representation itself. The self-portrait of the artist claimed that art was the work of nobility, not artisans. The mirror reflects either the real king and queen or the painted portrait of the king. Or, in some not quite mathematical but nonetheless perfectly intelligible sense, both. The black mirror and the optically incorrect painted mirror show us how things are now, but are also a place to access the past and the future. These reflections and images were a combination of theater, magic, self-fashioning, and propaganda that were key to sustaining royal power.
THE PORTRAIT AND THE HERO
When the old monarchies collapsed during what can be seen as the long age of revolution (1776–1917), a new “frenzy of the visible” accompanied and was part of the social transformation (Comolli 1980). Across this era, dramatic new media, such as lithography and especially the various processes we call photography, seemed to revolutionize the visible, above all in portraits and self-portraits. Visual media were democratized. Until this time, the ordinary person might have seen visual images in church, on coins, at parades, or in carnivals. By the mid-nineteenth century, there were new museums of art; illustrated newspapers and magazines were being published; and visiting-card photographs could be bought cheaply. New ways of being came to be imagined and visually represented, including the modern artistic “genius,” nearly always male, but also the woman artist. The heroic artist took some of the aura of the king (or queen) and transferred it to him- or herself. Brought down to Earth, the self-portrait became the picture of a hero.
In the last years of absolutism, the new order was already emerging. Royal artist Élisabeth Vigée-Lebrun painted portraits of the French queen Marie Antoinette. She also painted a number of self-portraits. To borrow a cue from John Berger, can you see which is which? (See figures 10–11.)
Both women look out at the viewer directly from the painting, against a scumbled background of loosely handled nonrepresentational paint. Both are dressed as fashionable, modern women in the loose style of the period, with their finely handled sashes showing the skill of the artist. Perhaps the informality of the pose with her child allows us to see Vigée-Lebrun in her self-portrait Madame Vigée-Lebrun and Her Daughter Julie (1789). The portrait of Marie Antoinette (1783) became the subject of a scandal precisely because of its informality. At the same time, by so blurring the difference between the queen and the artist, Vigée-Lebrun claimed a new level of equivalence between the two.
In their classic study Old Mistresses—the title is a pun on the phrase “Old Masters,” used to mean distinguished artists of the past with the implication that such artists would be men—Rozsika Parker and Griselda Pollock studied the history of women artists (1981). The self-portrait with her daughter raised particular issues because women were not even supposed to be artists, according to the received prejudice, so a painting by a woman showing a woman artist was doubly defiant. Parker and Pollock described how in Vigée-Lebrun’s Self-Portrait:
The novelty [of the painting] lies in the secular and familial emphasis, the Madonna and Child of traditional iconography replaced by mother and female child locked in an affectionate embrace. This portrait of the artist and her daughter elaborates that notion of woman, emphasizing that she is a mother. 4
Figure 10. Vigée-Lebrun, Marie Antoinette
Vigée-Lebrun had taken the Christian image of the Virgin Mary and Infant Jesus and given it a secular and contemporary spin. Notably, both the artist and her daughter look out at us confidently, unlike the traditional downcast glance of the Madonna in paintings by artists like Raphael. Still, as Parker and Pollock pointed out, there was a Catch-22 here. Vigée-Lebrun’s celebration of her role as a mother was unusual in a period when women often left their children with wet nurses, yet from our perspective the picture seems like a cliché. The restrictive doctrine of the woman as the domestic angel by the hearth, caring for children but not active professionally, was actually a creation of the nineteenth century. For modern feminists trying to escape what Betty Friedan famously called “the feminine mystique” (1963), Vigée-Lebrun at first looked like more of the same. It took Parker and Pollock’s close attention to context and detail to see her work differently.
Figure 11. Vigée-Lebrun, Madame Vigée-Lebrun and Her Daughter Julie
If the nineteenth century visualized women as domestic helpmates, its counterpart was the idealized great man, or Hero, as imagined by the historian Thomas Carlyle. For Carlyle, “great men make history” (1840). Artists also conceived of themselves as heroes in different ways. What did the modern artist hero look like? In 1839, Louis Daguerre in France and William Henry Fox Talbot in Great Britain finally produced photographs that “fixed,” meaning that the light-sensitive surface stayed as a visible image rather than blacking out. Another French practitioner, Hippolyte Bayard, also invented a photographic process at this time. Doomed to the margins of photographic history because Daguerre won the credit for the invention, Bayard might nonetheless be said to have invented the selfie in his Self-Portrait as a Drowned Man (1839–40). He also invented the photographic fake, because he was not, of course, actually dead.
Like many a Romantic hero, and following the example of Werther, who kills himself in Goethe’s enormously successful 1774 novel The Sorrows of Young Werther, Bayard pretended to prefer death to dishonor. His photograph is what the writer Ariella Azoulay has called an “event” (2008). It presupposes that the community watching it can imagine the heroic narrative of the author’s suicide and understand his disappointment. Some people even thought that Bayard really was dead and discussed how the dark skin on his hands and face was the consequence of drowning rather than of exposure to the sun.
Figure 12. Bayard, Self-Portrait as a Drowned Man
The painter Gustave Courbet appropriated the idea of the artist’s suicide for his own self-portrait The Wounded Man (1845–54). As he, too, was living in Paris at the time, it is quite possible that he saw or heard about Bayard’s photograph.
Figure 13. Courbet, The Wounded Man
Here the artist has apparently stabbed himself but found time to put the sword back up against the tree behind him. Of course, we are no more meant to take the painting this literally than we are the photograph. Marshall McLuhan later suggested that new media take the content of old media, such as television adapting theatrical plays to create TV drama (1964). Here, though, the new medium seems to have influenced the old. Courbet had moved from rural France to Paris just in time for the multiple revolutions of 1848, which he supported. By 1855, the revolution had failed and suicide was perhaps the only option left to the true revolutionary. Courbet issued a manifesto at his one-man exhibition that year, declaring: “To know in order to be able to do, that was my idea.” In this view, painting, like photography, depicts knowledge and leads to action, or the event. For Bayard and Courbet alike, the artist was the hero, the person capable of creating an event, even at the (fictional) cost of their own life.
It’s a seductive idea. Writing in the aftermath of the revolutions of 1968, the art historian T. J. Clark used Courbet as his example of “a time when political art and popular art seemed feasible.” He stressed both Courbet’s involvement with politics and the influence of popular media on his painting, arguing that popular art shows “the essentials of a social situation.” As with Parker and Pollock’s work, his ideas are so accepted now that it is hard to appreciate how innovative the approach was in 1973, when his book The Image of the People was first published. Art historians began to look at popular prints, photographs, and other mass-produced visual material alongside painting and sculpture, an approach that became known as social art history. For two decades afterward, social art history and visual culture studies worked closely together, before visual culture became a separate area of study around 1990, largely due to the rise of digital media.
Another reason that division occurred was the increasing difficulty of deciding what was essential, to use Clark’s term, in a given moment. The transformation of the arts and humanities since 1968 has been the result of a succession of groups pointing out that they have been overlooked and that their interests need to be taken into account. People then look back into the historical record and discover that each group was there all along. One example is a self-portrait by the French Post-Impressionist Henri de Toulouse-Lautrec, usually remembered for his depictions of Paris nightlife. Another way to understand his work is through his experience as an artist with disabilities. His Self-Portrait Before a Mirror (1882) challenges the conventions of the genre and its interpretation.
Figure 14. Toulouse-Lautrec, Self-Portrait Before a Mirror
Toulouse-Lautrec deliberately painted his reflection in a mirror, rather than just using a mirror to make a self-portrait as was traditional. The reflected candlestick removes any doubt as to whether the frame indicates a mirror or a window, as in the Velázquez, for Toulouse-Lautrec clearly wanted us to recognize it as such.
At the same time, the painting both conceals and reveals the artist. By using a mirror standing on a mantelpiece, he shows us only his head and shoulders. He might have used this device to conceal his disability. For, either as a result of childhood accidents or a congenital condition, Toulouse-Lautrec had an adult upper body but the legs of a child. He depicted himself in the Self-Portrait as just protruding into the mirror, leaving the top half of the mirror empty, indicating to the observant viewer that he was very short. He might have chosen to adjust what he saw, so as to fill the “screen,” like a present-day actor or politician standing on a riser to seem taller. Whereas for dominant groups the mirror is often a site (and sight) of affirmation, for people who look or feel different, the mirror can be a site of trauma. Toulouse-Lautrec’s self-portrait confronts that sight without making himself the object of a freak show. I use the term deliberately because in the period people with disabilities were literally exhibited as “freaks” to paying audiences (Adams 2001). Toulouse-Lautrec refuses to cater to this voyeuristic desire to see, but does not distort the reality of his difference. It’s a different kind of heroism and one that is not immediately recognizable as such to others.
THE MANY SELVES OF POSTMODERNISM
In the late 1970s, a new idea began to circulate in European and North American intellectual and artistic circles. The modern period, defined by its heroic artists, radical political divides and the dramatic expansion of the industrial economy, seemed to be over. Beginning with thinkers like the French philosopher Jean-François Lyotard, artists and writers started to think about a “postmodern condition” (1979). At the time, there were two ways of understanding postmodernism. One view saw it as a break with the modern that could be given a specific date. Another more widespread view held that there had always been a postmodern side to modernism, which questioned its certainties. The prime example of a postmodern modern artist was Marcel Duchamp. He took manufactured objects like a bicycle wheel or a urinal, installed them in an art gallery or exhibition, and declared the result to be art. In other words, art was whatever someone who wanted to be an artist called art. Whether it was the product of individual skill or talent was beside the point. Duchamp called the results “readymades,” perhaps the best known of which is Fountain (1917). It was made from a urinal, stood on its end and signed “R. Mutt.” The artist was no longer a hero.
When Duchamp made Fountain, the First World War had devastated Europe, with millions dead. The Russian Empire had collapsed into the Revolution that would create the Soviet Union. No wonder Duchamp and other artists thought things had changed. The war had even created a new set of mental illnesses, for which the term shell shock was coined: sufferers seemed to experience a traumatic moment over and over again, or became blind despite there being no injury to their eyes. The “self” no longer seemed so secure. Perhaps there was more than one self in each person. In 1917, Duchamp developed this idea to create a new readymade self-portrait at a photography shop on Broadway in New York City. Using a hinged mirror, the shop produced a five-way portrait in three copies.
Figure 15. Duchamp, Self-Portrait in a Five-Way Mirror
It was a perfect outlet for him: visually amusing, but with a serious point, for Duchamp did not see himself as one self but as many. Whereas heroic modern artists simply depicted their own image, postmodern artists made the self their primary project. Nor was this a once-and-for-all remaking; it could be done over and over. It is not an event but a performance.
Duchamp continued to experiment with his self-image. He collaborated with his friend Man Ray to create a self-portrait as his alter ego Rrose Sélavy. To get the pun, read the name with a French accent, whereupon it becomes “Eros, c’est la vie,” meaning “Love is life.”
Figure 16. Man Ray, Marcel Duchamp as Rrose Sélavy
As if intent on making the point that Rrose Sélavy was not his “real” identity, Duchamp made several notably different versions of this self-portrait. The one shown here is perhaps the most feminized, in the style of a society portrait or fashion illustration. The implication of the portrait, like all drag, is that gender is a performance. Like seeing, it is something we do, rather than something that is inherent and unchangeable. Rrose Sélavy seems feminine because of her clothes, makeup, and jewelry, as well as the way she holds her body. In her classic study The Second Sex (1949), French writer Simone de Beauvoir put it pithily: “One is not born, but rather becomes, a woman.” 5
Such public claims to feminist and queer identities as something we do, and can therefore change, combined with the rise of mass personal image-making technologies and personal computing, were central to the creation of the field of visual culture. It is important to recognize how transformative these interfaces were in what has become known as the postmodern period (1977–2001). Far from being simply negative critiques, they inspired some remarkable creative accomplishments, such as the work of the New York artist Cindy Sherman, whose awareness of feminism, combined with her DIY photographic aesthetics, influenced a new generation of artists, writers, and academics. In the art world, she was known as part of the Pictures Generation. Her work was also key for the study of visual culture.
From her time as an art student in Buffalo, New York, Sherman has repeatedly photographed herself in an ever-changing variety of poses and attitudes to explore how we make ourselves and make our gender. In her classic early series called Untitled Film Stills (1977–80), now owned by the Museum of Modern Art, New York, Sherman set out to counter the construction of women as passive objects of male desire. In the heyday of Hollywood cinema, film stills were used as a form of publicity for new movies. They would be displayed outside cinemas or used in print, whether as advertising or to illustrate a review. Cinema fans used to collect these stills, in the way that baseball fans collected baseball cards. In 1977, the classic Hollywood studio film already felt dated, so Sherman’s work was really about how the then-present wanted to distinguish itself from the time in which women were only to be “looked at” (Berger 1973). She created a long series of black-and-white photographs of herself in different costumes, makeup, and situations, exploring the ways in which cinema looks at women.
Sherman began her project just two years after film critic Laura Mulvey had coined the expression “the male gaze” in a study of classic Hollywood cinema (1975). Mulvey saw that a gaze (that is, a dominant way of seeing) is built into cinema, which can be that of the actors but is also part of the medium itself. The man’s role in the film, Mulvey says, is “the active one of forwarding the story, making things happen.” She adds that the man in the story “controls the film fantasy and also emerges as the representative of power in a further sense: as the bearer of the look of the spectator.” 6 Men look at the action through the eyes of the male hero and women are obliged to do the same, a form of compulsory gender manipulation. The cinematic gaze also performs the action in which “I see myself seeing myself,” that sense that we sometimes have of being looked at, even if we can’t actually see the person doing the looking. For Mulvey and other feminists, women experience this condition all the time in relation to how they look and act. By freezing the film and making us think precisely about how and why we are looking or being looked at, Sherman made this performance visible.
Figure 17. Sherman, Untitled Film Still
If we look at this example from 1978, it is clear how skillfully Sherman has put the image together. The low camera angle makes it seem that we see the woman in the picture (who is always Cindy Sherman herself), but that she does not see us. She appears isolated and trapped in the cityscape that hems her in. Using sharp side-lighting and close focus, Sherman makes her body stand out from its surroundings. While this might have been a confident pose if she were looking directly at the camera, she is instead looking away to the side at something we cannot see, her lips slightly parted, creating a sense of threat and anxiety. In the classic Hollywood mise-en-scène (the creation of both the individual shot and the overall feel of the film), the victim is always isolated like this before being subjected to violence. At first we can’t help but feel anxious on the woman’s behalf. Then we realize that it is Sherman herself who has created the scene and that she is using it not to present herself as a victim but to make us aware of the ways in which cinema depicts women as objects to be played with. By manipulating back, Sherman and many other artists of her generation, such as Barbara Kruger and Sherrie Levine, claimed the right to be the selves they wanted to be. Her photographs re-perform the way women are represented to say something important about the actual experience of women in daily life.
Photographic self-portraits can also be a diary and a record of what has happened. In a counterexample to Sherman’s role-playing, the New York photographer Nan Goldin kept such a diary over many years. It recorded her radical and alternative counterculture circle in 1980s New York. Goldin would show her photographs in a darkened room as a slide show, using a carousel slide projector in those pre-PowerPoint days. Accompanied by a soundtrack of music from the Velvet Underground and other downtown classics, the performances would last for about an hour, immersing the audience in a visual narrative of Goldin’s life. Viewers would come to recognize her friends and her boyfriend. So, it was a visual shock in 1984 when she created a photograph of herself with visible bruising on her face. A second shock was found in its title: Nan One Month After Being Battered.
Figure 18. Goldin, Nan One Month After Being Battered
Damaged as her face appears in the photograph, we realize that she has had a month to recover, so the initial violence must have been dreadful. Goldin warns that being able to depict ourselves does not always mean we can protect ourselves.
While one strand of postmodern art and thought highlighted the illusions of modern consumer society, work like Goldin’s inspired a new generation of artists and writers to concentrate on how gender, race, and sexuality were experienced in everyday life. In a word: performance. Performance, in the classic definition of the scholar Richard Schechner, is “twice-behaved behavior.” 7 Schechner claimed that all forms of human activity are a performance, assembled from other actions we have taken in the past to create a new whole. A performance might be an artwork; it might be a chef cooking a dish or a barber cutting hair. Or it might be anyone at all giving a performance of their gender, race, and sexuality in everyday life.
PUTTING THE OTHER IN SHADE
It was in 1990 that this visual culture of performance became visible in the United States, extending from the avant-garde to academia and the mainstream. First, Jennie Livingston’s remarkable documentary Paris Is Burning (1990) made the queer voguing subculture of Harlem available to art cinema audiences. When the pop star Madonna adopted the style for her hit “Vogue” that same year, especially in its compelling video, the global media audience saw what it meant to “strike a pose.” In a related vein, the philosopher Judith Butler published her classic book Gender Trouble, which showed how drag reveals gender itself to be a performance (1990). And in both the United States and the United Kingdom, degrees in visual culture were offered for the first time, at the University of Rochester and Middlesex University.
Voguing was a dance form created at the balls organized by gay African American and Latino men in Harlem. These balls are a mixture of dance and performance in which participants “walk” in competition for prizes. According to the narrative offered in Paris Is Burning by the performer Dorian Corey, balls initially featured only drag queens and later expanded to include film and television stars and all manner of categories taken from “real life.” These last included military personnel, executives, and students. All these categories were felt to be ways of being or careers that were desirable but not open to gay men of color. (Queer had yet to become an affirmative term, but it was also in 1990 that the Queer Nation activist group was formed.) The goal was to present “realness,” meaning that if you were out in public away from the ball, you could pass as really being whatever category you were representing. The ball was a mirror to the real world in the sense that it was reversed: here gay African American and Latino men were in the ascendant, becoming legendary in their houses.
The 1980s saw the rise of voguing at the balls. Voguing is a competitive form of dance, using frozen and exaggerated positions in time to the house music of the period. In the earlier styles, walkers looked to find a “read” on their rivals, meaning a flaw in their costume or appearance. One read in the film involved a debate as to whether a walker in an upscale male category was using a woman’s coat and was therefore disqualified. In the ballroom, to be read was to fail: to be seen by the other as they wished to see you, rather than as you saw yourself. You wanted simply to be what you appeared to be; in short, for your performance to succeed so well that it became invisible as a performance. The read reveals otherwise.
By contrast, a vogue makes you see yourself differently. As voguer Willie Ninja demonstrates in the film, it might involve a mime in which the dancers check their appearance in a “mirror” and then hold that (nonexistent) mirror up to opponents to show them how deficient they are by comparison. Ninja’s vogue created what was called a shade, making you see yourself seeing yourself in the pretend mirror. It is therefore a more devastating move because you are convinced of its truth, whereas you could—and people can be seen in the film doing just this—deny the “read.” The read and the shade are operations of the gaze but with a difference—or queered, as we might say now. Unlike the male gaze, where maleness was assumed because of genital difference, the assumption of gender here is taken on voluntarily and then performed. Some of the men assume female roles, others male, some vogue. In Paris Is Burning, we see what happens when the gaze is both queered and used by people of color. Like her or not, Madonna’s hit song and video “Vogue” brought the ballroom subculture and the possibilities of performing yourself to global public attention.
In Gender Trouble, Butler held that such drag performances demonstrate that “gender is the cultural meaning that the sexed body assumes.” By this she meant that we cannot draw a direct equivalency between people’s gender and their body’s sexual organs. Further, she emphasized that bodies do not fall neatly into two sexes. Intersex people constitute some 1.7 percent of live births (in 2013, Germany legislated that people could be designated as “indeterminate gender” at birth). Butler’s point is that even if this is rare, it shows that there is no absolute equivalency between types of body and gender. Intersex people can make a choice or have one made for them. Drag performers and transgender people can do so in other ways. Judith Halberstam called one such option “female masculinity,” the way that some women deploy masculinity as the cultural meaning of their bodies (1998). If we decide a person’s gender by their hair, clothes, and style, it is a visual analysis rather than a scientific deduction. As Butler put it, the question becomes: “What are the categories through which one sees?” Even seeing a naked body might not be enough to decide “whether the body encountered is that of a man or a woman,” 8 and what those decisions mean. Although it is a difficult and serious book, Gender Trouble was a crossover hit, as likely to be read in nightclubs as in seminars. It has been part of a transformation of attitudes to gender and sexuality. In so doing, it also helped shape the study of what has come to be called visual culture.
Using the self as a performance that can be photographed has had a variety of dramatic effects. In 1977, in the Central African Republic, a young African photographer named Samuel Fosso was beginning to use leftover film in the photographic studio where he worked to make posed self-portraits. In the same way that Sherman and others have explored how gender is imposed on our bodies from outside, Fosso has visualized how his body is Africanized and racialized.
This process had been analyzed by Frantz Fanon, the Caribbean writer and activist. Sometime in the early 1950s, Fanon was traveling on a French train, headed for his psychiatric training. He described the experience in his book Black Skin, White Masks (1952). A child saw him and cried out: “Look, a Negro! Mama! Look, a Negro! I’m scared.” Fanon recalled how “the Other fixes you with his gaze, gestures and attitude.” This gaze is a kind of photograph, a colonizing power of looking, or, in the terms of the ballroom, a “read.” Fanon felt forced to “cast an objective gaze over myself, [I] discovered my blackness.” He found himself seeing himself as the white other saw him, a “shade.” He felt fixed, as if photographed by what he called “the white gaze.” 9 Under that gaze, he could not be seen for himself but only as a set of clichés and stereotypes.
Fosso set out to undo the white gaze by making fun of it in his self-portraits. He has described this particular self-portrait as follows:
I am an African chief, in a western chair with a leopard-skin cover, and a bouquet of sunflowers. I am all the African chiefs who have sold their continent to the white men. I am saying: we had our own systems, our own rulers, before you came. It’s about the history of the white man and the black man in Africa. 10
Figure 19. Fosso, The Chief (the one who sold Africa to the colonists)
Fosso alluded to the fact that the tribal system, dominated by chiefs, was a creation of colonial powers, rather than being “traditionally” African. As we can still see today, colonial powers have preferred to rule through intermediaries. Often, these people had no legitimacy of their own, and so they relied on the colonizer’s authority and armies. Mobutu Sese Seko, dictator of Zaire from 1965 to 1997, made the leopard skin that we see in Fosso’s photograph into a visual cliché of this kind. “Zaire” was the name Mobutu gave to the former Belgian Congo, but his supposedly authentic leopard-skin caps were actually made in France. For all his rhetoric of authenticity, Mobutu’s regime was really enabled by the Cold War. The United States tolerated his abuses because he was anticommunist at a time when Africa was far more sympathetic to socialism than capitalism. Fosso wants us to see that even after the end of the formal colonial regimes, Africa is still shaped by them, even as he ridicules such puppet leaders.
SELFIES AND THE PLANETARY MAJORITY
In the present moment of transformation, these categories of identity are being remade and reshaped. Today, claims queer theorist Jack Halberstam, “the building blocks of human identity imagined and cemented in the last century—what we call gender, sex, race and class—have changed so radically that new life can be glimpsed ahead.” 11 One place where we can catch these glimpses is the selfie. When ordinary people pose themselves in the most flattering way they can, they take over the role of artist-as-hero. Each selfie is a performance of a person as they hope to be seen by others. The selfie adopted the machine-made aesthetic of postmodernism and then adapted it for the worldwide Internet audience. It is both online and in our real-world interactions with technology that we experience today’s new visual culture. Our bodies are now in the network and in the world at the same time.
Some see the new digital performance culture as self-obsessed and tacky. It is more important to recognize that it is new. The only thing that we know for sure about the young urban global network is that it will change frequently and unpredictably, using formats that may make no sense to older generations. At one level, the selfie is a new form of predominantly visual digital conversation. At another, it is the first format of the new global majority, and that is its true importance.
The selfie took off following the placement of a good-quality front camera on the iPhone 4 in 2010, with other phones rapidly following suit. Selfies could now be taken outside or using a flash without the resulting burst of light dominating the picture, as it did in pictures taken in a mirror, which were a staple on the social networking website MySpace in its heyday from 2003 to 2008. A selfie is now understood as a picture of yourself (or including yourself) that you take yourself by holding the camera at arm’s length. A set visual vocabulary for the standard selfie has emerged. A selfie looks better taken from above, with the subject looking up at the camera. The picture usually concentrates on the face, often with a slight pout of the lips. Overdo the pout and suck in your cheeks too far and, voilà, the duck face. These poses are remaking the global self-portrait.
Despite the name, the selfie is really about social groups and communications within those groups. The majority of these pictures are taken by young women, mostly teenagers, and are largely intended to be seen by their friends. In an analysis for the website SelfieCity, media scholar Lev Manovich has shown that—worldwide—women take the majority of selfies, sometimes by overwhelming margins, as in Moscow, where women take 82 percent of all selfies (SelfieCity). They are then shared in social circles that are likely to be mostly women, regardless of sexual orientation. As fashion critics have long asserted, (straight) women dress as much for each other as for men, and the same can be said of the selfie. Some have suggested that the premium on attractiveness indicates that the selfie is still subject to the male gaze. Sociology professor Ben Agger has claimed in media interviews that the selfie is the male gaze gone viral, part of what he calls “the dating and mating game.” But trends for #uglyselfies and for showing nonconventional selfies are equally apparent. By the nature of the medium, any one person can see only a tiny fraction of total selfie production, and even then needs a good deal of extra information to be confident about what is being seen.
As the format rose to prominence, there was certainly a moral panic in the media about selfies (Agger 2012). A typical comment by CNN commentator Roy Peter Clark declared: “Maybe the connotation of selfie should be selfish: self-absorbed, narcissistic, the center of our own universe, a hall of mirrors in which each reflection is our own.” 12 In Esquire, novelist Stephen Marche went a step further, claiming: “The selfie is the masturbation of self-image, and I mean that entirely as a compliment. It gives control. It gives release.” 13 These metaphors are slightly convoluted. Narcissus spent his life looking at himself, but he did not release a copy of his image for others to look at. Selfies are, like them or not, all about sharing. Many celebrity selfies, like the naked photograph sent out by journalist Geraldo Rivera, have been greeted with scorn. At a private level, a selfie might be liked by some friends but disliked or even satirized by others. This is not masturbation. It’s an invitation to others to like or dislike what you have made and to participate in a visual conversation.
Something is happening here, as the numbers suggest. In Great Britain alone, 35 million selfies were being posted to the Internet each month by 2013. By mid-2014, Google claimed that 93 million selfies were being posted worldwide every day, over 30 billion a year. In her analysis of the photographs on SelfieCity, media scholar Elizabeth Losh found four technical commonalities. First, these pictures are all taken from close distance. You could use a remote but people choose not to do so: the close-up is part of what makes a selfie. The selfie shows that our bodies have become incorporated into the digital network and are interacting with it. To use a remote or timer would be to introduce a distance between the body and the network. As a result, the device that takes the selfie is often visible in the picture. Such mirroring is relatively rare in painting and traditional photography but is not felt to be intrusive in a selfie. By the same token, selfies often use filters like those provided by Instagram that are not designed by the photographer.
Losh sees this “authoring” by preformed tools as taking over from traditional authorship, in which taking decisions as to how an image would be rendered was central. This leads her to conclude that machines are starting to do our seeing for us, using their defaults that we may not understand to shape our perception. 14 It’s not altogether new, as we have seen with the Duchamp example. Even in professional contexts, the settings on the Leica camera determined the appearance of classic photojournalism, generating a sharp focus in the foreground and a blurry background. By the same token, the rich color and depth of field of the current Canon G series has set the visual terms for “prosumer” photography. The selfie is different by virtue of scale. When Duchamp played with machine vision, it was known to a tiny circle of his associates. The machine vision of the iPhone was used by 500 million people as of March 2014, according to Apple, with a million new phones being sold every three days.
There are really two kinds of selfie, in terms of content. One is a performance for the widest possible audience. A celebrity selfie, like those of Kim Kardashian, is intended to maintain and extend the celebrity of its subject. The celebrity selfie is a continuation of the film still and the advertising shot that pretends to be the work of its star. Just as no one who receives a mass e-mail from “Barack Obama” assumes that the president actually wrote it, the celebrity has not posed at random. Both undoubtedly have some oversight role in the product, but it is a controlled form of performance. Far more common, although invisible to those not directly involved, is the selfie as digital conversation within your own circle, shared via apps like Snapchat.
There are many warnings that the Internet archives material forever, so that a silly, stoned, or sexual photograph posted to Facebook could cost you a scholarship or a job. Although the few documented instances seem to show that people are mostly fired for writing disparaging things about their current job, one poll in 2013 reported that 10 percent of sixteen- to twenty-four-year-olds claimed to have lost a job because of things they posted online. As a result, many have shifted to using apps like Snapchat for photographs, so that Internet users cannot find them once they have been deleted. Once you open a snap, you have ten seconds to look at it before it automatically deletes itself. Snap use rose from 200 million a day in June 2013 to 700 million a day by May 2014. That’s over 250 billion snaps a year, seen only by the recipients. Users can send snaps to friends of their choice and, unlike e-mail or Facebook, Snapchat also tells you if your friends have looked at the snap or if they have taken a screenshot image. Snapchat’s self-image ad (Figure 20) reflects its target audience of young women (perhaps unsurprisingly, they are conventionally attractive, white, and blond, in this case). The snap they are making is of both of them, presumably intended for their friends.
Snapchats can also convey messages and share information, and they are designed to sustain conversations. For many young people, the snap has taken over from the Facebook status update, just as Facebook pushed out MySpace. The interest for us is not in the specific platform but in the development of a new visual conversation medium, usually relayed by phones that are used less and less for verbal exchange. The selfie and the snap are digital performances of learned visual vocabularies, with built-in possibilities for improvisation and for failure. Networked cultures are intensifying the visual component of communication and moving past speech.
Figure 20. Snapchat ad
Together with Snapchat has come the Vine, the six-second video message. Vine seems to be the logical outcome of people wanting to get straight to the “good part” on YouTube. In six seconds, there’s not much time to get bored, you would think. After a while, though, many Vines seem the same—sporting feats, pet and animal tricks, accidents that are supposed to be funny. There are also people using them very creatively as short movies and inevitably corporations have started making ads. Vine has been bought by Twitter, which makes sense. The 140-character message is now supplemented by the six-second video.
Now we see the digital performance of the self becoming a conversation. Visual images are dense with information, allowing successful performances to convey much more than a basic text message, whether in a single image or a short video. The selfie and related forms like the snap have given the first visual shape to the new global majority’s conversation with itself. This conversation is fast, intense, and visual. Because it draws on the long history of the self-portrait, it’s likely that the selfie in one form or another will continue to shape how we see people for a long time to come. The selfie shows how a global visual culture that takes the performance of our own “image” as its point of departure is now a standard part of everyday life for millions.
CHAPTER 2
HOW WE THINK ABOUT SEEING
Seeing is something we do, and we continually learn how to do it. It is now clear that modern visual technology is a part of that learning process. Seeing is changing. A widely cited 2006 study from the University of Rochester showed that playing video games improved both peripheral and central visual perception. In other words, playing visual games makes you see better. There are many such reports of improved hand-eye coordination. In 2010, another Rochester study showed that gamers make faster and more accurate decisions based on sensory perceptions. Lead author Daphne Bavelier (now at the University of Geneva) describes this as “probabilistic inference,” meaning the kinds of decisions we make based on incomplete information, such as choices made while driving (Bavelier Lab). The point here is that we do not actually “see” with our eyes but with our brain. And we have learned that in turn by becoming able to see how the brain operates. What we see with the eyes, it turns out, is less like a photograph than it is like a rapidly drawn sketch. Seeing the world is not about how we see but about what we make of what we see. We put together an understanding of the world that makes sense from what we already know or think we know.
It has long been realized that we do not see exactly what there is to be seen. The ancient Greek architects of the Parthenon in Athens designed the sides of their columns with a slight outward curve (entasis) as they rose, in order to convey the appearance of being perfectly straight. In the seventeenth century, Western science began to distinguish between biological sight, which sees what there is to see, and cultural judgment, which makes sense of it. The philosopher and natural scientist René Descartes pointed out that when we look at a work of art drawn in perspective, we perceive what is actually an oval as a circle. He interpreted this as evidence that judgment corrects the perception of sight. This understanding was the basis of modern observational science. With his famous aphorism “I think, therefore I am” (Descartes 1637), Descartes shifted the foundation of knowledge about the world from the classical thought of ancient Greece and Rome to what each person can observe and verify. Only the fact that we think indicates that we exist. Everything else must be doubted and tested.
Descartes used vision as his example. The ancient Greeks and Romans had two contradictory theories to explain vision. One said that the eye threw out rays to “touch” the things we see. The problem with this idea is that we can see very distant objects immediately: so, how does vision throw its rays so fast? Another theory said that objects emitted little copies of themselves that got smaller and smaller until they entered the eye. The problem here is that large objects can be seen close up and enormous objects like mountains can also be seen: how did the copies get small enough, quickly enough, to enter the eye? No one could solve these problems and they did not really try to do so because light was held to be divine and so not subject to human understanding.
Descartes believed that the existence of God was the only way to guarantee that our observations are not simply delusions or the ravings of the insane. So, he tested everything. In 1637, he produced a famous diagram showing how vision was mathematically possible; it is still shown in many art and visual culture classes today.
He showed light entering the eye as a set of geometric lines. He solved the question of how large objects can be seen by showing that the rays are refracted by the eye’s lens and converge on the retina at the back of the eye. However, this is not seeing. The image produced on the retina was interpreted by what Descartes called the sense of judgment. The drawing represents judgment as an elderly judge, assessing what there is to be seen and coming to a decision about it. Vision was understood as a courtroom, in which the eye presents evidence for the judge to decide. (There was no jury, as in the French courts of the time.) Descartes’s breakthrough not only helped us for the first time to understand how vision was possible. It also raised the importance of seeing to a new level as the key sense in modern science, which centers on the observed experiment.
Figure 21. Descartes, “Vision,” from La Dioptrique
In our own time, we are witnessing how neurology, a fast-developing part of biological science, sees the body and mind as integrated systems and people as communal, social beings connected by empathy. The metaphors here are not taken from the courtroom but from computer networks. It is a very different way to see ourselves and to think about seeing. According to this perspective, we learn how to become individuals as part of a wider community. This outcome is the intriguing result of the revolution in studying the brain, which many would consider to be the most individual organ of all, and in particular how humans and other primates see. My point is not that modern neuroscience is the final version of the “truth” and all other previous understandings have been shown to be wrong (although some neuroscience boosters do come close to saying this). Rather, as we shall see, neuroscience and its ways of visualizing the mind and human thought are becoming the vital visual metaphors of our time. It is our version of the truth, for better or worse.
VISUALIZING VISION
In the late 1990s, the psychologist Daniel Simons and his student Christopher Chabris devised what would become a famous experiment: a video test known as the “Invisible Gorilla” (Chabris and Simons 2010). Participants were asked to watch a video and count the number of times a team wearing white passes a basketball while playing against a team wearing black shirts. As this simple action unfolds, a person wearing a gorilla suit walks across the court.
Figure 22. Simons and Chabris, still from “Invisible Gorilla” video (1999)
Roughly half of the people watching did not even notice the gorilla. They were concentrating on counting. Simons attributes this to what he calls “inattentional blindness,” the inability to perceive outside information when concentrating on a task. Researchers had been aware of this phenomenon since the 1970s, as had magicians from time immemorial—“the sleight of the hand deceives the eye” because the magician distracts your attention. But it was the video that made the test so dramatic. You could test yourself and then watch the video again to see how obvious the gorilla then appears. Some people get very upset when they realize their failure.
This experiment built on the research of neuroscientists like Humberto Maturana from the 1970s. Maturana demonstrated that a frog, for example, sees very differently from the way we do. It perceives small, fast-moving objects, like the insects it eats, very clearly, while ignoring large, slow-moving things. Birds can perceive ultraviolet light invisible to humans, which allows them to see their own plumage differently than we do. However, even this seeing is not vision. Maturana stressed that living things change themselves because of their awareness of their interactions with the outside world, not just in the very long run described by evolution, but as a condition of day-to-day existence (Maturana 1980).
That is exactly what has happened in response to new media. When I show the “Invisible Gorilla” test video to students and others today, nearly everyone sees the gorilla. A population that has grown up with video games and touch screens sees things differently. Simons himself has found that when you show the video to experienced basketball players, the number seeing the gorilla jumps to about 70 percent. A more recent study by Simons did show that some people today still fail to see the gorilla, but it was based on a small sample of 64 people, of whom only 41 were previously unaware of his video. In this group, 18 did not see the gorilla, about 44 percent, under the roughly 50 percent of the original experiment. My own sample is larger and was compiled over several years, although not as a scientific study. Perhaps my countersample, drawn as it is from participants in visual culture classes, is just more visually aware.
The capacities of the human body obviously cannot have evolved in such a short space of time. Rather, the change comes in the way we make use of visual information. In the age of industrial work, concentrating on a specific activity and ignoring distractions was highly desirable. From academic research by a student in a library to the adjustment of a machine by a factory worker, attention needed to be focused. Today, we prioritize the ability to keep in touch with multiple channels of information—multitasking is the popular term. As I write this book, people are sending me e-mails and text messages to which they expect prompt replies, regardless of what I am doing. Formerly, we were trained to concentrate on one task, meaning we might not see the gorilla, and mostly, though not exclusively, we did not. Now we are trained to pay attention to distractions, and mostly, though not exclusively, we do. Neuroscience has changed the way that vision is understood. However, there is still considerable room for interpreting that change.
SEEING THE BRAIN
Let’s begin with how we can now “see” the brain in action. With the invention of new forms of medical imaging, especially magnetic resonance imaging (MRI) in 1977, it became possible to make “pictures” of the brain at work. Of course, no light is involved, and no drawing or other representational work is done. The magnetic field created by the machine excites hydrogen atoms in the brain (or whatever body part is being examined). As a result, they emit a radio frequency that is detected by the machine and converted into images. It is possible to imagine a species that could hear those frequencies and detect what is wrong (or not) with the person being scanned. Humans need to see something.