

Film Studies

Film and Culture Series

John Belton, General Editor


FILM STUDIES An Introduction

Ed Sikov

COLUMBIA UNIVERSITY PRESS NEW YORK


COLUMBIA UNIVERSITY PRESS

Publishers Since 1893

NEW YORK CHICHESTER, WEST SUSSEX

cup.columbia.edu

Copyright © 2010 Ed Sikov

ALL RIGHTS RESERVED

E-ISBN 978-0-231-51989-2

Library of Congress Cataloging-in-Publication Data

Sikov, Ed.

Film studies : an introduction / Ed Sikov. p. cm. — (Film and culture)

Includes index. ISBN 978-0-231-14292-2 (cloth : alk. paper) — ISBN 978-0-231-14293-9 (pbk. : alk. paper) — ISBN 978-0-231-51989-2 (ebook) 1. Motion pictures. I. Title. II. Series. PN1994.S535 2010 2009033082

A Columbia University Press E-book. CUP would be pleased to hear about your reading experience with this e-book at cup-ebook@columbia.edu.

References to Internet Web sites (URLs) were accurate at the time of writing. Neither the author nor Columbia University Press is responsible for URLs that may have expired or changed since the manuscript was prepared.

The author and Columbia University Press gratefully acknowledge permission to quote material from John Belton, American Cinema/American Culture, 3d ed. (New York: McGraw- Hill, 2008); copyright © 2008 The McGraw-Hill Companies, Inc.


for Adam Orman and the other great students in my life

for John Belton and the other great teachers in my life


CONTENTS

PREFACE: WHAT THIS BOOK IS—AND WHAT IT’S NOT

INTRODUCTION: REPRESENTATION AND REALITY

ONE MISE-EN-SCENE: WITHIN THE IMAGE

What Is Mise-en-Scene?

The Shot

Subject-Camera Distance—Why It Matters

Camera Angle

Space and Time on Film

Composition

STUDY GUIDE: ANALYZING THE SHOT

WRITING ABOUT THE IMAGE

TWO MISE-EN-SCENE: CAMERA MOVEMENT

Mobile Framing

Types of Camera Movement

Editing within the Shot

Space and Movement


STUDY GUIDE: ANALYZING CAMERA MOVEMENT

WRITING ABOUT CAMERA MOVEMENT

THREE MISE-EN-SCENE: CINEMATOGRAPHY

Motion Picture Photography

Aspect Ratio: From 1.33 to Widescreen

Aspect Ratio: Form and Meaning

Lighting

Three-Point Lighting

Film Stocks: Super 8 to 70mm to Video

Black, White, Gray, and Color

A Word or Two about Lenses

STUDY GUIDE: ANALYZING CINEMATOGRAPHY

WRITING ABOUT CINEMATOGRAPHY

FOUR EDITING: FROM SHOT TO SHOT

Transitions

Montage

The Kuleshov Experiment

Continuity Editing

The 180° System

Shot/Reverse-Shot Pattern

STUDY GUIDE: ANALYZING SHOT-TO-SHOT EDITING


WRITING ABOUT EDITING

FIVE SOUND

A Very Short History of Film Sound

Recording, Rerecording, Editing, and Mixing

Analytical Categories of Film Sound

Sound and Space

STUDY GUIDE: HEARING SOUND, ANALYZING SOUND

WRITING ABOUT SOUND AND SOUNDTRACKS

SIX NARRATIVE: FROM SCENE TO SCENE

Narrative Structure

Story and Plot

Scenes and Sequences

Transitions from Scene to Scene

Character, Desire, and Conflict

Analyzing Conflict

STUDY GUIDE: ANALYZING SCENE-TO-SCENE EDITING

WRITING ABOUT NARRATIVE STRUCTURE

SEVEN FROM SCREENPLAY TO FILM

Deeper into Narrative Structure

Screenwriting: The Three-Act Structure

Segmentation: Form


Segmentation: Meaning

A Segmentation of Inside Man

STUDY GUIDE: STORY ANALYSIS AND SEGMENTATION

WRITING ABOUT WRITING

EIGHT FILMMAKERS

Film—A Director’s Art?

Authorship

The Auteur Theory

The Producer’s Role

Teamwork

STUDY GUIDE: THE PROBLEM OF ATTRIBUTION

WRITING ABOUT DIRECTORS

NINE PERFORMANCE

Performance as an Element of Mise-en-Scene

Acting Styles

Stars and Character Actors

Type and Stereotype

Women as Types

Acting in—and on—Film

Publicity: Extra-Filmic Meaning

STUDY GUIDE: ANALYZING ACTING


WRITING ABOUT ACTING

TEN GENRE

What Is a Genre?

Conventions, Repetitions, and Variations

A Brief Taxonomy of Two Film Genres—the Western and the Horror Film

Genre: The Semantic/Syntactic Approach

Film Noir: A Case Study

Film Noir: A Brief History

Film Noir’s Conventions

STUDY GUIDE: GENRE ANALYSIS FOR THE INTRODUCTORY STUDENT

WRITING ABOUT GENRES

ELEVEN SPECIAL EFFECTS

Beyond the Ordinary

Optical and Mechanical Special Effects

Computer-Generated Imagery (CGI)

STUDY GUIDE: EFFECTS AND MEANING

WRITING ABOUT SPECIAL EFFECTS

TWELVE PUTTING IT TOGETHER: A MODEL 8- TO 10-PAGE PAPER

How This Chapter Works


“Introducing Tyler,” by Robert Paulson

GLOSSARY

ACKNOWLEDGMENTS

INDEX


PREFACE WHAT THIS BOOK IS—AND WHAT IT’S NOT

This book is designed to provide a basic introduction to the academic discipline known as film studies. It covers, in the first eleven chapters, the fundamental elements of formal film analysis, from the expressive content of individual images to the ways in which images link with one another; from the structures of narrative screenplays to the basics of cinematography, special effects, and sound. The book’s final chapter is a step-by-step guide to writing a final paper for the kind of course for which this textbook has been written.

Film Studies is a primer—a pared-down introduction to the field. It is aimed at beginners. It simplifies things, which is to say that the information it contains is straightforward and aimed at every student who is willing to learn it. It’s complicated material, but only to a point. The goal here is not to ask and answer every question, cover every issue and term, and point out the exceptions that accompany every rule. Instead, Film Studies tries to cover the subject of narrative cinema accurately but broadly, precisely but not comprehensively. It is a relatively short book, not a doorstop or makeshift dumbbell. It isn’t meant to cover anything more than the basic elements of formal film studies.

This book is about feature-length narrative cinema—movies that tell fictional stories that last from about ninety minutes to three or three and a half hours. It does not cover documentaries, which are about real people and events. It’s not that documentary filmmaking is not worth studying; on the contrary. It’s just that Film Studies is strictly an introduction to narrative cinema. Similarly, there is nothing in Film Studies about avant-garde films—those motion pictures that are radically experimental and noncommercial in nature. Film history is full of great avant-garde works, but that mode of filmmaking is not what this book is about.

People who study movies think about them in different and divergent ways. Scholars have explored sociological issues (race, ethnicity, religion, and class as depicted in films) and psychological issues (how movies express otherwise hidden ideas about gender and sexuality, for instance, or how audiences respond to comedies as opposed to horror films), to cite only a few of the various lenses through which we can view films. Researchers can devote themselves entirely to the study of film history—the nuts-and-bolts names, dates, and ideas of technological and aesthetic innovation that occurred on a global level. Similarly, the study of individual national cinemas has provided critical audiences with a broad range of cinematic styles to pursue, pinpoint, and enjoy.

Film Studies is not about any of these subjects. It is, to repeat, a primer, not an exhaustive examination of film interpretation, though the book has been expressly designed to accompany as wide a variety of film courses as possible.

This book centers on aspects of film form. You will learn the critical and technical language of the cinema and the ways in which formal devices work to create expressive meaning. Hopefully, if you go on to study film from a psychological or sociological perspective, or explore a particular national cinema, or take an upper-level film course of any kind, you will use the knowledge you gain here to go that much deeper into the films you see and study. This book serves as a first step. If this turns out to be your only exposure to film studies, you will still be able to bring to bear what you learn here to any film you ever see in the future.

Most film textbooks are awash in titles, names, and dates, and Film Studies is in certain ways no different—except in degree. In order to illustrate various points with examples, Film Studies does refer to a number of real movies that were made by important filmmakers at specific times in the course of film history. But in my experience, introductory students, when faced with the title and even the briefest description of a film they have never seen (and most likely will never see), tend to tune out. As a result, I draw a number of examples in Film Studies from hypothetical films; I will ask you to use your own imagination rather than draw impossibly on knowledge you don’t already have about films you haven’t seen. Moreover, each individual film class has its own screening list. Indeed, from a professor’s perspective, one of the great pleasures of teaching cinema studies classes lies in picking the films to show and discuss. Film Studies tries not to get in the way of individual professors’ tastes. In short, this book does not come with its own prearranged list of films you must see.

Some film studies textbooks contain hundreds of illustrations—film stills, drawings, graphs, and frame enlargements, many of which are in color. Film Studies does offer illustrations when necessary, but in order to keep the book affordable, they are not a prominent feature.

In fact, Film Studies tries to be as practical and useful as possible in many ways. It aims for the widest readership and is pitched accordingly. It draws most of its examples from American films because they are the films that most American students have seen in the past and are likely to see in the future. It is designed to accompany a wide spectrum of film courses but is focused most clearly on the type of mainstream “Introduction to Film” class that is taught in practically every college and university in the United States and Canada.

I hope it works for you.


INTRODUCTION REPRESENTATION AND REALITY

Consider the word REPRESENTATION (see glossary). What does it mean—and what technology does it take—to represent real people or physical objects on film? These are two of the basic questions in film studies. The dictionary defines the verb to represent as “to stand for; to symbolize; to indicate or communicate by signs or symbols.” That’s all well and good as far as it goes. But in the first one hundred years of motion pictures, the signs and symbols onscreen were almost always real before they ended up as signs and symbols on movie screens.

We take for granted certain things about painting and literature, chief among them that the objects and people depicted in paint or described in words do not necessarily have a physical reality. You can paint a picture of a woman without using a model or even without having a specific real woman in your mind. You can paint landscapes you’ve never actually seen, and in fact you don’t have to paint any real objects at all. Your painting can be entirely nonrepresentational—just splashes of color or streaks of black paint. And bear in mind that all works of art, in addition to being representations, are also real things themselves. The woman Leonardo da Vinci painted against a mysterious landscape may or may not have existed, but the painting commonly known as the Mona Lisa is certainly a real, material object.

In literature, too, writers describe cities that never existed and people who never lived. But on film—at least narrative films like the ones you’re going to learn about in this book—directors have to have something real to photograph. Now, with the increased use of digital and computer-generated imagery (CGI), of course, things are changing in that regard, but that’s a subject for a later chapter. For the time being, consider the fact that in classical world cinema, in all but a few very rare cases, directors had to have something real to photograph with a film camera. A filmmaker could conceivably take a strip of CELLULOID—the plastic material that film is made of—and draw on it or paint it or dig scratches into its surface; experimental filmmakers have been known to use celluloid as a kind of canvas for nonrepresentational art. But otherwise a filmmaker must photograph real people and things. They may be actors wearing makeup and costumes, but they’re still real human beings. These actors may be walking through constructed sets, but these sets have a physical reality; walls that look like stone may actually be made of painted wood, but they are still real, material walls.

Even animated films are photographed: artists paint a series of ANIMATION CELS, and then each cel is photographed. The physical reality of The Hunchback of Notre Dame—the Walt Disney movie, not the Victor Hugo novel—is not the character of Quasimodo, nor is it an actor playing Quasimodo, but rather the elaborate, colorful, stylized drawings that had to be photographed, processed, and run through a projector to make them move. Those drawings have a physical reality, and Disney animators are masters at making them seem doubly real through shading, layering, and other means of creating a sense of depth.

Let’s approach this issue another way. If Picasso, Warhol, and Rembrandt each painted a portrait of the same person, most educated people would immediately understand that the result would be three very different-looking paintings. We recognize that a painting’s meaning is at least partly a matter of its FORM—the shape and structure of the art work. Even if three painters from the same general culture in the same general period painted the same person—say, Rembrandt, Hals, and Vermeer—we would see three different views of that person—three very different paintings.

The same holds true in literature. If, say, Ernest Hemingway, James Joyce, and Chuck Palahniuk all described the same person, we would end up reading three diverse pieces of prose. They’d all be written in English, and they’d be describing the same individual, but they simply wouldn’t read the same. Some details may be similar, but each writer would describe those details differently using different words and sentence structures. And because the form would be different in each case, we would take away from the writing three different impressions—three ways of thinking and expressing and feeling.

But photography, particularly motion picture photography, appears on the surface to be of a different order. You take a real thing, and you photograph it. You take an event and you film it. And unless you monkey with the camera or the film processing and do all kinds of things to deliberately call attention to your presence as the filmmaker, if you and your friend and her friend and her friend’s friend all filmed the same event, you would all come up with similar-looking films—or so you might assume.

This book will show you how and why each of your films would be different and why those differences matter to the art form. You will learn to see the ways in which filmmakers express ideas and emotions with their cameras.

For example, let’s say that three aspiring directors—all from the University of Pittsburgh—decide to film what they each consider to be a characteristic scene at a major league baseball game. The three film students head over to PNC Park with nothing but small video cameras, and they don’t leave their seats in the right field grandstand. Ethan is planning to use his footage in the romantic comedy he’s making based on a guy he knows who is obsessed with one of the Pirates’ infielders. Ethan’s friend, Shin Lae, is making a drama about a little boy with autism who loves baseball. And Shin Lae’s friend, Sanjana, hasn’t yet figured out what her story will be, let alone how she will develop it, but she already knows that she wants to include random shots taken at a baseball game.

When the Pirates’ Luis Cruz hits a foul ball into the stands, the three filmmakers each have their cameras running. It’s the same moment in time, the same foul ball, but the three young directors see the event from three different perspectives. Ethan wants to show the whole action from beginning to end in a continuous shot, so just before the windup he frames the pitcher, batter, and catcher in the same image so he can show the ball move from mound to plate to air and, eventually, thanks to his ability to move the camera he’s holding, the ball finds its way into a fan’s bare fist. His built-in microphone picks up the sharp crack of the bat connecting with the ball, the crowd’s initial roar of expectation, the collective groan of disappointment when the ball crosses the foul line, and finally a quick burst of distant applause at the fan’s catch.

Shin Lae, on the other hand, cares more about the boisterous reaction of a group of Cub Scouts nearby than she does about the particular batter or the events of the game itself. When she notices that the pitcher is about to wind up, she points her camera not at either the pitcher or the batter but instead at the scouts, who are a few rows above her. She simply records the boys’ shouts and facial expressions, which range from eagerness to glee to disappointment and finally to envy as the fan successfully nabs the ball. Sanjana, meanwhile, has not been paying attention to the baseball game at all. (She hates sports and has only agreed to come along in order to film things.) While Ethan and Shin Lae get increasingly wrapped up in the game, Sanjana has become fascinated by a group of shirtless, heavily tattooed, and increasingly drunk bikers to her right. She has already filmed a security guard telling them to quiet down, but something about this group of urban outlaws appeals to her—particularly the fattest, hairiest one. She aims her camera at him and him alone and just films him sitting there, yelling, drinking, swearing, and—eventually—raising his hairy paw and nabbing Luis Cruz’s foul ball to the cheers of the crowd. She tilts her camera up slightly to keep his head from leaving the image, and tilts it back down when the crowd stops cheering and the biker takes his seat again.

Same game + same scene + same action = three different films. Why? Because each filmmaker has made a series of choices, and each of those choices has artistic, expressive consequences.

This is one of the key aesthetic issues of cinema studies—learning to see that an apparently unmediated event is in fact a mediated work of art. At first glance, we tend not to see the mediation involved in the cinema; we don’t see the art. All we see—at first—is a representation of the physical reality of what has been photographed. And in a strange paradox, classical American filmmaking is saddled with the notion that it’s purely artificial. The lighting tends to be idealized, the actors’ faces are idealized by makeup, the settings are sometimes idealized. . . . Just to describe something as “a Hollywood vision of life” is to say that it’s phony. The objects and people in Hollywood films are thus too real and too fake, all at the same time. How can we make sense of this?


CHAPTER 1 MISE-EN-SCENE: WITHIN THE IMAGE

WHAT IS MISE-EN-SCENE?

Film studies deals with the problems of reality and representation by making an initial assumption and proceeding logically from it. This assumption is that all representations have meaning. The term MISE-EN-SCENE (also mise-en-scène) describes the primary feature of cinematic representation. Mise-en-scene is the first step in understanding how films produce and reflect meaning. It’s a term taken from the French, and it means that which has been put into the scene or put onstage. Everything—literally everything—in the filmed image is described by the term mise-en-scene: it’s the expressive totality of what you see in a single film image. Mise-en-scene consists of all of the elements placed in front of the camera to be photographed: settings, props, lighting, costumes, makeup, and figure behavior (meaning actors, their gestures, and their facial expressions). In addition, mise-en-scene includes the camera’s actions and angles and the cinematography, which simply means photography for motion pictures. Since everything in the filmed image comes under the heading of mise-en-scene, the term’s definition is a mouthful, so a shorter definition is this: Mise-en-scene is the totality of expressive content within the image. Film studies assumes that everything within the image has expressive meanings. By analyzing mise-en-scene, we begin to see what those meanings might be.

The term mise-en-scene was first used in the theater to describe the staging of an action. A theater director takes a script, written and printed on the page, and makes each scene come alive on a stage with a particular set of actors, a unique set design, a certain style of lighting, and so on. The script says that a scene is set in, say, a suburban living room. Okay, you’re the director, and your task is to create a suburban living room scene on stage and make it work not as an interchangeable, indistinguishable suburban living room, but as the specific living room of the particular suburban characters the playwright has described on the page—characters you are trying to bring to life onstage. The same holds true in the cinema: the director starts from scratch and stages the scene for the camera, and every element of the resulting image has expressive meaning. Even when a film is shot on LOCATION—at a preexisting, real place—the director has chosen that location for its expressive value.

It’s important to note that mise-en-scene does not have anything to do with whether a given scene is “realistic” or not. As in the theater, film studies doesn’t judge mise-en-scene by how closely it mimics the world we live in. Just as a theater director might want to create a thoroughly warped suburban living room set with oversized furniture and distorted walls and bizarrely shaped doors in order to express her feeling that the characters who live in this house are crazy, so a film director creates mise-en-scene according to the impression he or she wishes to create. Sometimes mise-en-scene is relatively realistic looking, and sometimes it isn’t.

Here’s the first shot of a hypothetical film we’re making: we see a man standing up against a wall. The wall is made of . . . what? Wood? Concrete? Bricks? Let’s say bricks. Some of the bricks are chipped. The wall is . . . what color? White? No, let’s say it’s red. It’s a new wall. No, it’s an old wall, and some graffiti has been painted on it, but even the graffiti is old and faded. Is it indoors or outdoors? Day or night? We’ll go with outdoors in the afternoon. The man is . . . what? Short? No, he’s tall. And he’s wearing . . . what? A uniform—a blue uniform. With a badge.

Bear in mind, nothing has happened yet in our film—we just have a policeman standing against a wall. But the more mise-en-scene details we add, the more visual information we give to our audience, and the more precise our audience’s emotional response will be to the image we are showing them. But also bear in mind the difference between written prose and filmed image. As readers, you have just been presented with all of these details in verbal form, so necessarily you’ve gotten the information sequentially. With a film image, we seem to see it all at once. Nothing is isolated the way things are in this written description. With film, we take in all the visual information quickly, and we do so without being aware that we’re taking it in. As it happens, studies of human perception have proven that we actually take in visual information sequentially as well, though a great deal more speedily than we do written information. Moreover, filmmakers find ways of directing our gaze to specific areas in the image by manipulating compositions, colors, areas of focus, and so on. By examining each of these aspects of cinema, film studies attempts to wake us up to what’s in front of us onscreen—to make us all more conscious of what we’re seeing and why.

To continue with our example of mise-en-scene: the man is handsome in a Brad Pitt sort of way. He’s a white guy. In his late thirties. But he’s got a black eye. And there’s a trace of blood on his lower lip.

So we’ve got a cop and a wall and some stage blood, and we film him with a motion picture or video camera. Nothing has happened by chance here; we, the filmmakers, have made a series of artistic decisions even before we have turned on the camera. Even if we happen to have just stumbled upon this good-looking cop with a black eye standing against a brick wall and bleeding from the mouth, it’s our decision not only to film him but to use that footage in our film. If we decide to use the footage, we have made an expressive statement with it. And we have done so with only one shot that’s maybe six seconds long. This is the power of mise-en-scene.

What’s our next shot? A body lying nearby? An empty street? Another cop? A giant slimy alien? All of these things are possible, and all of them are going to give our audience even more information about the first shot. Subsequent shots stand in relation to the first shot, and by the time you get to the tenth or twentieth or hundredth shot, the sheer amount of expressive information—the content of individual shots, and the relationships from shot to shot—is staggering. But we’re getting ahead of ourselves; this is the subject of chapter 4.


THE SHOT

By the way: what is a SHOT? A shot is the basic element of filmmaking—a piece of film run through the camera, exposed, and developed; an uninterrupted run of the camera; or an uninterrupted image on film. That’s it: you turn the camera on, you let it run, you turn it off, and the result—provided that you have remembered to put film in the camera—is a shot. It’s an unedited shot, but it’s a shot nonetheless. It’s the basic building block of the movies.

Despite the use of the word scene in the term mise-en-scene, mise-en-scene describes the content not only of a sequence of shots but of an individual shot. A shot is a unit of length or duration—a minimal unit of dramatic material; a scene is a longer unit usually consisting of several shots or more.

Even at the basic level of a single shot, mise-en-scene yields meaning. The first shot of an important character is itself important in this regard. Here’s an example: Imagine that you are going to film a murder movie, and you need to introduce your audience to a woman who is going to be killed later on in the film. What does the first shot of this woman look like? What does she look like? Because of the expressive importance of mise-en-scene, every detail matters. Every detail is a statement of meaning, whether you want it to be or not. (These are precisely the questions Alfred Hitchcock faced when he made his groundbreaking 1960 film, Psycho.) Is she pretty? What does that mean? What is she wearing? What does that mean? If she’s really attractive and wearing something skimpy—well, are you saying she deserves to be killed? What if she’s actually quite ugly—what are you saying there? Do you want your audience to like her or dislike her? It’s your choice—you’re the director. So what signals are you going to send to your audience to get that emotion across? Let’s say you’re going to put something on the wall behind her. And it’s . . . a big stuffed bird. No, it’s . . . a pair of Texas longhorns. No, it’s . . . a broken mirror. No, it’s a crucifix. Or maybe it’s just a big empty wall. Each of these props adds meaning to the shot, as does the absence of props and decorative elements.


This is why mise-en-scene is important: it tells us something above and beyond the event itself. Again: mise-en-scene is the totality of expressive content within the image. And every detail has a meaningful consequence.

Let’s say you’re filming a shot outdoors and a bird flies into the camera’s field of vision and out the other side. Suddenly, a completely accidental event is in your movie. Do you keep it? Do you use that shot, or do you film another one? Your film is going to be slightly different whichever TAKE you choose. (A take is a single recording of a shot. If the director doesn’t like something that occurs in Take 1, she may run the shot again by calling out “Take 2”—and again and again—“Take 22”—“Take 35”—“Take 59”—until she is ready to call “print!”) If you’re making the kind of film in which everything is formally strict and controlled, then you probably don’t want the bird. If, however, you’re trying to capture a kind of random and unpredictable quality, then your little bird accident is perfect. When film students discuss your work, they’ll be talking about the bird—the significance of random events of nature, perhaps even the symbolism of flight. That bird is now part of your film’s mise-en-scene, and it’s expressing something—whether you want it to or not. Whether critics or audiences at the multiplex specifically notice it or not, it’s there. It’s a part of the art work. It’s in the film, and therefore it has expressive meaning.

Here’s an example from a real film called Gentlemen Prefer Blondes, a 1953 musical comedy starring Marilyn Monroe and Jane Russell. There’s a scene in which Jane Russell performs a musical number with a crew of athletes on the American Olympic team. The number was supposed to end with a couple of the muscle boys diving over Jane Russell’s shoulders as she sits by the side of a swimming pool. As it turned out, however, one of the actors accidentally kicked her in the head as he attempted to dive over her into the pool. With the camera still running, the film’s glamorous star got knocked violently into the water and came up looking like the proverbial drowned rat. It was obviously an accident. But the director, Howard Hawks, decided to use that take instead of any of the accident-free retakes he and his choreographer subsequently filmed. Something about the accident appealed to Hawks’s sensibility: it expressed something visually about sex and sex roles and gender and animosity and the failure of romance. There’s a sudden and shocking shift in mise-en-scene, as Jane Russell goes from being the classically made-up Hollywood movie star in a carefully composed shot to being dunked in a pool and coming up sputtering for air, her hair all matted down, and improvising the end of the song. Hawks liked that version better; it said what he wanted to say, even though it happened entirely by chance. The shot, initially a mistake, took on expressive meaning through its inclusion in the film.

SUBJECT-CAMERA DISTANCE—WHY IT MATTERS

At the end of Billy Wilder’s Sunset Boulevard (1950), an aging star turns to her director and utters the famous line, “I’m ready for my close-up.” But what exactly is a close-up? Or a long shot? And why do these terms matter?

One way directors have of providing expressive shading to each shot they film is to vary the distance between the camera and the subject being filmed. Every rule has its exceptions, of course, but in general, the closer the camera is to the subject, the more emotional weight the subject gains. (To be more precise, it’s really a matter of how close the camera’s lens makes the subject seem to be; this is because a camera’s lens may bring the subject closer optically even when the camera is physically far away from the subject. See the glossary’s definition of TELEPHOTO LENS for clarification.) If we see an empty living room and hear the sound of a telephone ringing on the soundtrack but we can’t immediately find the telephone onscreen, the call may seem relatively unimportant. But if the director quickly cuts to a CLOSE-UP of the telephone, suddenly the phone call assumes great significance. Because the director has moved the camera close to it, the phone—once lost in the living room set—becomes not only isolated within the room but enormous on the screen.

A close-up is a shot that isolates an object in the image, making it appear relatively large. A close-up of a human being is generally of that person’s face. An extreme close-up might be of the person’s eyes—or mouth—or nose—or any element isolated at very close range in the image.

Other subject-camera-distance terms are also simple and self-explanatory. A MEDIUM SHOT appears to be taken from a medium distance; in terms of the human body, it’s from the waist up. A THREE-QUARTER SHOT takes in the human body from just below the knees; a FULL SHOT is of the entire human body. A LONG SHOT appears to be taken from a long distance. Remember: lenses are able to create the illusion of distance or closeness. A director could conceivably use a telephoto lens on a camera that is rather distant from the subject and still create a close-up. The actual physical position of the camera at the time of the filming isn’t the issue—it’s what the image looks like onscreen that matters. The critical task is not to try to determine where the camera was actually placed during filming, or whether a telephoto lens was used to create the shot, but rather to begin to notice the expressive results of subject-camera distance onscreen.

There are gradations. You can have medium close-ups, taken from the chest up, and extreme long shots, which show the object or person at a vast distance, surrounded by a great expanse of space. If, at the end of a western, the final shot of the film is an extreme long shot of an outlaw riding off alone into the desert, the director may be using the shot to convey the character’s isolation from civilization, his solitude; we would see him in the far distance surrounded by miles of empty desert. Imagine how different we would feel about this character if, instead of seeing him in extreme long shot, we saw his weather-beaten face in close-up as the final image of the film. We would be emotionally as well as physically closer to him at that moment because we would be able to read into his face the emotions he was feeling. His subtlest expressions—a slightly raised eyebrow, a tensing of the mouth—would fill the screen.


FIGURE 1.1 Extreme close-up: a single eye dominates the image.

FIGURE 1.2 Close-up: the character’s face fills most of the screen.

FIGURE 1.3 Medium shot: the character appears from the waist up.

FIGURE 1.4 Long shot: because the camera has moved back even further, the character now appears in a complete spatial context.


FIGURE 1.5 Extreme long shot: the camera is now very far away from the character, thereby dwarfing him onscreen. What are the emotionally expressive qualities of each of these illustrations (figs. 1.1 through 1.5)?

Here’s a final observation on subject-camera distance: Each film establishes its own shot scale, just as each filmmaker establishes his or her own style. Whereas Orson Welles in Citizen Kane (1941) employs an extreme close-up of Kane’s lips as he says the key word, “Rosebud,” Howard Hawks would never push his camera so close to a character’s mouth and isolate it in that way. The Danish director Carl Theodor Dreyer shot his masterpiece The Passion of Joan of Arc (1928) almost entirely in close-ups; as a result, what would be a long shot for Dreyer might be a medium shot for John Ford or Billy Wilder. If we begin with the idea that the human body is generally the measure for subject-camera distance, then the concept’s relativity becomes clear: a close-up is only a close-up in relation to something else—the whole body, for example. The same holds true for objects and landscape elements. In short, we must appreciate the fact that subject-camera distances are relative both within individual films—the sequence in Citizen Kane that includes the extreme close-up of Kane uttering “Rosebud” begins with an equally extreme long shot of his mansion—and from film to film: Dreyer’s close-ups differ in scale from those used by Ford or Wilder.

CAMERA ANGLE

In addition to subject-camera distance, directors employ different camera angles to provide expressive content to the subjects they film. When directors simply want to film a person or room or landscape from an angle that seems unobtrusive and normal (whatever the word normal actually means), they place the camera at the level of an adult’s eyes, which is to say five or six feet off the ground when the characters are standing, lower when they are seated. This, not surprisingly, is called an EYE-LEVEL SHOT.

When the director shoots his or her subjects from below, the result is a LOW-ANGLE SHOT; with a low-angle shot, the camera is in effect looking up at the subject. And when he or she shoots the subject from above, the result is a HIGH-ANGLE SHOT; the camera is looking down. An extreme overhead shot, taken seemingly from the sky or ceiling and looking straight down on the subject, is known as a BIRD’S-EYE VIEW.

The terms close-up, low-angle shot, extreme long shot, and others assume that the camera is facing the subject squarely, and for the most part shots in feature films are indeed taken straight-on. But a camera can also be canted laterally on its axis. When the camera is tilted off its normal horizontal and vertical axes, the resulting off-kilter shot is called a DUTCH TILT or a canted angle.

FIGURE 1.6 Eye-level shot: the camera places us at the character’s height—we’re equals.


FIGURE 1.7 Low-angle shot: we’re looking up at her; low-angle shots sometimes aggrandize the shot’s subject.

FIGURE 1.8 High-angle shot: we’re looking down at her now; this type of shot may suggest a certain superiority over a character.

FIGURE 1.9 Bird’s-eye shot: this shot is taken from the highest possible angle. What might be the expressive consequences of this shot?


FIGURE 1.10 Dutch tilt (or canted angle) shot: the camera is not on its normal horizontal or vertical axes, and the resulting image is off-kilter; Dutch tilts are sometimes used to suggest a character’s unbalanced mental state.

Of everything you read in this book, the opposite also may be true at times, since every attempt to define a phenomenon necessarily reduces it by ignoring some of the quirks that make films continually interesting. There’s a fine line to tread between providing a useful basic definition that you want and need and alerting you to complications or outright contradictions that qualify the definition. This is certainly true with any discussion of the expressive tendencies of low-angle and high-angle shots. Typically, directors use low-angle shots to aggrandize their subjects. After all, “to look up to someone” means that you admire that person. And high-angle shots, because they look down on the subject, are often used to subtly criticize the subject by making him or her seem slightly diminished, or to distance an audience emotionally from the character. At times, a camera angle can in fact distort the object onscreen. By foreshortening an object, for example, a very high angle shot does make an object or person appear smaller, while a very low angle can do the opposite. But these are just broad tendencies, and as always, the effect of a particular camera angle depends on the context in which it appears. Film scholars can point to hundreds of examples in classical cinema in which a high- or low-angle shot produces an unexpected effect. In Citizen Kane, for instance, Welles chooses to film his central character in a low-angle shot at precisely the moment of his greatest humiliation, and a technical device that is often employed to signal admiration achieves exactly the opposite effect by making Kane look clumsy and too big for his surroundings, and therefore more pitiable and pathetic.

FIGURE 1.11 Two-shot: the definition is self-explanatory, but note the equalizing quality of this type of shot; these two characters have the same visual weight in a single shot.

FIGURE 1.12 Three-shot: the two-shot’s socially balanced quality expands to include a third person, but note the greater subject-camera distance that goes along with it in this example.


FIGURE 1.13 Master shot: the whole set—in this case, a dining room—and all the characters are taken in by this type of shot.

Shots can also be defined by the number of people in the image. Were a director to call for a close-up of his protagonist, the assumption would be that a single face would dominate the screen. When a director sets up a TWO-SHOT, he or she creates a shot in which two people appear, generally in medium distance or closer, though of course there can be two-shots of a couple or other type of pair walking that would reveal more of their lower bodies. The point is that two-shots are dominated spatially by two people, making them ideal for conversations.

A THREE-SHOT, of course, contains three people—not three people surrounded by a crowd, but three people who are framed in such a way as to constitute a distinct group.

Finally, a MASTER SHOT is a shot taken from a long distance that includes as much of the set or location as possible as well as all the characters in the scene. For example, a scene set in a dining room could be filmed in master shot if the camera was placed so that it captured the whole dining table, at least two of the four walls, all of the people sitting around the table, and maybe the bottom of a chandelier hanging over the table. The director could run the entire scene from beginning to end and, later, intercut close-ups, two-shots, and three-shots for visual variation and dramatic emphasis.

SPACE AND TIME ON FILM

Like dance and theater, film is an art of both space and time. Choreographers move their dancers around a stage for a given amount of time, and so do theater directors with their actors. But a dance can run slower or faster some nights, especially if it isn’t connected to a piece of music. And if the actors in a play skip some of their lines or even talk faster than usual in a given performance, the play can run shorter some nights than others.

But a 110-minute film will be a 110-minute film every time it is screened, whether on the silver screen at a multiplex or on a standard-speed DVD player in your living room. This is because sound film runs at a standard 24 frames per second, and it does so not only through the camera when each shot is individually filmed but also through the projector when it is played in a theater. In the early days of cinema, camera operators cranked the film through their cameras by hand at a speed hovering between 16 and 18 frames per second. If camera operators wanted to speed actions up onscreen, they would UNDERCRANK, or crank slower: fewer frames would be filmed per second, so when that footage was run through a standard projector at a standard speed, the action would appear to speed up. If they wanted to create a slow-motion effect, they would do the opposite: they would OVERCRANK, or crank faster, causing the projector to slow the movement down when the shot was projected. In short, undercranking produces fast motion, while overcranking produces slow motion.
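If it helps to see the arithmetic, here is a minimal sketch in Python (not from Sikov’s text) of the cranking principle just described; the function name and the 48 frames-per-second overcranking figure are illustrative assumptions, while the 16 and 24 frames-per-second figures come from the paragraph above.

    def apparent_speed(capture_fps, projection_fps=24.0):
        """How many times faster (or slower) than life the action appears onscreen."""
        # A hypothetical helper: apparent speed is projection rate divided by capture rate.
        return projection_fps / capture_fps

    print(apparent_speed(16))   # 1.5 -- undercranked footage looks 50 percent faster than life
    print(apparent_speed(48))   # 0.5 -- overcranked footage (assumed rate) plays back in slow motion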

The introduction of SYNCHRONIZED SOUND FILM—characters being seen and heard speaking at the same time onscreen—in the late 1920s meant that the IMAGE TRACKS and the SOUNDTRACKS had to be both recorded and projected at the same speed so as to avoid distortion. 24 frames per second was the standard speed that the industry chose. You’ll learn more about sound technology in chapter 5. And although videotape—unlike film’s celluloid—is not divided into individual frames, the same principle applies: video’s electromagnetic tape is recorded at the same speed at which it is transmitted and screened. A 60-minute video will always run 60 minutes—no more, no less.

There is a philosophical point to film’s technical apprehension of time. Unlike any other art form, motion pictures capture a seemingly exact sense of real time passing. As the great Hollywood actor James Stewart once described it, motion pictures are like “pieces of time.” Then again, a distinction must be made between real time, the kind measured by clocks, and reel time—the pieces of time that, for example, Spike Lee manipulated by editing to create Malcolm X, a film that covers the central events of a 39-year-old man’s life in 202 minutes.

One familiar complication, of course, is that when films are shown on television they are often LEXICONNED to fit into a time slot and squeeze in more commercials. Lexiconning involves speeding up the standard 24 frames per second by a matter of hundredths of a frame per second, which may shorten the film by as much as 6 or 7 percent of its total running time. Also note the familiar warning that accompanies movies on TV: “Viewer discretion is advised. The following film has been modified from its original version. It has been formatted to fit this screen and edited to run in the time slot allotted and for content.” People who love films hate this Procrustean process. (Procrustes was a mythical king who had a bed to which he strapped and tortured his victims. Those who were too short for the bed were stretched to fit it, and those too tall had their heads and legs chopped off.) Would an art gallery trim the top, bottom, and sides of a painting just so it would fit into a preexisting frame?
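To make the effect on running time concrete, here is a small illustrative calculation in Python (a sketch, not anything from the book); the 25.6 frames-per-second playback rate is an assumed figure, chosen only to produce a cut of roughly 6 percent.

    def lexiconned_runtime(minutes, playback_fps, standard_fps=24.0):
        """Running time, in minutes, when a film shot at the standard rate is played back faster."""
        # A hypothetical helper: the duration shrinks in proportion to the speed-up.
        return minutes * (standard_fps / playback_fps)

    print(round(lexiconned_runtime(110, 25.6), 1))  # about 103.1 minutes, roughly 6 percent shorter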


FIGURE 1.14 A strip of celluloid, divided by frames, with the soundtrack running vertically down the left alongside the image frames.

COMPOSITION

One confusing aspect of film studies terminology is that the word FRAME has two distinct meanings. The first, described above, refers to each individual rectangle on which a single image is photographed as the strip of celluloid runs through a projector. That’s what we’re talking about when we say that film is recorded and projected at 24 frames per second: 24 of those little rectangles are first filled with photographic images when they are exposed to light through a lens, and then these frames are projected at the same speed onto a screen.

But the word frame also describes the borders of the image onscreen—the rectangular frame of darkness on the screen that defines the edge of the image the way a picture frame defines a framed painting or photograph. Sometimes, in theaters, the screen’s frame will be further defined by curtains or other masking. Your television set’s frame is the metal or plastic edge that surrounds the glass screen. In fact, you can make three-quarters of a frame as you sit reading this book simply by holding your hands in front of you, palms out, and bringing your thumbs together. The top of this handmade frame is open, but you can get a good sense of why the frame is an important artistic concept in the cinema just by looking around your room and framing various objects or even yourself in a mirror.


Note that your literally handmade frame is more or less a square if you keep your thumbs together. Now create a wider rectangle by touching your right forefinger to your left thumb and vice versa. See how this framing changes the way the room looks. And be aware of the subject-camera distance and camera angle of the imaginary shots you create. Ask yourself why certain “shots” look better than others. Do you find that you have a taste for oblique-angle close-ups, for example, or do you see the world more at eye level?

The precise arrangement of objects and characters within the frame—the picture-frame kind of frame—is called COMPOSITION. Each time you moved your handmade frame, you created a new composition, even if you didn’t move any objects around on your desk or ask your roommate to move further away.

As in painting, composition is a crucial element of filmmaking. In fact, composition is a painterly term. (Few if any art critics ever refer to the mise-en-scene of a painting.) Composition means the relationship of lines, volumes, masses, and shapes at a single instant in a representation. Composition is relatively static, though few elements remain truly motionless in a motion picture; mise-en-scene is more dynamic. Mise-en-scene is the relation of everything in the shot to everything else in the shot over the course of the shot, though sometimes film critics can extend their discussion of compositional consistency to individual spaces represented in the film and even over the course of the entire film. One could write a great paper on any of these diverse mise-en-scenes, paying particular attention to their compositional elements: the courtyard in Hitchcock’s Rear Window (1954), the bar’s basement in David Fincher’s Fight Club (1999), Rick’s Café in Michael Curtiz’s Casablanca (1942), or Sal’s Pizzeria in Spike Lee’s Do the Right Thing (1989).

Like a painter’s, a director’s particular arrangement of shapes, masses, vectors, characters’ bodies, textures, lighting, and so on within each film image is one of the cornerstones of his or her cinematic style. Think again of the bird that flew into the hypothetical shot described above. That example was not only an instance of meaning being produced unintentionally; it was an instance of compositional change as well. Here’s a related example: If a director had taken several hours to set up a landscape shot with an eye toward a strict, static composition—a western butte on the left seen at sunset with a flock of sheep standing more or less still at closer distance on the right, and a ranch hand on a horse in near distance at more or less precisely the center of the image—and suddenly one of the sheep bolted away from the herd and went running across the camera’s field of vision, that director may insist on a RETAKE with the errant sheep safely put away in a faraway pen. Why? Because he considered his composition ruined. Then again, another director might use the take with the running sheep because she might see its sudden, rapid, lateral movement across the screen as a beneficial if accidental addition to her composition.

Adding to the problems of cinematic composition is the fact that motion pictures are (clearly) all about motion, so to a certain extent almost every composition is fluid: people move, the wind blows things around, cars speed by, and the camera itself may move. Moreover, as you will learn in chapter 4, shots are connected to other shots in a process called EDITING, and the composition of one shot ought to have something to do not only with the shot that follows it but with the shot that precedes it.

One final concept in this introductory chapter: the shape of the image. Conceivably, movies could be round, couldn’t they? Indeed, Thomas Edison’s first films were round. Obviously films aren’t round anymore.1 They take the form of rectangles of various widths. The term ASPECT RATIO describes the precise relation of the width of the rectangular image to its height. Historically, aspect ratios are problematic. The silent aspect ratio was actually 1.33:1, a slightly wide rectangle, the width of the image being one and one-third the size of its height. Making matters more confusing, the film industry standard—the so-called ACADEMY RATIO (named after the Academy of Motion Picture Arts and Sciences, the group that gives the Oscars, and that instituted the standard ratio in 1932)—is often referred to as being 1.33:1, but in actual fact the Academy ratio is 1.37:1—a very slightly wider rectangle than that of silent films.

All Hollywood films after 1932 were made with the standard Academy aspect ratio of 1.37:1—that is, until the 1950s, when various widescreen technologies were developed as a way of competing with television. But again, that’s the subject of a later chapter.
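Because an aspect ratio is simply the width of the image divided by its height, a short Python sketch (not from the book) can make the numbers concrete; the 10-foot screen height is an assumed figure used only for illustration.

    def aspect_ratio(width, height):
        """Width-to-height ratio of an image or screen."""
        return width / height

    print(round(aspect_ratio(4, 3), 2))      # 1.33 -- the silent-era frame: four units wide, three high
    print(round(aspect_ratio(13.7, 10), 2))  # 1.37 -- an Academy-ratio image 10 feet tall is about 13.7 feet wide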

STUDY GUIDE: ANALYZING THE SHOT

You will learn through the course of reading this book that film is a complicated art form with many technical and expressive aspects, and one of the key problems in analyzing motion pictures is that their images are in fact in motion. So to simplify things here at the beginning of the course, try the following exercise:

Get a videotape or DVD of a feature film from any period in film history. In fact, if possible, get one you’ve already seen and enjoyed. Fast forward to any point you choose, and then freeze-frame the image.

You are now looking at a single frame of a single shot. What do you notice about its mise-en-scene? Properly speaking, since this is a static image, a single frame, you are being asked to notice elements of its composition rather than the totality of expressive content in an entire shot. Remember what mise-en-scene means: all of the elements placed in front of the camera to be photographed: settings, props, lighting, costumes, makeup, and figure behavior (meaning actors, their gestures, and their facial expressions). And composition: the relationship of lines, volumes, masses, and shapes at a single instant. Composition is relatively static; mise-en-scene is dynamic.

Your assignment is to notice the various compositional elements in the image. Write them down in the form of a list, and be as descriptive as possible. (Instead of saying simply “Julia Roberts,” for instance, describe in detail what Julia Roberts looks like—the color of her hair, the color and style of her costume, and so on.) Describe the room or the landscape in terms of its colors. How well lit is the room or outdoors space? Is it day, night, dusk, or dawn? What kind of furniture is in the room, or what landscape elements are in the image?

Is the shot taken at eye level or low angle? Is it a close-up or a long shot? Is there anything you notice about the composition?

Put all of your observations into words, and be as clear as possible. Here is an example, drawn from Fight Club (David Fincher, 1999—Chapter 9, minute 21:54):

Close-up, eye-level
Man, about 30 years old, blandly handsome
Dark hair
Top of gray suit jacket
White collar of dress shirt
Man is centered on the screen
Top part of head cut out of image
Airplane interior
Blue seat with white headrest
Man in focus; background out of focus
Blue curtains center-left of image in background
Bright curved windows on right in background
Aspect ratio—wide rectangle
Man in row behind, out of focus—no other people
Light on forehead and nose of man in close-up
Eyes in shadow
Dark circles under eyes
He is staring straight ahead

FIGURE 1.15 The unnamed narrator (Edward Norton) of Fight Club (1999) (frame enlargement).

WRITING ABOUT THE IMAGE

The first step in writing about film is to translate the content of film images into words using the new technical vocabulary you are learning. So your first writing assignment is a simple one: take the detailed description of the shot you created above and turn it into a coherent paragraph. Don’t worry about forming a thesis statement or making any sort of argument. Forget about assigning meanings to what you see onscreen or discussing the symbolism of anything. Concentrate instead on creating a single paragraph of prose that succeeds in translating an image into words. Spell-check your work when you are finished. If your word-processing application’s dictionary does not contain some of the technical terms you have used, add them (after consulting the glossary at the back of this book to make sure you have spelled them correctly to begin with).

Here’s an example using the above list of compositional elements from the Fight Club image:

The image is a close-up of a blandly handsome man who appears to be about thirty years old. He has dark hair with a conservative, businessman-type haircut. We can see the shoulders of his gray, conservative suit jacket and the white collar of his dress shirt. The man is centered on the screen; the very top part of his head is cut out by the frame. The image shows the interior of an airplane. The man is seated on a blue seat with his face framed by a white strip of material that serves as a headrest. The man is in crisp focus, but the background is out of focus. Still, we can clearly see some blue curtains in the center-left of the image, with some bright curved airplane windows on the far right in the background. The curtains match the blue of the seat; the windows, appearing white, match the headrest. The aspect ratio is that of a fairly wide rectangle. There is another man in the image—he is seated in the row behind the man in close-up—but he is the only other person in the image. The man in close-up has a bright light shining on his forehead and nose, but his eyes are notably in shadow, although we can clearly see dark circles under his eyes, indicating tiredness and a lack of sleep. The man is staring straight ahead.

1 One exception is the IMAX Dome or OMNIMAX system, which projects a rounded (but not circular) image on a tilted dome.


CHAPTER 2 MISE-EN-SCENE: CAMERA MOVEMENT

MOBILE FRAMING

Motion pictures share a number of formal elements with other arts. The shape of a particular painting is essentially its aspect ratio—the ratio of width to height of the image—and the composition and lighting effects created by the painter play a central role in that painting’s meaning, as does the distance between the artist and his or her subject. (A portrait might be the equivalent of a close-up; a landscape is usually a long shot or an extreme long shot.) The term mise-en-scene is derived from the theater: the arrangement and appearance of a play’s sets and props, its characters’ gestures and dialogue and costumes, the STORY and PLOT—all come together toward an expressive goal, just as in motion pictures. Novels, too, have stories and plots that can (and should) be analyzed for meaning.

Film offers something unique: MOBILE FRAMING. In the first chapter of Film Studies, we made an assumption that turns out to be false: that the camera is static. All the definitions and examples implied that characters and objects move within the frame, but the framing stays the same within each shot. In fact, this is not the case at all. The camera can move from side to side, up and down, backward and forward, all of the above, and more. Editing changes the position of the spectator from shot to shot or scene to scene, but camera movement shifts the spectator’s position within the shot.

No other art form is able to accomplish this feat. In painting, Cubism plays with the idea of expressing multiple perspectives of a single subject, but Cubist paintings inevitably and necessarily have an immobile frame owing to the nature of painting as an art form.


Similarly, one can walk around a sculpture, but the sculpture remains on its pedestal. A particularly dynamic sculpture may suggest movement, and in fact some sculptures have motors that make parts of them move, but they still remain essentially in place. A rotating stage may shift from one scene to another in the theater, but the audience does not itself experience the sensation of movement.

Film and video are different. Films offer shifting positions and perspectives. Shots aren’t limited in terms of subject-camera distance or angle of view. A single shot may begin from a position so high off the ground no human being could achieve it unaided by a machine or a structure and proceed to lower itself to the level of a person, travel on the ground for a while, look around, follow a certain character, change direction and follow another character for a while, or maybe follow no particular character at all and go out on its own, thereby revealing a sense of spatial coherence and expressive fluidity that no static shot could ever achieve. Camera movement is an especially significant aspect of mise-en-scene.

TYPES OF CAMERA MOVEMENT

How does film studies describe various kinds of camera movements? First, when the camera itself is stationary but pivots on its axis from side to side, it’s called a PAN. If the camera is stationary but tilts up and down, it’s called a tilt (or a VERTICAL PAN). Both of these camera movements are like moving your head but not your body: you can take in a whole panorama without taking a single step simply by turning your head from side to side (a pan) or nodding up and down (a tilt). By panning and tilting, the camera reveals more space without itself moving from its fixed position on the ground—which is to say on a tripod or other supporting device. You can create the effect of a pan and a tilt right now simply by moving your head.

As you can see, you can take in large expanses of the room you’re in without getting up from your seat. But you’re still grounded; you’re stuck in the same place. But just as you can get up and walk around, the camera itself can move. Camera movement is one of the most
beautiful and yet underappreciated effects in any art form. However much we take it for granted, movement through space on film can be extraordinarily graceful. And by its movement alone, a camera reveals much more than simply the space through which it moves. It can express emotions.

The simplest way of moving a camera is to place it on a moving object, such as a car or a train or a ship. That’s called a MOVING SHOT. The camera can also be placed on its own mobile device. When the camera moves parallel to the ground, it’s called a TRACKING SHOT or a DOLLY. If it moves up and down through space it’s called a CRANE. For a crane shot, the camera is mounted on a kind of cherry-picker, which enables it to rise very high up in the air—to ascend from ground level into the sky or descend from the sky to ground level.

With both of these devices, tracking shots and cranes, the camera moves physically through space. In classical Hollywood filmmaking, crews used to mount actual tracks on the ceiling or the floor, thus ensuring that the camera would move in a very smooth and precise fashion (hence the term tracking shot). Actors being filmed in tracking shots would therefore sometimes have to play their scenes squarely on the tracks, and when they walked they had to make sure to lift their legs high enough to clear the railroad ties that held the tracks in place. More often, cameras were—and still are—mounted on wheels, or dollies, thus enabling them to move freely in a variety of directions: forward and backward, sideward, diagonally, or around in a circle.

In the 1960s, technology developed to the point at which the size and weight of a motion picture camera, which had formerly been large and cumbersome, were reduced so much that a camera operator could actually carry the camera while filming. These are called hand-held cameras, which create HAND-HELD SHOTS. In any number of ’60s (and later) films, directors used hand-held shots as a convention of realism—the jerkiness of hand-held shots seemed to suggest an unmediated reality, a lack of intervention between camera and subject. Audiences still tend to read hand-held shots that way: witness The Blair Witch Project (1999), which depends on the shakiness of the camera work to convey the homemade quality of the filmmaker-characters’ attempt to document the supernatural. In fact, of course, a hand-held shot isn’t
any more “realistic” than any other kind of shot. It is a stylistic convention—a visual sign that people still read as expressing heightened realism.

In a still later development, cameras can now be mounted on an apparatus called a STEADICAM, which fits onto a camera operator’s body (via a vest) in such a way that when he or she walks, the effect is that of very smooth movement, as opposed to a hand-held camera that records every bump in every step.

Finally, there’s a kind of fake movement, an impression of movement that isn’t really the result of a moving camera but rather of a particular kind of lens. That’s called a ZOOM. With a zoom, the camera operator creates the impression of movement by shifting the focal length of the lens from wide angle to telephoto or from telephoto to wide angle, but the camera itself does not move. Zoom lenses are also known as varifocal lenses. A zoom is therefore a kind of artificial movement. There is no real movement with a zoom, just an enlargement or magnification of the image as the lens shifts from wide-angle to telephoto or the opposite, a demagnification, as it shifts from telephoto to wide-angle.
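Since a zoom is nothing more than a change in the lens's focal length, its effect can be put in rough numbers: for a faraway subject, the apparent size of the subject on screen grows in proportion to the focal length, while the camera stays put. The sketch below is illustrative only; the focal lengths are invented for the example.

```python
# Illustrative sketch: how much a zoom magnifies the subject without any camera movement.
# The focal lengths below are hypothetical values chosen for the example.
wide_end_mm = 25.0        # wide-angle end of a hypothetical zoom lens
telephoto_end_mm = 100.0  # telephoto end of the same lens

# For a distant subject, image size is roughly proportional to focal length.
magnification = telephoto_end_mm / wide_end_mm
print(f"Zooming from {wide_end_mm:.0f}mm to {telephoto_end_mm:.0f}mm makes the subject "
      f"appear about {magnification:.0f} times larger, yet the camera never moves.")
```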

In other words, a zoom has two extremes—telephoto and wide angle. The telephoto range tends to make space seem flatter, while the wide-angle range (like any WIDE-ANGLE LENS) enhances the sense of depth.

Please note: when you say or write “zoom,” you should specifically mean “zoom.” Be careful not to describe a shot by saying or writing “the director zooms forward” unless you are convinced that the director actually used a zoom lens to achieve the impression of camera movement. Granted, it can be difficult for beginners to appreciate the difference in appearance between a tracking shot and a zoom. One way of differentiating between the two is that a forward tracking shot actually penetrates space whereas a zoom forward (or zoom in) has a certain flatness to it—an increasing lack of depth owing to the shift from the wide-angle range to the telephoto range.

One way of understanding the difference in visual effect between a tracking shot and a zoom is to realize that film creates the illusion of a three-dimensional world—height, width, and depth—on a
two-dimensional screen. We’re usually fooled into perceiving depth where there is none. A forward tracking shot enhances this illusion of depth; the camera passes through space as it moves forward, and the resulting image re-creates that spatial penetration. A forward zoom, in contrast, does nothing to alleviate the screen’s actual flatness. The camera doesn’t move with a forward zoom, so we perceive the resulting image as being seemingly flatter than usual. In fact, the image is always flat. Forward zooms just do nothing to make us think it isn’t. (In a zoom out—a zoom that begins in the telephoto range and ends in wide-angle—the flatness of the telephoto gives way to the sense of depth created by the wide-angle.)

Finally, filmmakers and film scholars alike make a distinction between MOTIVATED AND UNMOTIVATED CAMERA MOVEMENTS. It’s the film’s characters who determine whether the movement is motivated or not. For example, if a character begins to walk to the left and the camera tracks with her, the camera movement is considered to be motivated. If the character stands perfectly still but the camera tracks forward toward her, it’s unmotivated. This is a useful distinction to the extent that it defines the characters’ world as being separate and distinct from the filmmaker’s commentary on that world. Motivated camera movements are those that are prompted by the characters and events in the film; unmotivated camera movements are those that pertain to the filmmaker’s commentary on characters and events. At the same time, the term unmotivated is a poor choice of words to describe a filmmaker’s expressive, artistic choices. There’s a motive there, after all. It’s just that of the director, not that of a character.

EDITING WITHIN THE SHOT

No matter whether a given camera movement is called motivated or unmotivated, all camera movement, like all editing, is a matter of human decision-making. In fact, an extended camera movement may function in much the same way as editing. They are each a way of selecting, arranging, and presenting information in a sequential manner to the audience.


Imagine a film that begins with a crane shot of a movie marquee that contains the name of the film’s location. Let’s say it’s the Reseda Theater in Reseda, California. Without cutting, the camera pans left and cranes down to street level just as a large car pulls up at a nightclub across the street; using a Steadicam, the camera operator continues the shot by following the driver of the car and his girlfriend as they get out of the car and are greeted by the nightclub manager, who follows them inside the nightclub, where—still in the same shot— the man and woman are led to a booth, where they sit and place an order for drinks. The shot continues even further as the camera operator follows the nightclub manager as he says hello to a club-goer wearing an out-of-style western shirt, then returns to the couple at the booth just as a woman on roller skates appears and engages them in a brief conversation. This lengthy camera movement is neither solely unmotivated nor solely motivated; it contains elements of both.

Film buffs will recognize this as the opening shot of Paul Thomas Anderson’s Boogie Nights (1997). But even if you have not seen the film, you can appreciate the degree of planning and skill required in creating a shot of this extraordinary duration. A single actor flubbing a line or sneezing would have ruined the take, as would an EXTRA—an actor who has no lines in a crowd scene—bumping into the Steadicam operator. Notice also the amount of selection involved in executing the shot. First we see the marquee; then we see the car; then we see the couple in the car; then we see the nightclub manager . . . It’s a kind of editing within the shot—an arrangement and sequential presentation of discrete pieces of information within a single shot.

The first shot of Boogie Nights—and any such shot—is called a LONG TAKE, meaning that the shot continues without a cut for an unusually long time. The director of Boogie Nights could easily have carved up his opening sequence into individual shots—of the marquee, the car, the driver, his girlfriend, the nightclub manager, and so on—but he chose to unify both space and time by filming it in one continuous take—a long take. (If you have seen the film, or when you see the film, ask yourself what Anderson’s long take expresses in terms of the overall theme of his movie.) This particular long take lasts for almost three minutes. Another famous long take, the opening shot
of Orson Welles’s Touch of Evil (1958), lasts for about four minutes. But it’s important to note that long takes are like subject-camera distances in that they are defined relatively, so in an otherwise highly cut movie a shot lasting thirty or forty seconds could be considered a long take in the context of that particular film.

A single shot may serve, somewhat paradoxically, as its own sequence or scene; the term for this is a sequence shot. The opening of Hawks’s Scarface (1932) is a classic example of a sequence shot; the shot chronicles the last minutes in the life of a mobster. Hawks begins with a low-angle shot of a streetlamp atop a street sign; the names of the streets are set perpendicular to each other to form the first of the many X shapes that appear throughout the film whenever anybody is about to get rubbed out. The camera tracks back as the light dims and goes out, tilts down, and pans right past a milk delivery man to reveal a man with an apron coming out of a private club doorway and yawning and stretching. The camera then tracks laterally right—seemingly through the exterior wall of the club—through the lobby, and into the ballroom, where the aproned man begins to clean up after what has evidently been a wild party. He removes streamers from the many potted palms that define the foreground as the camera continues to track and pan right. The man stops sweeping for a moment, and as the camera tracks forward, he reaches down and pulls a white brassiere out of the pile of streamers that litters the floor. (It certainly was a wild party.) As the man examines the bra, Hawks continues to track right and forward to reveal three men seated at a table set amid streamers hanging from the rafters; the man in the middle, a portly fellow called Big Louie, is wearing a paper party crown. The camera remains stationary as the three men converse for a few moments, after which the men get up from the table; Hawks tracks left with them as they move in that direction. After the two other men depart offscreen left, the camera remains on Big Louie, who walks toward the right; the camera tracks with him as he moves through the ballroom and into a telephone booth. He begins to place his call, and the camera stays motionless for a few seconds before tracking forward and then to the right to reveal the ominous silhouette of a man, who strolls in the direction of Big Louie; the camera tracks
left with him as he walks; he is calmly whistling an opera aria. The camera stops on a frosted glass partition; the shadow of the man is framed by the partition as the man reaches into his pocket and pulls out a gun. “Hello, Louie,” the man remarks before firing three times. Still in shadow, he wipes the gun off with a handkerchief and throws it on the floor. The man turns and leaves as Hawks tracks and pans left to reveal the body of Big Louie on the floor. The shot is still not over: the aproned man from the beginning of the sequence shot enters the image from the left. He stares at Louie’s dead body, removes his apron, throws it in a closet, dons his hat and jacket, and runs left toward the door as Hawks completes the shot by tracking and panning left and slightly forward before ending with a DISSOLVE to the next scene. The shot is three and a half minutes long.

What makes this a sequence shot is that the single shot comprises the entire scene. The next shot takes place in an entirely different setting—that of a newspaper office, where editors debate the content of the headline announcing the killing. The lengthy opening shot of Boogie Nights, in contrast, does not contain the entire scene, which continues with more shots of the nightclub interior.

SPACE AND MOVEMENT

We are accustomed to thinking only about the content of each film image we see—the material actually onscreen. But if mise-en-scene, editing, and camera movement are all matters of decision-making, of selection, then it stands to reason that the information a director leaves out of the image is worth considering as well.

The film theorist Noël Burch has defined six zones of offscreen space:

1. offscreen right
2. offscreen left
3. offscreen top
4. offscreen bottom
5. behind the set
6. behind the camera

Imagine a medium shot of a woman, an aging actress, seated at a banquet table. We see her face and upper body; we see part of the table in front of her; we see an empty glass on the table. She reaches for something offscreen right, and when she brings her hand back into the image she is grasping a liquor bottle. She pours a few slugs of booze into the empty glass. Then, a hand enters the image (also from offscreen right); in the hand is a bottle of water. The actress bats the hand away before the otherwise unseen tablemate gets the opportunity to pour any water into the actress’s liquor glass. The actress is casually but clearly refusing to have her drink watered down, and this action—together with the subtle smirk on the actress’s face—establishes her character with great expressive efficiency.

This shot—which introduces Bette Davis’s character in Joseph Mankiewicz’s All About Eve (1950)—emphasizes the first of Burch’s offscreen spaces: offscreen right. Although the director has framed the film’s star in such a way as to emphasize her presence (he might have chosen instead to begin with a long shot of Davis seated at the same table surrounded by many other people and therefore not featured onscreen as an individual), he nevertheless indicates that someone else is sitting next to her. We naturally understand that the hand isn’t disembodied. We assume that the space of the action continues beyond the frame—that there is a whole person there.

Audiences make similar assumptions about the other three spaces that border the image—offscreen left, offscreen top, and offscreen bottom. The director does not need to show these spaces to us directly for us to assume that they exist. And these four offscreen spaces are inevitably diegetic; in other words, they pertain to the world of the film’s story. (See chapter 6 for a more complete discussion of the concept of diegesis.)

The other two offscreen spaces are important to consider, if only briefly, for the theoretical questions they raise. It’s rare in narrative cinema for a director to move his or her camera behind the set, but it’s conceivable. Such a shot would reveal that the set, which we have taken to be real, is in fact artificial—we might see the wooden
supports holding up the walls, the lighting stands and a lot of electrical cords, the outer walls of the soundstage, and so on—and as such the shot would call attention to the fictional nature of what we’ve been seeing until that point in the film. That recognition is, of course, something that classical Hollywood cinema avoids. And because it does not have to do with the world of the film’s story, the space behind the set is nondiegetic.

The sixth zone of offscreen space exists only in the imagination. We know that there is real space behind the camera, but the camera can never record it. Just as we don’t have eyes in the back of our heads, so the camera can never have a separate lens that records the space behind itself. Only a second camera recording the first camera could record that space, but the space behind the second camera—the offscreen space Burch defines as being behind the camera—would be equally impossible for the second camera to record. Clearly, this impossible-to-record space is nondiegetic. It doesn’t have to do with the film’s fictional story but instead exists only in the world of the real people who are making the movie.

Individual shots could record the first five of Burch’s offscreen spaces. Using the All About Eve example, the director could conceivably have cut from Bette Davis’s character, the actress Margo Channing, to a shot of her tablemate to the left, her tablemate to the right, her legs under the table, the space above her head, and a final shot of the space behind the banquet room set. But by moving the camera, a director can actually reveal all five of the possible-to-record spaces in a single shot. By panning left and right, he could have shown us the spaces on either side of the character. By tilting up and down, he would have shown us the floor and the ceiling (or lack thereof—most sets have no ceiling so as to accommodate overhead lighting equipment). And by tracking laterally, then forward and around the walls of the banquet hall, he could have revealed the space behind the set. (Admittedly, following the logic of the impossible space behind the camera, none of the offscreen spaces can ever be recorded as long as they are truly offscreen spaces, but that’s a subject for an upper-level film theory course to pursue.)

In short, mobile framing enables a director to unify diverse spaces
within an individual shot. Even the tiniest, most minute readjustment, or REFRAMING, reveals and maintains spatial continuity from image to image without cutting. At the end of City Lights, which is analyzed in more detail in chapter 4, the director, Charles Chaplin, begins one shot by centering on the two characters’ intertwined hands, then reframes the image to center on the Tramp’s face and the flower he holds. What is key in this case, and in most cases of reframing, is the onscreen gesture or look or facial expression that the director wishes to emphasize. If a character moves her head slightly to the right in a close-up, for instance, it’s likely that the director will reframe the shot by moving the camera slightly to the right so that part of her face will not be cut off by the original framing.

Ultimately, camera movement—like any other film technique—is about expressivity. There is no right or wrong way to film anything. Some directors, like Sergei Eisenstein, tend to carve the world up into individual static shots and edit it back together again, though even the famous “Odessa Steps” sequence from Battleship Potemkin contains several camera movements. Other directors, like F. W. Murnau and Max Ophüls, are known for their elegant moving-camera work. Their films certainly contain static shots that are edited together, but as directors their style highlights camera movements rather than editing effects. Still others—the majority of contemporary filmmakers, in fact, including Paul Thomas Anderson, Martin Scorsese, and Spike Lee—choose to film certain scenes in the form of long takes with elaborate camera movements while other scenes take the form of more rapidly cut sequences.

STUDY GUIDE: ANALYZING CAMERA MOVEMENT

To learn how to analyze camera movement, one must first be aware of camera movement. So get a DVD copy of your favorite movie, find a scene you know already, and watch it closely, this time paying particular attention to the camera movements it contains.

1. Pause the DVD after every camera movement you notice. If you are feeling particularly ambitious, write down each movement as you notice it.

2. Ask yourself the following questions after every pause: (A) What type of camera movement just occurred? Was it a single kind of movement (for example: a pan right, or a tilt down), or was it a combination of different types (a simultaneous crane down and pan left)? (B) What was the apparent motivation behind the movement? Did the camera move along with a character? Did it move away from a character? Or did it move seemingly on its own, without regard to a particular character? (C) To what does the movement draw your attention? (D) What ideas or emotions might it express by maintaining spatial unity? (E) As an aside, consider the offscreen spaces of each image and the assumptions you make about them.

3. Notice how often you are pausing the film—how often the camera moves. Is there a pattern of camera movements within the scene? For example, is there a series of tracking shots, or a series of pans? Is there any rhythm created by the way the camera moves?

4. Based solely on this particular scene (and bear in mind that the scene you choose may not be representative of the whole film), would you say that the director favors camera movements over cutting? Can you begin to perceive the director’s overall style in this individual scene, or is it too soon to make such a generalization?

WRITING ABOUT CAMERA MOVEMENT

Given all these different terms and theoretical notions, how do you describe on a practical level the camera movements you see onscreen? It’s not difficult; it just takes practice. The more familiar you become with the terminology, the easier it will be to describe and analyze what you notice.

“The camera tracks forward,” “the camera tracks back,” “the camera tracks laterally,” and so on: just describe what you see using the technical terms at your command. “The camera cranes up.” “The camera cranes down.” “The camera cranes up, pans to the left, tilts down, cranes down, and tracks forward . . .” and on and on. However the camera moves, that’s how you describe it. It makes for more precise analytical writing to write, “The camera tracks forward” or “the camera pans left,” rather than fumbling around with “we go ahead” or “we go backwards” or “we turn and see . . .”

Be aware that cameras track with or away from characters. Here’s an example from a Warner Bros. animated cartoon: “The camera tracks to the right with Elmer Fudd as Elmer tracks his prey, the ‘wascally wabbit.’” (Yes, there are tracking shots in animated cartoons. Even though the camera that records each animation cel does not literally move, animators create the effect of all the different kinds of camera movements described in this chapter.)

What follows is a reasonably detailed description of the opening shot of Boogie Nights. Bear in mind that this description concentrates on camera movements. A full analysis of the shot would include many more details about mise-en-scene elements such as lighting, set design, color, costumes, makeup, and figure behavior, not to mention dialogue, music, and other sound effects, as well as what it all adds up to in terms of meaning:

The first shot of Boogie Nights begins as a long shot of a movie marquee announcing that the film playing in the theater is titled Boogie Nights; the
marquee fills the horizontal image, its shape echoing the film’s widescreen aspect ratio of 2.40:1. The camera cranes slightly forward, pans slightly left, and rotates clockwise to reveal in an oblique angle the name of the theater—the Reseda. Just as the name “Reseda” fills the image horizontally, the camera reverses the direction of its movement: it now rotates counterclockwise and cranes down to reveal some people exiting the theater and walking under the marquee. The camera continues its movement by craning down and panning rather rapidly to the left just as a car moves forward on the street next to the theater. The camera continues to pan left and crane down—at one moment, the car, now traveling from right to left across the image, fills the screen—until it is at ground level.

Owing to the camera having panned, the car is now facing away from the camera. As a subtitle appears—“San Fernando Valley 1977”—the car makes a left turn and pulls up in front of a nightclub with a gaudy neon sign that reads: “Hot Traxx.” The camera tracks forward on the street toward the car; the camera operator is evidently using a Steadicam, because the movement is very smooth.

A crowd has gathered in front of the entrance to the club. The driver gets out of the car and raises his arm in a greeting gesture but is momentarily cut out of the image because the camera is moving rapidly forward toward the club’s entrance. The camera continues to track forward until it singles out the nightclub manager, who rushes forward and to the left of the image with his arms outstretched.

In a very rapid movement, the camera circles around the nightclub manager and pans left to reveal the moment at which the manager reaches the driver and his girlfriend, who has also gotten out of the car (an action we only assume has occurred, because we have not actually seen it).

The camera then tracks backward, and the three characters follow the camera as it backs into the doorway of the nightclub, through the small entrance hall, and into the club, at which point the camera makes a left turn; this allows the characters to pass the camera, and as they continue to walk they make a left turn as they head away from it. The camera operator reframes the image with a slight pan left as the characters make a similar adjustment in their direction. For a few seconds, the image is a three-shot of the characters walking away from the camera in silhouette.

The characters then turn right and head toward an as-yet-unseen booth; their destination is revealed in dialogue. The driver and his girlfriend continue walking away, but the manager stops, and the camera stops with him; as he walks back in the direction from which he came, the camera reverses its direction and tracks backward. When he turns toward the left of the image, the camera pans left and tracks forward, following him. He gestures toward the left side of the image, and the camera quickly pans left to reveal a waiter dressed in a striped white shirt and turning away from the camera. The waiter begins to walk toward the back of the club, but the camera quickly pans right away from him and returns to the manager, who is now walking toward the camera while the camera tracks backward.

The manager turns toward his left as he walks, and the camera pauses to allow him to pass it; he then walks away from the camera, and the camera follows him through a crowd of people. He jumps up on the dance floor and greets a man who is dancing there.

The camera follows the manager onto the dance floor and then begins to circle the group of people to whom the manager is speaking: the dancing man, who is
white; a woman; and a black man wearing a western shirt. The camera travels in two full 360-degree turns before panning and tracking left, following the waiter in the white shirt, who is walking left in the distance and carrying a tray of drinks.

The camera tracks rapidly left and then slightly forward around some tables full of people and slows down when it nears the driver and his girlfriend, now seated at their booth; the driver is on the left, his girlfriend is roughly in the center of the image, and the waiter is slightly to the right. The couple says something to the waiter, who turns and begins to walk away. The camera follows him for a moment or two, cutting the couple briefly out of the image, but then a young blonde woman on roller skates enters the image from the background. As she passes the waiter, the camera changes direction and begins to track backward until it is more or less at the same position it took during the exchange with the waiter. The woman on skates stops at the table and begins a conversation with the driver and his girlfriend. But the camera is restless and begins to track forward and around the woman on skates. At one moment, the driver’s girlfriend is alone in the image in medium shot facing at a three-quarter angle to the left.

The camera then tracks backward and pans to the left to form what is essentially the reverse angle to the one that first captured the waiter and the couple and was repeated with the woman on skates and the couple: now the woman on skates stands just to the left of center, the driver sits in the center, and his girlfriend sits on the right.1

The woman on skates makes a hopping gesture and turns and skates away from the camera, but the camera quickly follows her. She turns left; the camera pans left with her. The camera tracks left as she skates in that direction, after which she turns and skates toward the background. The camera tracks forward, following her, until she disappears into the crowd in the center of the image. But the camera continues to track forward and pan to the left to reveal a young man wearing a white shirt. The camera finally stops moving as the young man, seen in medium close-up, gazes toward the left. Cut! The first shot of Boogie Nights terminates here.

1. For a definition of the term reverse angle, see SHOT/REVERSE-SHOT PATTERN in the glossary.


CHAPTER 3 MISE-EN-SCENE: CINEMATOGRAPHY

MOTION PICTURE PHOTOGRAPHY

CINEMATOGRAPHY—photography for motion pictures—is the general term that brings together all the strictly photographic elements that produce the images we see projected on the screen. Lighting devices and their effects; film stocks and the colors or tones they produce; the lenses used to record images on celluloid; the shape of the image, how it is created, and what it means—these all constitute the art of cinematography. This, too, is an aspect of mise-en-scene.

The word cinematography comes from two Greek roots: kinesis (the root of cinema), meaning movement, and grapho, which means to write or record. (Photography is derived from phos, meaning light, and grapho.) Writing with movement and light—it’s a great way to begin to think about the cinematographic content of motion pictures.

ASPECT RATIO: FROM 1:33 TO WIDESCREEN

ASPECT RATIO is the relation of the width of the rectangular image to its height. As you may remember from the first chapter, silent pictures had an aspect ratio of 1.33:1, a rectangle a third again as wide as it was tall. And while the so-called ACADEMY RATIO, standardized by the Academy of Motion Picture Arts and Sciences in 1932, is usually referred to as being 1.33:1, in point of fact the Academy ratio is 1.37:1—a very slightly wider rectangle than that of silent films.
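The numbers themselves come from simple division of the frame's width by its height. The dimensions below are approximate figures used only for illustration, not precise industry specifications:

```python
# Illustrative only: aspect ratio is width divided by height.
# The frame dimensions (in millimeters) are approximations, not exact standards.
frames = {
    "silent-era full frame": (24.89, 18.67),  # works out to roughly 1.33:1
    "Academy frame":         (21.95, 16.00),  # works out to roughly 1.37:1
}

for name, (width_mm, height_mm) in frames.items():
    ratio = width_mm / height_mm
    print(f"{name}: {width_mm} x {height_mm} mm -> {ratio:.2f}:1")
```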

After 1932, all Hollywood films were made in the Academy ratio of 1.37:1 until the advent of television in the 1950s. Although TV provided a new market for old movies, it also gave audiences the
opportunity to enjoy audiovisual entertainment without leaving their homes. So as a way of offering people something they could not get for free in their living rooms, Hollywood emphasized the immense size of motion picture screens by developing technologies that widened the image far beyond the 1.37:1 Academy ratio.

CINEMASCOPE, introduced by 20th Century-Fox for the biblical drama The Robe (1953), used what is called an ANAMORPHIC LENS on the camera to squeeze a very wide image onto each frame of standard-sized film stock and another anamorphic lens on the projector to spread it back out again. CinemaScope’s aspect ratio used to be 2.35:1; it was later adjusted to 2.40:1. PANAVISION, another system with a 2.40:1 ratio, is the most commonly used anamorphic process today.
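One way to picture what an anamorphic lens does is as multiplication. The sketch below assumes a 2x horizontal squeeze, the factor commonly associated with CinemaScope-style lenses, applied to a roughly Academy-shaped recording area; the numbers are illustrative rather than exact specifications.

```python
# Hypothetical illustration of anamorphic squeezing and unsqueezing.
squeeze_factor = 2.0   # assumed horizontal squeeze of the anamorphic lens
recorded_shape = 1.2   # approximate width:height of the image area on the film frame

# The camera lens squeezes a wide scene into the narrow frame;
# the projector lens stretches it back out by the same factor.
projected_shape = recorded_shape * squeeze_factor
print(f"Recorded at about {recorded_shape}:1, projected at about {projected_shape}:1")
```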

There were still other WIDESCREEN processes in the 1950s, including CINERAMA and VISTAVISION. Cinerama used three interlocked cameras to record three separate images which, when projected across a specially curved screen, yielded a single continuous widescreen image with an aspect ratio of 2.77:1. The first film released in Cinerama was a display of the process called This Is Cinerama (1952). It was an enormous box office success, but the process proved too cumbersome, not only for filmmakers but also for exhibitors, who had to fit the huge curved screen into their theaters in order to show Cinerama films, which were mostly travelogues. Moreover, every theater had to be outfitted with three separate projection booths, each staffed by two people. Cinerama, in short, was an expensive proposition for exhibitors. Only two narrative films were made in three-strip Cinerama: The Wonderful World of the Brothers Grimm and How the West Was Won (both 1962).1

VistaVision, developed by Paramount Pictures, was first used for the 1954 film White Christmas. Instead of the film frames running vertically on the celluloid, with the sprocket holes on the sides, VistaVision’s frames ran horizontally; the sprocket holes were on the top and bottom of each frame. This system yielded an aspect ratio of 1.5:1, though most VistaVision films used MATTES or MASKING to produce ratios from 1.66:1 and 1.85:1 to 2:1. Still another widescreen process was Todd-AO (named for the producer Mike Todd and the
American Optical Company), with an aspect ratio of 2.20:1; Todd-AO was an early 70mm process, whereas the others used 35mm film.

FIGURE 3.1 An anamorphic frame as it looks to the naked eye—Scary Movie 3 (2003). (Photofest)

FIGURE 3.2 The same anamorphic frame as it looks projected onto the screen—Agent Thompson (Ja Rule) and President Harris (Leslie Nielsen). (Photofest)

FIGURE 3.3 Cinerama with an aspect ratio of 2.77:1.


FIGURE 3.4 VistaVision with an aspect ratio of 1.85:1.

FIGURE 3.5 The full frame at the Academy ratio, 1.37:1. Knowing that the top and bottom portions of the image will be masked when the film is projected, the director doesn’t care that the boom is visible at the top of the image and electrical cords can be seen at the bottom.

FIGURE 3.6 The same frame masked at 1.85:1; the hatch marks indicate the parts of the image that were photographed during shooting but are masked when projected onto the screen. Now that the image has been properly masked, the composition makes visual sense;
the filmmaking equipment still exists on the celluloid, but it can’t be seen on the screen.

Simply masking the image was—and continues to be—the easiest way to produce a widescreen effect. Masking means covering the top and bottom of the image with an aperture plate in the projector in order to produce any of a number of widescreen aspect ratios.

ASPECT RATIO: FORM AND MEANING

Unless you are an aspiring film director or CINEMATOGRAPHER, you may find yourself asking what the point of all these aspect ratios is. If learning about the variety of cinematic aspect ratios were just a matter of memorizing technical specifications, the lesson would be all but useless. But the shape of every film is basic to its expressive meaning. Each aspect ratio yields a different way of looking at the world. It is meaning, rather than just numbers, that’s important.

If you watch films on DVD, you may have come across the terms LETTERBOX and LETTERBOXING. Letterboxing means preserving the original aspect ratio of a widescreen film when transferring the film to DVD or broadcasting it on television. Letterboxed films have blank areas above and below the image, making the image’s shape resemble a business envelope, or letter. Despite the rise in sales of widescreen, flatscreen televisions, many TVs still maintain an aspect ratio of 1.33:1, and broadcasters as well as video distributors often simply chop off the sides of widescreen films in order to fit the image to the almost-square screen. Rather like slashing off the edges of a painting so it fits a preexisting frame, this crude practice ruins movies. When you watch films made after 1954 on DVD, you should always make sure that they are letterboxed.
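Letterboxing is itself just arithmetic: the full width of the film frame is fitted to the width of the screen, and whatever vertical space is left over becomes the blank bars above and below the image. The sketch below assumes a 2.40:1 film shown on an old 1.33:1 television; the 480-line screen height is an illustrative figure, not a technical standard for any particular set.

```python
# Illustrative letterbox calculation for a widescreen film on a narrower screen.
film_ratio = 2.40     # aspect ratio of the film (width divided by height)
screen_ratio = 1.33   # aspect ratio of the television screen
screen_height = 480   # hypothetical screen height in lines

screen_width = screen_ratio * screen_height       # width of the screen in the same units
image_height = screen_width / film_ratio          # height the letterboxed image occupies
bar_height = (screen_height - image_height) / 2   # blank area above and below

print(f"The film image fills about {image_height:.0f} of {screen_height} lines,")
print(f"leaving roughly {bar_height:.0f} blank lines above and {bar_height:.0f} below.")
```

With these illustrative numbers, cropping the sides instead of letterboxing would throw away roughly 45 percent of the original picture width, which is why the cropping practice described above is so destructive.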

To see why, consider the differences between figures 3.7 and 3.8: In figure 3.7, a woman is seen in close-up. We can read her facial expression clearly: she is frightened at what she is seeing. But because the aspect ratio is 1.37:1, there is no room in the image to maintain a close proximity to her face and, at the same time, to see what she is looking at. To reveal that information, the director would have to either move the camera or cut to another shot. Figure 3.8, with an aspect ratio of 2.40:1, not only provides the emotional intensity of the woman’s close-up but also reveals what she is seeing in the same image. Please note: neither image is superior. There is no single, correct way of framing and filming this shot, or any shot. One director may prefer to shoot in the narrower aspect ratio of 1.37:1, while another may choose the wider, more expansive 2.40:1. It’s a matter of personal style. There is, however, a single and correct way of showing a given film: if it was filmed with an aspect ratio of 2.40:1, it should be shown and seen only in 2.40:1.

FIGURE 3.7 A non-letterboxed video image, chopped off so it conforms to the shape of a television screen.

FIGURE 3.8 Properly letterboxed on video and restored to its original cinematic aspect ratio, the same shot makes narrative as well as visual sense once more.


The point is this: each aspect ratio brings with it a set of aesthetic,
expressive consequences. As a key facet of mise-en-scene, each film’s aspect ratio presents a kind of creative limitation to the filmmakers—rather like the way in which the 14-line, iambic pentameter form of a sonnet offers a poet a strict framework in which to write. If, for example, a director chooses to film in Panavision’s 2.40:1 aspect ratio but wants to create the impression of cramped spaces, she must follow through by somehow counterbalancing the expansive nature of the extra-widescreen process—crowding her sets with objects, say, and blocking her actors to move closer to one another. (Blocking is a term derived from theater and simply means planning where and when actors move around the stage or film set.) It’s all a question of composition—the arrangement of people and things within the rectangular frame. Conversely, a director may choose to film in 1.37:1, the Academy ratio, and yet create the impression of spatial emptiness by consistently framing his characters at a distance from one another, keeping the set design relatively spare, and refraining from image-filling devices such as close-ups. Once again, it’s a matter of creating expressive compositions that are coherent and meaningful throughout the film.

As you begin to notice the aspect ratio of the films you see, ask yourself what relation the shape of the image has to what you’re feeling and what the film is subtly expressing simply by way of its shape.

LIGHTING

Consider how futile it would be to film a scene set in a pitch-black room. There can be no cinematography—or still photography, for that matter—without light. The light source may be only a flaming match, some sparks, the tip of a cigarette, or a bit of light shining in through the gap between a door and the floor, but without some form of lighting nothing can be registered on film.

Film studies, at least on the introductory level, is less concerned
with the technical means by which lighting effects are created than with their expressive results. Your goal at this point is not to try to determine how a given effect was created by a cinematographer but rather to begin to appreciate the way in which that effect bears meaning. Imagine filming a scene set in a courtroom, where a woman stands accused of murdering her husband. You could light the scene any number of ways, depending on the particular story you were telling and the mood you wanted to create. You might suggest that justice was being served, for instance, by lighting the room brightly and warmly, with sun streaming in through unblinded windows. On the other hand, you might want to generate some visual tension to suggest that the defendant has been unjustly charged with the crime, in which case you could light the room very unevenly; a sort of gray light could shine through half-closed Venetian blinds in the windows, and a few bluish fluorescent lights overhead could shine down on the courtroom to create a room full of shadows. One thing you would probably not want to do is simply use the lights that are available in a real courtroom and nothing more. Why not? Because the expressive results would be left entirely to chance. You might find, when you get the film back from the processing lab, that what looked great in a real courtroom doesn’t look nearly as good on film: shadows that weren’t noticeable in the real room may appear in the filmed scene, and there might be areas of overexposure as well. Even more important, the available lighting in a real courtroom might not express artistically what you want to say.

THREE-POINT LIGHTING

The first motion pictures were lit by the most powerful light source in the solar system—the sun. Not only were many of these movies set outdoors, but even interior sets, too, were constructed in the open air simply so that there would be strong enough lighting to register images clearly on film. The development of more light-sensitive film stocks, together with more powerful electric lights, enabled filmmakers to be much less dependent on direct sunlight as their chief source of
illumination. The most basic lighting setup is known as three-point lighting, which
consists of a key light, a fill light, and a backlight. The key light (A in fig. 3.9) aims directly at the subject—most likely the main character or object in the shot—and is the brightest light source for the shot. The fill light (B) is a softer light, and is usually placed opposite the key light; the fill light cuts down on shadows created by the bright key light. And the backlight (C) shines behind the subject or object, separating him, her, or it from the background—in other words, enhancing the sense of depth in the shot. Backlighting sometimes creates a halo effect around a character’s head, particularly at the edges of the hair.

The term key light is the source of two commonly used adjectives: low key and high key. To call something high key is to say that it’s intense, whereas low key means subdued. The overly cheerful atmosphere of a television game show would be described as high key, whereas the smoky mood of a jazz club would be called low key. These expressions come from cinematography. When cinematographers, also known as directors of photography (DPs), use a high proportion of fill light to key light, it’s called high-key lighting; the effect is both brighter and more even than when they use a low proportion of fill light to key light, which is called low-key lighting. The lower key the light, the more shadowy the effect. The distorting, spooky nature of extremely low-key lighting is perfectly illustrated by a trick almost every child has played: in the dark, you shine a flashlight up at your face from below your chin. That flashlight was your key light, and since there was no fill light at all, the proportion of fill to key was as low as you could get.
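If it helps to see the idea in numbers, the proportion of fill light to key light can be written as a simple ratio. The intensities below are invented for illustration, and the cutoff separating "high-key" from "low-key" is only a placeholder; in practice the terms are relative and descriptive, not fixed thresholds. As the next paragraph stresses, the point of analysis is to describe the visible effect, not to reconstruct the actual setup.

```python
# Illustrative only: labeling a lighting setup by its fill-to-key proportion.
def describe_lighting(key_intensity, fill_intensity):
    """Return the fill-to-key proportion and a rough, made-up label for it."""
    proportion = fill_intensity / key_intensity
    if proportion >= 0.5:   # arbitrary cutoff, purely for illustration
        label = "high-key: bright, even, few shadows"
    else:
        label = "low-key: shadowy, high contrast"
    return proportion, label

# Hypothetical setups, measured in arbitrary units of light intensity.
setups = [
    ("game-show set", 100, 90),
    ("jazz-club scene", 100, 15),
    ("flashlight under the chin", 100, 0),
]
for name, key, fill in setups:
    proportion, label = describe_lighting(key, fill)
    print(f"{name}: fill/key = {proportion:.2f} -> {label}")
```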

To hammer the point home: the object is not to determine where the key light was on the set, or what the real proportion of fill light to key light was, or whether there was some top lighting (lighting from overhead). Instead, your goals are to notice the effects of the lighting in a given scene, to describe these effects accurately, and to venture analytically some ideas about what the effects mean. See the Study Guide, below, for tips on how to reach these goals.


FIGURE 3.9 Three-point lighting: key light (A), fill light (B), backlight (C).

Bright, even high-key lighting is often used in comedies and
musicals to enhance a sense of liveliness and high energy (fig. 3.10).2 Conversely, the shadows created by low-key lighting (fig. 3.11) work so well in mysteries and horror films that they have long been an important convention of those GENRES. And of course many films use a combination of high-key and low-key setups, depending on the nature of the individual scene. Imagine a western outlaw, for instance, walking from a brilliantly lit, high-key exterior into a darker, more low- key saloon. The director might be contrasting the external world of bright nature with the confining, darker, interior world of civilization.


FIGURES 3.10 High-key lighting and low-key lighting: In the publicity still from Ernst Lubitsch’s musical-comedy The Merry Widow (1934) (fig. 3.10), the image is ablaze in brilliant, even light shining not only on the character (Jeanette MacDonald) but also on her surroundings—the carpet, the walls, and the ridiculous bed. The fact that the set is almost completely white only adds to the image’s luminous quality; had the fill lighting been less intense, the effect of all that whiteness would have been greatly diminished by shadows.

It’s important to understand that the high-key exterior—in this case,
the outside of the saloon—was probably filmed using artificial lighting as well as actual sunlight; audiences never see the many REFLECTORS and LAMPS that the filmmakers have aimed at the outlaw character as he walks toward the saloon’s swinging wooden doors so as to even out the shadows that would have resulted if the bright midday sun overhead had been the only light source.


FIGURES 3.11 The still from John Ford’s downbeat drama The Fugitive (1947)(fig. 3.11) demonstrates the opposite effect of low-key lighting. Here, the key light shines onto only half the face of the central character (Henry Fonda), leaving the other half in darkness. The shadow of the prison bar could have been reduced or even eliminated had Ford wanted the character’s face to be brightly and evenly lit, but that was clearly not his expressive intent. And there is little if any backlighting, so the top of the character’s head all but blends in with the completely unlit back wall.

FILM STOCKS: SUPER 8 TO 70MM TO VIDEO

DPs are in charge of selecting the type of FILM STOCK—raw, unexposed footage—that is used to make a given film. Film stock is categorized in several ways: (1) by gauge; (2) by type; and (3) by exposure index. The film’s gauge is simply its width, which ranges in standardized increments from 70mm, the widest, through 65mm, 35mm, and 16mm, to the narrowest—Super 8mm and 8mm. (In point of fact, 70mm films are actually shot in 65mm; the added five millimeters comes from the soundtrack.) The gauge for most theatrical film releases is 35mm. In general, the wider the gauge, the better the picture quality. Motion picture film contains a flexible base—it has to be flexible so it can run through cameras and projectors—the surface of which is covered with
an emulsion. First, when the film is shot, a wider gauge has more surface area, and therefore more emulsion on which to record the light patterns that form the image. Second, when the film is projected, a wider gauge has more surface area through which to shine the projector’s light beam, thereby yielding a clearer, less grainy image than a narrow-gauge film projected on the same large screen. As with snapshots, the more you attempt to blow up a small image to greater and greater proportions, the grainier the result will be.
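The point about surface area can be made concrete with rough figures. The frame dimensions below are approximations (actual camera apertures vary by format and era), but they show why a 35mm frame can hold so much more pictorial information than a 16mm frame:

```python
# Approximate image areas for two gauges; the dimensions (in mm) are illustrative.
gauges = {
    "35mm (Academy frame)": (21.95, 16.00),
    "16mm frame":           (10.26, 7.49),
}

areas = {name: width * height for name, (width, height) in gauges.items()}
for name, area in areas.items():
    print(f"{name}: roughly {area:.0f} square millimeters of emulsion")

print(f"The 35mm frame offers about "
      f"{areas['35mm (Academy frame)'] / areas['16mm frame']:.1f} times the surface area.")
```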

There are five types of film stock: black-and-white negative, color negative, intermediate stock, black-and-white print, and color print. Negative film is film that is run through a camera and exposed to light frame by frame. The images contained on it are inverted in two ways: they are upside down, and their tones are reversed. Black-and-white negative has dark areas in place of light areas, and vice versa, and color negative has magenta in place of green, yellow in place of blue, and cyan (blue-green) in place of red. Processing this film creates a positive print, which reverses the tones back to normal. Intermediate stocks—internegative and interpositive—are used during postproduction.
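The tonal reversal of a negative can be pictured with a simple numerical analogy. Film is not digital, so the 0-to-255 brightness scale below is borrowed from digital imaging purely as an illustration of how light and dark trade places:

```python
# Analogy only: representing tonal reversal on a 0 (black) to 255 (white) scale.
def invert_tone(brightness):
    """Return the reversed tone: bright areas become dark and vice versa."""
    return 255 - brightness

for original in (0, 64, 200, 255):   # black, dark gray, light gray, white
    print(f"scene tone {original:3d} -> negative tone {invert_tone(original):3d}")
```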

Exposure index refers to the film’s sensitivity to light. It’s also called film speed. So-called slow film requires longer exposure to light, whereas fast film needs a much briefer exposure. A key consequence of the decision to use a faster or slower stock is the degree of contrast one desires. Faster stock yields higher contrast—bright areas are very bright and dark areas are very dark—whereas slower stock yields lower contrast.

BLACK, WHITE, GRAY, AND COLOR

Almost all narrative films produced today are shot in color. Today’s audiences are so used to seeing color cinematography that they sometimes fail to appreciate the beauty and range of black-and-white filmmaking.

It may surprise you to learn that many silent films were released in color, though not the kind of color you are used to seeing. Through the
related processes of tinting and toning, cinematographers turned the grays into equally monochromatic blues or sepias (light browns) or reds. Tinting means dyeing the film’s base; toning means dyeing one component in the emulsion. Interiors and landscapes would sometimes be tinted sepia. Night scenes were often tinted or toned blue, and in fact this convention became so standard that such scenes were often filmed in broad daylight with the expectation that they would be tinted later.

Color cinematography began, in a way, in 1903, when a French film company, Pathé, began hand-stenciling colors onto each frame of film. Color film stock, however, was not commonly used in commercial filmmaking until the 1930s and 1940s, when a famous company called Technicolor began to put its three-color processing system to wide use. Technicolor, which was actually founded in 1915, used first a two-color and later, much more extensively, a three-strip system to render supersaturated colors on motion picture film. Light entered the camera’s lens, but instead of registering an image more or less directly onto film stock, it passed instead through a prism that separated it into red, green, and blue values, each of which was registered on its own stock; later, these strips were dyed and processed onto a single strip of color stock. (In point of fact, calling Technicolor film “color stock” is inaccurate; the process originally used black-and-white stock with filters for each of three colors, and release prints were made on blank, black-and-white stock on which color dyes were stamped. Color negative was not used until the late 1940s and 1950s.)

Black-and-white cinematography is inaccurately named. In fact, it’s monochromatic, meaning that it is composed of a single color—usually gray—in a variety of shades. From the late 1930s to the 1960s, filmmakers saw the difference between black-and-white and color cinematography as a matter of choice, the decision being based on economic as well as aesthetic factors. Technicolor was an expensive process, adding about 25 percent to the cost of an average film production. At the same time, cost was not the only issue, since certain genres—most notably the gritty, urban genre known as film noir—worked particularly well in black and white,
whereas others—lavish musicals and costume dramas, for instance—were better suited to color. The development of Eastman Color, a simpler and less expensive color system, in 1952 led to more and more films being made in color, and eventually artistic choice gave way to audience expectation. At this point in the twenty-first century, black-and-white cinematography is rarely employed in commercial filmmaking.

A WORD OR TWO ABOUT LENSES

Because most if not all of the movies you have seen in your life have not been full of distorted visuals—they don’t reflect the world the way a funhouse mirror would—you may assume that there is such a thing as a normal lens, and that most films are shot with it. But this is not the case. In fact, most filmmakers use a wide variety of lenses during the course of shooting an individual film. While most of these lenses yield images that resemble “normal” human vision, the word normal means something extremely varied in this context; the human eye is remarkably flexible and creates an enormous variety of effects without our being aware of the minute physical changes the eye performs. Lenses are not nearly as adaptable. They are hard pieces of unbendable glass, and they need to be changed frequently to create the impression of apparently simple, so-called “normal” vision.

Look at the book you are reading. Focus on the print of this word in this sentence. Notice that even the other words surrounding it are somewhat out of focus, let alone the edges of the book, the desk at which you are sitting, and the surrounding room. There is nothing abnormal about this particular view of the world, is there? It’s specific, but it’s not at all unusual.

Now focus on a whole paragraph. The print is in focus (more or less), but individual words are somehow not. This view is also particular but not strange looking. Look at the page as a whole with this concept in mind. Look up from the book, focus on an object nearby, and notice that while you can see the rest of the room, the object you have selected seems clearer than things in the
background. Look out a window (if you can) at something or someone in the distance, and literally see the way your eyes automatically bring that object or person into clear focus while the objects in the room become fuzzier, less distinct. All of these visuals are your eyes’ successful attempts to select objects for your attention. None of them is distorted in any way, but each of them is different.

In order to guide your eyes to particular people and things onscreen, filmmakers must use individual lenses that are suited to certain applications. There is no need for you to learn the technical terms for most of the lenses used in motion picture production, but a few of the basic concepts are worth knowing.

Just as your eyes keep objects at a certain distance in focus while the rest of the room seems to blur, so every lens has what is known as a specific depth of field—the area of the image between foreground and background that remains in focus. If, for example, a filmmaker wants to direct your attention to a particular row of spectators in a crowded football stadium, she and her DP might choose a lens with that particular depth of field; everyone behind that row and in front of that row would be slightly out of focus.

In some instances, filmmakers choose to shoot in what is called deep focus, which means that objects near the camera, midway, and far from the camera are all in sharp focus. In a deep focus shot, we can see objects and/or characters in all three planes in equally clear focus. A number of shots in Citizen Kane are in deep focus; the film's cinematographer, Gregg Toland, pioneered the technique.

Lenses run in a spectrum from WIDE-ANGLE to TELEPHOTO. A wide-angle lens has a wide depth of field, which means that objects in the foreground, middleground, and background are all in focus. A telephoto lens, on the other hand, has a narrow depth of field, as it appears to bring distant objects closer. A lens that is capable of shifting from wide-angle to telephoto and back is called a ZOOM LENS. Unlike forward and reverse tracking shots, zoom shots do not involve moving the camera through space. And finally, a rack focus shot (sometimes called a focus pull) shifts the plane of focus within the shot by changing the lens's focus, rather than by zooming. For instance, a character in the foreground is in sharp focus; he hears a
noise and turns to look at the far background; the focus then shifts— the man becomes blurry while a car exploding in the distance becomes clear. Notice the difference between a zoom and a rack focus: in a zoom, all the depth planes remain in focus, whereas in a rack focus only one plane at a time is clear and sharp.

FIGURE 3.12 Deep focus in Citizen Kane (1941): all planes of depth are in focus—the woman in the foreground, the man standing in the near-middle ground, the window frame and wall in the background, and even the boy playing outside in the snow in the far background (frame enlargement).

STUDY GUIDE: ANALYZING CINEMATOGRAPHY

To begin to analyze the various components of cinematography, get a film on DVD, randomly select a scene, and freeze frame any image that strikes you as worth pursuing in detail.

First, notice the aspect ratio. Can you tell what aspect ratio it is by sight? Probably not, though you can certainly distinguish a widescreen film from one shot in the Academy ratio.

Although the use of freeze framing has its drawbacks—what, after all, is a motion picture without motion?—you can tell a lot about a shot from an individual frame. In terms of composition, how do the people and objects in the image relate to the film's aspect ratio? Are they centered, or clustered on one side, or spread out across the
expanse of the screen? (If it’s a widescreen film, imagine what it would look like if it hadn’t been letterboxed. What would have been cut out?)

What about the lighting? Would you describe it as high key or low key? If there are shadows, where do they fall? Is the lighting naturalistic or not? In other words, does it look like the world you live in, or have the filmmakers juiced it up with lighting effects? Write your responses to these and other issues in a notebook, because you may need them for reference in the writing exercise that follows.

Now examine the image with an eye toward the effects created by the camera's lens. Is the entire image in the same sharp focus, or is one depth plane clearer than the others? Can you even distinguish the foreground from the background, or is the shot one that contains only one plane? (If the shot is a close-up, for instance, there can be only one plane because the image contains only one object or face.)

WRITING ABOUT CINEMATOGRAPHY

Given the fact that this chapter is the most technically oriented one so far, what with its array of cinematographic facts and figures, you may be overwhelmed by the idea of writing about this aspect of filmmaking. In fact, the challenge of writing about cinematography is the same as writing critically about movies as a whole: you must find a way not only to describe what you see clearly but also to figure out what the filmmakers are expressing in terms of meaning.

One of the problems beginning film students face is how to move beyond simple description to meaningful analysis. Just as it’s not enough to provide a summary of the film’s story and pass it off as criticism, a simple description of a scene’s cinematography won’t cut it either. You must learn how to find meaning onscreen—in this case, through cinematography.

Don’t be afraid to state the obvious. Or, better, begin with the obvious—what does the film look like?—and use reason to explore the implications of those visuals. Take the example of the western outlaw entering the saloon, above, and break down the observations made about it into basic logical steps that lead to a conclusion about what the scene is actually about:

1. The character moves from one type of lighting (bright, high key) to another (dim, low key).

2. The character is an outlaw—a person who operates outside the laws of society.

3. The character moves from outside to inside.

4. Outside is full of light and space; inside is shadowy and confining.

5. Conclusion: by way of this shift in cinematography, the filmmaker is actually contrasting the openness and freedom of nature with the constraints of society.

In this case, the conclusion is not particularly profound, nor is it unexpected. As they say, it isn't rocket science. But it gets the job done, which is to say that it moves from simple, surface visual description to deeper analytical thinking. It identifies and ascribes thematic meaning to formal artistic decisions. In short, it's a start.

Go over your notes from the study guides and writing exercises in the previous two
chapters. Note whether you included any descriptions of cinematographic elements along with your observations about mise-en-scene and camera movement. What visual information about lighting did you fail, however inadvertently, to notice and specify in your notes? If you have the time and inclination, compare your previous notes with the notes you took for this chapter. See what you have learned to see.

Most important of all: ask yourself how you can use the raw material of your descriptions to draw inferences and implications about what the films are about. Based on what you know so far, and using reason to guide you, what can you logically conclude about what each unique combination of mise-en-scene, camera movement, and cinematography means?

1. Cinerama continued as a trade name used for single-strip widescreen films such as Stanley Kubrick's 2001: A Space Odyssey.

2. Grammar note: Do the terms high key and low key require hyphens? The answer is: yes and no. It depends on where the term is placed in a sentence. If it is used before a noun, such as high-key lighting, the answer is yes; if it is used after a noun, such as the lighting is low key, the answer is no.


CHAPTER 4 EDITING: FROM SHOT TO SHOT

TRANSITIONS

With all but a very few exceptions, films—especially narrative feature films—are made up of a series of individual shots that filmmakers connect in a formal, systematic, and expressive way. There are practical as well as artistic reasons for directors to assemble movies from many hundreds if not thousands of shots. For one thing, film cameras are able to hold only a limited amount of celluloid film—not enough for a feature-length motion picture. (Digital cameras, however, can capture multiple hours.) More important, narrative films generally compress time considerably by leaving out the boring parts of the stories they tell. Imagine how dull it would be to watch even the most intriguing characters go through the humdrum motions of everyday life—doing the laundry, brushing their teeth, spending an hour stuck in traffic—simply because the filmmaker had no way of eliminating these necessary but irrelevant activities. Even those rare films that try to duplicate real time—the story of two hours in a woman's life could conceivably take exactly two hours to tell on film—generally require the filmmaker to carve up the action into discrete shots and reassemble them coherently, if only to hold the audience's visual interest, let alone to make expressive points by way of close-ups, long shots, high- and low-angle shots, and so on.


FIGURE 4.1 Editors work manually at an editing table in this undated photograph. Now, most editing takes place on computers. (Photofest)

Alfred Hitchcock’s 1948 film, Rope, is an attempt to film an entire

feature-length narrative in a single shot. The fact that film magazines (lightproof containers that hold, feed, and take up film in the camera) of that era could only hold about ten minutes of film was a big constraint, but Hitchcock uses two devices to mask the technically necessary edits: he makes straight cuts at certain reel changeovers and tracks forward into the backs of men wearing dark suits in order to black out the image before cutting at certain others. Still, by moving the camera and reframing the image within these 5- to 10-minute shots, Hitchcock effectively carves each shot into discrete units for expressive purposes.

This chapter describes the methods by which filmmakers link
individual shots to one another in a process called EDITING, or CUTTING. These links are broadly called TRANSITIONS.

The simplest transition is the CUT. A director films a shot, the basic unit of filmmaking, and has it developed. She films another shot and has it developed as well. She trims each shot down to the length she wants, and she attaches the two strips of film together with a piece of tape. That’s it: she has cut from one shot to another. In this example, the filmmaker is using celluloid. She can create the same effect electronically with two shots taken in video, though in that case, of course, she has no need for tape.

Bear in mind that editing is a human activity. Unlike the camera’s mechanical recording of images, editing is quite specifically a matter of active decision-making—the product of human choice. So when describing editing, it makes no sense to say or write “the camera cuts.” Cameras can only record; directors and editors cut.

Other important transitions include the FADE-IN and FADE-OUT, the IRIS-IN and IRIS-OUT, the DISSOLVE, and the WIPE. But because these effects are used mostly as transitions from scene to scene—in other words, from the final shot of one scene to the first shot of the next scene—and this chapter concerns transitions from shot to shot within a scene, let's postpone describing them until chapter 6.

MONTAGE

One of the key terms in film studies is MONTAGE. Taken from the French verb monter, meaning to assemble, montage describes the various ways in which filmmakers string individual shots together to form a series.

The term montage has three different but related definitions. The first definition is the easiest. In France, the word montage simply means editing—any kind of editing. As described in the example of a simple cut, above, the filmmaker takes two pieces of exposed and processed celluloid, trims them down to the length she wants—decisions made on the basis of the expressive and/or graphic content of the image, or the dialogue, or a combination of both—and literally
tapes them together. In France, what she has done is known as montage.

In the United States, the term montage refers more specifically to a film sequence that relies on editing to condense or expand action, space, or time. The effect is often that of a rapid-fire series of interrelated images. Imagine that a director is telling the story of a rock band that forms in Omaha, and he needs to move them quickly to Hollywood, where they will perform live on a television show. Since there is neither the need nor the time to watch the group drive the entire way from eastern Nebraska to southern California, our director begins by filming a shot of the band members packing up their van in Omaha; he cuts quickly from this shot to a shot of the van on the interstate making its way across the Great Plains. From this he cuts to a shot of oil derricks next to the highway, then to a shot of cattle in a field, and then to a shot of the van heading toward the snowcapped Rockies. Cut to a shot of the band members in the van; cut to a shot of the van driving down the Las Vegas Strip at night. An image of Death Valley follows. From the desert the director cuts to a shot of a sign reading “Los Angeles—30 miles” and then to a shot of the van pulling up at an office building on Sunset Boulevard.

In this American-style montage sequence, the band has moved all the way from the Midwest to L.A. in less than a minute. This montage condenses time and space—a 1,700-mile trip that would take several days in real time shrinks down in screen time to about 45 seconds.

Here’s an example of the way in which an American-style montage can expand time and space: Imagine a pitcher on the mound of a baseball field preparing to fire a fastball to the catcher. But instead of presenting the pitch in one single shot taken from high in the stands, the director assembles an American-style montage sequence in order to enhance the game’s suspense: a full shot of the pitcher winding up; a long shot of the crowd in the bleachers; a medium shot of the manager looking tense in the dugout; another shot of the pitcher, this one a close-up, a moment later in his windup; a shot of a middle-aged guy watching the game on television in his den; a long shot of a group of fans beginning to stand up; a medium shot of the batter looking defiant; a close-up of the ball leaving the pitcher’s fist; a full shot of the
batter beginning to swing; a shot of the ball hurtling across the screen . . . None of these shots needs to be in slow motion for real time to be stretched out in reel time by virtue of montage. By assembling an American-style montage in this manner, the filmmaker has expanded an action that would take only a few seconds in real time into a 60- second montage.

There’s a third definition of montage, and it is the most complicated to describe and comprehend. In the Soviet cinema of the immediate post-Revolutionary period—which is to say the twentieth century’s late ’teens and ’20s—filmmakers conducted a fierce debate about the nature and effects of montage. Soviet filmmakers were excited by the 1917 Marxist revolution that sought to transform their country from a feudal state to a modern industrial empire overnight, and they wanted to find ways of expressing this political energy on film. The key filmmakers involved in this blend of polemical debate and cinematic practice were Sergei Eisenstein, Vsevolod Pudovkin, and Dziga Vertov. Pudovkin believed that shots were like bricks that were carefully placed, one by one, to form a kind of cinematic wall, and that montage was effectively the cement that held them together; the resulting film, like a wall, was more meaningful than the simple sum of its bricklike parts because montage added meaning to the individual shots’ content. Vertov, being essentially a documentarian, was not interested in the narrative cohesion montage could produce; his most famous film, Man with a Movie Camera (1929), is a kaleidoscopic assemblage of shots put together with the attitude of a symphonic musician rather than a storyteller.

For Eisenstein, montage meant a kind of dynamic editing used both to expose and explore the dialectics, or oppositional conflicts, of a given situation, and to create in the mind of the viewer a revolutionary synthesis. The most famous example of Soviet-style montage in film history, in fact, is the “Odessa Steps” sequence from Eisenstein’s Battleship Potemkin (1925). The situation Eisenstein depicts is a fictional re-creation of the 1905 uprising of sailors on the eponymous battleship; they mutinied against the harsh czarist government and received a strong measure of popular support from the people of Odessa, who in the sequence in question have gathered on the city
steps to voice their solidarity with the sailors. The czar’s soldiers march down the flight of steps and begin firing their guns at the citizens. Not only does Eisenstein edit this sequence of shots very rapidly in order to intensify the sense of conflict between the monarchy and the people, but the compositions within each shot are themselves full of conflict—strong contrasts of lights and darks, lots of diagonal vectors, and so on. There is nothing static about this sequence—not in its editing, not in its individual shots. It is the classic Soviet-style montage.

For Eisenstein, shots were meant to collide; his style of montage was the opposite of smooth, apparently seamless continuity editing (which is defined below). And his goal was to create in the minds of his audience a revolutionary synthesis of all these conflicts—to encourage the viewer, through montage, to think and see in a new and, he hoped, radical way. By editing these conflict-filled shots together in a way that intensifies conflict rather than smoothing it over, Eisenstein hoped to inspire in his audiences a kind of revolutionary thinking. For him, the creative act was not only that of the filmmaker who shoots and assembles the film. An equally creative act is performed by those of us who see the film; we take in all of these images by way of montage and consequently put the pieces together in our own minds in our own ways.


FIGURE 4.2 Four images from the “Odessa Steps” sequence, Battleship Potemkin (1925) (frame enlargements).

What links all of these definitions of montage is not only the splicing together of individual shots. What makes montage worthy of study in any of its three forms and definitions is that it is a fundamentally creative act—the product of artistic decision-making. As the French film theorist André Bazin once wrote, montage yields "the creation of a sense or meaning not proper to the images themselves but derived exclusively from their juxtaposition." As the rest of the chapter will make clear, editing compounds information and creates evocative associations that form a cornerstone of any film's expressive meaning.

THE KULESHOV EXPERIMENT

Film studies illustrates editing's ability to create new associations and ideas in the viewer's mind with another example from Soviet cinema—an apolitical example, but one that still neatly describes the way Soviet filmmakers viewed montage as imaginative and dynamic. By splicing together snippets of photographed reality, these filmmakers
understood that something new was being created—something that didn’t exist on a brute material plane but did exist in the minds of a movie audience—and only in those minds. The film director and theorist Lev Kuleshov is said to have conducted an experiment involving the effects of montage on an audience’s perception of emotion. He filmed the great Russian actor Ivan Mozhukin in medium close-up, with a sincere-looking but neutral expression on his face. Kuleshov then filmed a shot of a bowl of soup, a shot of a coffin, and a shot of a little girl playing. Figure 4.3 shows how Kuleshov edited the sequence.

Audiences are said to have marveled at the great actor’s extraordinary range and subtle technique. Mozhukin could express great hunger! Mozhukin could express extraordinary grief! Mozhukin could express exactly the kind of pride and joy a parent feels when watching his child at play! What a great actor!

In fact, of course, it was the same shot of Mozhukin, and he wasn’t expressing anything other than neutrality. It was the audience members who provided the emotional content of the sequence simply by making associations in their own minds from one shot to the next.

One of the underappreciated aspects of Kuleshov’s experiment is that Kuleshov didn’t just create emotional content by way of editing. He also defined and constructed three continuous but distinct spaces: Mozhukin and the bowl of soup in one, Mozhukin and the coffin in the second, Mozhukin and the child in the third. The actor was seen as being in the same place as the soup bowl; the same place as the coffin; and the same place as the little girl—spaces created solely by way of editing.

The dirty little secret of the Kuleshov experiment is the fact that nobody is on record as ever having seen the film itself. In point of fact, Kuleshov may never have screened or even filmed the sequence. But then he didn't have to. He knew it would work.

CONTINUITY EDITING

Classical Hollywood style, which film studies defines as the set of
predominant formal techniques used by most American narrative filmmakers through the twentieth century and to the present day, relies on several editing principles to achieve its central goal: to keep audience members so wrapped up in the fictional world created onscreen that they cease to be conscious of watching a movie and, instead, believe that they are witnessing something real. Whether it’s a romance between two believable characters or an action film with a larger-than-life hero or a horror film featuring a preposterous monster, classical Hollywood films want us to believe that we are watching reality, if only for the duration of the picture.


FIGURE 4.3 The Kuleshov experiment created three distinct screen spaces as well as narrative relationships.

For example, have you ever noticed that film characters rarely turn and look precisely at the camera and speak directly to you in the audience? Although direct addresses from characters to audiences
have happened from time to time—Annie Hall (1977) and Wayne’s World (1992) contain notable examples of this violation of formal convention—it’s startling when it occurs precisely because it occurs so rarely. The effect of such direct addresses is to jolt us out of our dreamlike immersion in the film’s story into a sudden awareness of the film’s artificiality: we know we weren’t there when the movie was being filmed, and we know that the character isn’t really talking to us at all. This jolt makes us aware that we’re watching a movie.

Classical Hollywood style strives to avoid calling attention to the means and forms of its own construction. Through strictly formal techniques, Hollywood films attempt to smooth over the many cuts that occur. They try to maintain a sense of spatial unity within each individual sequence. They attempt, to use loftier critical discourse, to efface themselves—to render themselves unnoticeable. The overall term that describes this formal system is CONTINUITY EDITING, also known as INVISIBLE EDITING. Continuity editing is a set of editing practices that establish spatial and/or temporal continuity between shots—in other words, any of the various techniques that filmmakers employ to keep their narratives moving forward logically and smoothly, without jarring disruptions in space or time, and without making the audience aware that they are in fact watching a work of art. Continuity editing strives not only to keep disruptions to a minimum but to actively promote a sense of narrative and spatial coherence and stability in the face of hundreds or even thousands of the discrete bits of celluloid called shots. What are these techniques?

The first set of continuity editing techniques involves ways to downplay the jarring effect of cutting. They are called editing matches. There are three essential ways of matching one shot to another, and they are defined according to how the match is made.

1. Matching on action

2. Eye-line matching

3. Graphic matching

MATCHING ON ACTION occurs when a piece of physical action in the first shot continues in the second shot. Here's a simple example: In
the first shot, a character opens a door; in the second shot, she goes through the doorway. Her movement provides the continuity that matches the two shots. If the shots are set up well and the editor knows his stuff, the audience will slide visually from the first shot to the second, thanks to the seemingly continuous, apparently uninterrupted movement of the character through the doorway.

Let’s use another baseball game as a more complicated example. The pitcher throws a pitch in Shot 1: we see him hurl the ball from the right side of the screen to the left. In Shot 2, the ball flies into the image from . . . which side? Yes, from the right side of the screen to the left. This makes it appear that it’s the same ball pitched by the same pitcher at the same time. How odd and disruptive it would be if the ball flew from the pitcher’s mound to home plate in one direction in the first shot and entered the succeeding shot from the opposite direction. It would make no sense visually. An experimental filmmaker may choose to create such a disruptive effect, but most narrative filmmakers seek to avoid that kind of visual illogic.

Now the batter takes a swing and connects: it’s a line drive, and the ball goes flying out of the image on the . . . right. When the ball reenters the image in the next shot—the second baseman is waiting for it—where does it enter the image and in which direction is it traveling?1

EYE-LINE MATCHING works on a similar principle, but instead of using the direction of a physical action to determine the way that shots are set up, filmed, and edited together, it's the direction of characters' gazes that determines where the camera is placed, in which direction the actors are looking when they're filmed, and how the two (or more) resulting shots are edited together. Before our pitcher throws the ball to the batter, he takes a long look at the catcher, who uses some hand signals to communicate with him. He then turns to the first baseman to check on whether the runner there is preparing to steal second. The director films the sequence in four shots, and when he edits them all together, these four shots make sense spatially because of eye-line matching:

SHOT 1: Full, eye-level shot of PITCHER on mound looking offscreen left.


SHOT 2: Full, eye-level shot of CATCHER crouching and forming hand signals behind batter; the catcher is looking offscreen right. The impression created is that the pitcher and the catcher are looking at each other, even though they are not in the same shot.

SHOT 3: Full, eye-level shot of PITCHER turning on his heels and looking offscreen right.

SHOT 4: Full, eye-level shot of FIRST BASEMAN guarding OPPOSING PLAYER and looking offscreen . . .2

When audiences see this sequence projected as part of an action sequence in a baseball movie, they will understand that the players are looking at one another. Why? Because the rules of eye-line matching have been respected. Imagine the spatial disorientation the audience would experience if the pitcher was filmed looking in the "wrong" direction; the sequence would make little spatial sense and would be much more challenging to follow. Many people in the audience would be bewildered. Confusion may be a legitimate artistic goal, and a truly radical filmmaker may choose to baffle people to make a point. But that filmmaker would find it difficult to succeed with most commercial moviegoers—a legitimate artistic goal in itself, perhaps, but whoever financed the picture would probably not see it that way.

The term eye-line match can also be used to describe an edit that occurs between a shot of a person and a following shot of an object, though there is a particular term that describes this kind of cut as well—the GLANCE-OBJECT MATCH. Say in the first shot we see a hungry-looking little girl in profile staring toward the left of the image; we would not be jarred or jolted in the slightest to find that the second shot in the sequence contained the image of a large dish of ice cream, and we would assume—given the fact that the director has set up the shots and matched them well—that the little girl is looking at the ice cream. The fact that the little girl was filmed on a Friday afternoon and the ice cream was filmed separately the following Monday would not matter in the slightest: the glance-object match would bring the little girl and the ice cream together spatially and temporally in a meaningful and coherent way.

The term eye-line match may seem odd to describe this two-shot
sequence because although the girl's eyes are directed offscreen left, the dish of ice cream can't look back at her. But eye-line matching does not require two or more sets of eyes. Even with two characters, only one set of eyes needs to look in a certain direction for an eye-line match to be made.

Of the three types of matches, graphic matching may be the most difficult to describe. It refers to matching made on the basis of a compositional element—a door or window frame, for example, or any prominent shape. Graphic matches are made by cutting (or dissolving, fading, or wiping) from one shape in the first shot to a similar shape— in the same relative position in the frame—in the second shot. If instead of the easily catchable line drive straight into the second baseman’s mitt in the example above, the batter had hit a high but long fly ball that appeared to be heading out of the park, the director could have made a point about the batter’s quick demise by matching a shot of the catcher’s empty rounded glove with the similarly sized and shaped fielder’s glove that receives the batter’s ball with a thwack.


FIGURE 4.4 A graphic match in 2001: A Space Odyssey (1968): from the prehistoric to the futuristic (frame enlargements).

If the batter had been successful, on the other hand, a graphic match could have expressed the point by comparing the shape of the flying ball to a round comet hurtling through the sky leaving a fiery trail in its wake. There is a famous graphic match in Stanley Kubrick's 2001: A Space Odyssey (1968) in which a prehistoric ape tosses a bone in the air in one shot. After it begins to fall to earth in a subsequent shot, Kubrick matches it with a rectangular spaceship, thereby not only signaling the passage of millions of years but also equating a primitive weapon with a futuristic means of space travel. Usually, graphic matches are not so clearly designed to add additional meaning to the sequence; graphic matches, like eye-line matches and matches on action, are generally employed to smooth over cuts rather than call attention to them.


THE 180° SYSTEM

In addition to these three types of matches, classical Hollywood cinema developed a so-called rule in order to maintain a sense of coherent space within a given film sequence: the 180° rule. Because it is a rule that is often broken, film studies tends more and more to call it the 180° SYSTEM. Terminology aside, the 180° system provides a simple but crucial way for filmmakers to preserve spatial coherence within a given scene.

Imagine a scene taking place in a living room; there are two chairs set at three-quarter angles to one another, and in these chairs sit two women.

The 180° system suggests that the best way for a director to establish and maintain spatial coherence in this scene (or any scene) is to draw an imaginary line across the axis of the action (the middle of the set), dividing it in two. In figure 4.5, the 180° axis is represented by a dotted line.

If the director keeps the camera on one side of the dotted line for the duration of the sequence—and she would probably choose the side that includes the characters' faces—when the film is edited and projected onto a screen, the woman on the left in the illustration will always be on the right. This seems simple.

FIGURE 4.5 The 180° system: the cameras generally stay on one side of the dotted line. See figure 4.6 for the corresponding images shot by each camera.

But what happens if she shoots a shot or two from the other side of
the imaginary line?3 Please note: the director can move the camera across the line while filming without disrupting spatial coherence because the camera movement would make it clear that the space is whole and unified. It is only when cutting across the imaginary line that spatial confusion may occur.

SHOT/REVERSE-SHOT PATTERN

One of the most common, efficient, and effective editing patterns developed by classical Hollywood cinema is the SHOT/REVERSE-SHOT PATTERN. To define this technique, let's use the example of the living room scene illustrated immediately above. The two women are seated in living room chairs set at a three-quarter angle toward each other, and they are having a conversation. Establishing and maintaining the 180° system, the director chooses her first shot to be taken from position 1 (see fig. 4.6). The resulting image onscreen is that of the woman in medium shot facing at a three-quarter angle to the left of the image. Since our director has chosen to film and edit this sequence using a shot/reverse-shot pattern, she then positions the camera to film the so-called "reverse angle," namely from position 2: now the camera faces the other woman, who is seen onscreen in medium shot, also at three-quarter angle, looking toward the right of the screen.

The word "reverse" in this instance does not really mean an absolute reversal of the camera's place; the camera does not cross over to its truly opposite position because that would mean violating the 180° axis system. Instead, shot/reverse-shot means that the shots alternate not between the two characters but between the two camera positions, one pointing right, the other pointing left. The shot/reverse-shot pattern can be used to reveal both characters in both shots—the camera pointing over the shoulder of one to the face and upper body of the other—or we can see them as individuals appearing to look at each other by virtue of eye-line matching. Or the shots can be imbalanced: in shot 1 we might see over the shoulder of one character
to the face and shoulders of another character, while the so-called reverse shot might only be an angled close-up of the first character. The point is that the director shoots an apparent reverse angle while maintaining the 180° axis system, thereby showing both characters from more or less equal but appropriately reverse angles.

FIGURE 4.6 The shot/reverse-shot pattern: If the first shot of a shot/reverse-shot pattern is 2A, the second shot—the reverse shot—would be 1A. If the director then cuts to a close-up of the woman on the left (1C), the reverse shot would be 2C. Now imagine editing a scene between these two characters, using the framings indicated.

STUDY GUIDE: ANALYZING SHOT-TO-SHOT EDITING

To learn to analyze editing, you are going to begin with an exercise of your imagination. You are a beginning filmmaker, and your assignment is to film—in exactly three shots—a character whom you will reveal, mostly by way of editing, to be mentally disturbed.

Other filmmakers might take the easy way out and use mise-en-scene elements like makeup (darkened, hollow-looking eyes, for instance, or a lot of stage blood smeared across the face) or a piece of outrageous physical action or dialogue (the character falling down on the ground and rolling around swearing incoherently, and so on). Still another lazy director could telegraph the character’s insanity at the beginning of the sequence by starting off with a close-up of a sign that reads “Pittsburgh Home for the Criminally Insane.”

Not you. You will convey this character’s insanity by way of editing. You have precisely three shots to do it with, too; not one, not two, not four, but three. Yes, the content of each of these three shots will convey information. But your primary task is to consider the shots’ contents in relation to one another.

Give yourself some time to consider the possibilities. Once you have thought about the problem for a few minutes, begin to construct the three-shot sequence in your mind at the very least and, if you have even the most rudimentary drawing skills, on paper.

What is your first shot? If you must convey a character’s craziness chiefly by way of editing, how do you begin the sequence?

Here are some potential opening shots:


1. An extreme close-up of a single eye staring blankly out from the screen.

2. A very high angle shot of a young man standing in an empty room and looking out the window.

3. A close-up of a young woman's hand nervously twisting a pencil around, seemingly unable to stop.

Bear in mind that the second shot of the sequence will need to relate in some way to the first shot, and that some form of transition must be employed.

Let’s concentrate on example 2, above, and decide on a second shot with which to follow it. Here are some potential second shots:

2A. An even higher-angle extreme long shot of the young man taken from outside the building. There is no one else in the shot: nobody on the sidewalk outside the building, and nobody in any of the other windows.

2B. A close-up of his foot tapping anxiously and continually on the floor.

2C. Linked by an eye-line match, an extreme close-up of a pigeon on the sidewalk far below.

Notice that not much is happening in our sequence. Except for the foot-tapping and pencil-twisting, there is very little physical action. But as the sequence begins to take shape, the relation of shot to shot begins (or should begin, anyway) to convey a certain uneasiness—a sense of anxiety that is enhanced if not entirely created by the relationship of one shot to another. It's true that the character's extreme solitude is expressed by the content of the shots of him looking out the window, but that effect is strongly enhanced by the relationship between the two shots. And the pigeon is disturbing not because it is a pigeon—we might be filming a benign sequence in a city park instead of a three-shot indication of a character's madness—but because it is seen in relation to the first shot of the young man, and the bird's relationship to him is unexplained and therefore troubling.

Let’s finish up by following through with 2A, above—the extreme long shot taken from outside the building. What kind of shot would drive home the point of the sequence? It’s your decision. WRITING ABOUT EDITING As you learned in chapter 1, there are seemingly innumerable mise-en-scene elements within a single shot if you look at it closely and carefully enough. Rapidly cut sequences only compound the extraordinary amount of visual information available for analysis. So to begin to learn how to write about shot-to-shot editing, take a DVD of the film of your choice and choose a simple and fairly short sequence to work on—say five to ten shots in a sequence lasting between 30 seconds and 1 minute. Be sure to choose an individual sequence or part of an individual sequence rather than the end of one sequence and the beginning of another; that’s the subject of chapter 6. A sequence is

94

simply a series of interrelated shots that form a coherent unit of dramatic action.) Describe the content of each image thoroughly, but concentrate on the methods by which the director effects transitions from shot to shot. What type of matching does he or she employ, if any? Does the sequence use a shot/reverse-shot pattern?

Here’s an example drawn from the end of Charles Chaplin’s City Lights (1931). It’s longer than ten shots, but it is one of the most famous sequences in world film history; not only is it exceptionally emotionally satisfying, but it contains a fascinating lapse in continuity:

SHOT 1: A two-shot of the Tramp and the Girl; the Girl is standing in the doorway of the flower shop on the right side of the screen, and the Tramp is standing on the sidewalk on the left side of the screen. The Girl reaches out with her right hand and offers the Tramp a flower she is holding. The Tramp turns and reaches out to take the flower with his left hand. The flower is in the center of the screen as he takes it. As the Tramp pulls his left hand back with the flower, the Girl steps toward him as he puts the flower in his right hand, and she quickly pulls him by the left hand toward her. Chaplin cuts on this action to:

SHOT 2: A closer two-shot taken from the reverse angle. This shot is taken over the Tramp's right shoulder. The Girl is seen at a three-quarter angle; both characters are in medium shot. The Girl, now holding the Tramp's left hand in hers, looks the Tramp in the eye and smiles, but as she begins to pat the Tramp's left hand with her right hand (the Tramp is still holding the flower in his other hand), her expression changes to one of newfound understanding: she recognizes the touch of her previously unseen benefactor's hand and realizes that this ridiculous homeless man has enabled her vision to be restored. She now appears to be looking not only at him but into him; her gaze is penetrating. Chaplin cuts on the action of the Girl patting the Tramp's hand to:

SHOT 3: A reverse-angle shot, taken at a closer distance (but not a close-up) of the two people's hands clasped in the center of the image. The Tramp's right hand is raised to his face. The camera tilts up and pans slightly left to reveal a closer medium shot of the Tramp, who is holding the flower in front of the right side of his face, his hand covering his mouth. The side of the Girl's head is visible on the right side of the image; her left hand is now touching the lapel of the Tramp's worn jacket. She begins to withdraw her left hand as Chaplin cuts on the action to:

SHOT 4: A reverse angle two-shot (the same as Shot 2). She now pulls her right hand back and touches her own face in a gesture that indicates comprehension; the weight of her recognition grows. Oddly, the flower the Tramp holds is no longer at the level of his face but is now in front of his chest. The Girl mouths the words, "It's you?" before Chaplin cuts to:

SHOT 5: A TITLE CARD that reads, "You?" Cut to:

SHOT 6: More or less the same shot as Shot 3, only a bit closer; now only the barest sliver of the Girl's head is visible at the very edge of the right-hand side of the image. The Tramp nods in response to the Girl's question. The flower is back up at the Tramp's face, and his index finger appears to be touching his lips. Chaplin cuts on his slight nodding action to:

SHOT 7: The same as Shot 4. The flower is again at chest level. Chaplin cuts rather quickly to:

SHOT 8: The same as Shot 6; the flower is again at the Tramp’s face. He points to his own right eye and mouths the words, “You can see now?” Cut to:

SHOT 9: A title card that reads, "You can see now?" Cut to:

SHOT 10: The same as Shot 8. The Tramp smiles. Cut to:

SHOT 11: The same as Shot 7. The flower is once again at chest level as the Girl nods and mouths the word, “Yes.” She swallows, indicating the depth of her emotion, and mouths the words, “I can see now.” Cut to:

SHOT 12: A title card that reads, “Yes, I can see now.” Cut to:

SHOT 13: The same as Shot 11. The Girl appears to mouth the word "yes" twice as she gazes at the Tramp. Chaplin cuts to:

SHOT 14: More or less the same as Shot 10: Not quite close enough to be a close-up of the Tramp, because not only is his face visible but also his shoulders, but close enough to register the depth of emotion in his face as he begins to giggle with pleasure at the Girl's recognition of him. The flower is once again at the level of his face, and his hand partially covers his mouth. Fade out.

What do you notice about the sequence? Film scholars have puzzled over the ending of City Lights since the film’s release in 1931; the sequence is one of the most emotionally satisfying endings ever filmed, and yet the director, Chaplin, violates one of the cardinal rules of continuity editing by not matching the position of the flower from shot to shot. Is this simply an error—an editing glitch? Or is there an expressive purpose behind it? Could it be a glitch and still have expressive meaning? These questions could form the basis of a great final paper for your course.

1. If the ball flies out of the right side of the image after the batter hits it, it must enter the image again on the left for the rules of continuity editing to be observed.

2. If the pitcher is looking offscreen right, the first baseman must look offscreen left if we are to believe that the two men are looking at each other.

3. If the director cuts across the axis and shoots from the other side of the line, the characters, when projected, would appear to flip from their regular side of the screen to the opposite side. For most audiences, this would be jarring and disruptive.


CHAPTER 5 SOUND

A VERY SHORT HISTORY OF FILM SOUND

We call them silent movies, those early films that did not have a soundtrack. But they weren't actually silent. Most motion pictures of that era were screened with some form of live music. In large, urban theaters, exhibitors would often hire a full orchestra to accompany the movies they showed, while in small venues there would simply be a pianist. Organs, too, were commonly used to accompany films in those years. Not only could a single pipe organ or electric organ simulate a variety of instruments from clarinets to violins, but it could also provide a variety of sound effects such as bells and knocks.

It cannot even be said that silent films lacked spoken dialogue. Characters spoke to each other all the time. But instead of hearing their words, audiences read them onscreen in the form of TITLE CARDS —some of those words, anyway, since not every line of dialogue was printed in full. Title cards also conveyed information about characters, when and where scenes were set, and so on.

Experiments with synchronizing the image with audible, recorded dialogue, music, and sound effects began in Hollywood in the 1910s. By the early 1920s, there were two competing systems—SOUND ON FILM and SOUND ON DISC. The latter system recorded sounds on phonograph discs (otherwise known as records), which then had to be cued to begin playing at precisely the correct instant in order to match the images that were projected onscreen. The sound-on-film system, which proved less cumbersome and which ultimately was adopted around the world, records sound onto photographic film in the form of light waves, which are then read optically by the projector and converted back into sound.


In October 1927, Warner Bros. released The Jazz Singer, the first feature film with synchronized dialogue and songs. Starring the popular song-and-dance man Al Jolson, it’s the story of a young Orthodox Jew who defies his father by becoming a jazz singer instead of a cantor (a vocalist who sings prayers during Jewish services). Jakie Rabinowitz leaves home, Americanizes his name to Jack Robin, and turns up ten years later at a cabaret, where he sings a synchronized song and then addresses not only the audience onscreen but the movie audience as well: “Wait a minute! Wait a minute! You ain’t heard nothin’ yet!” The Jazz Singer was a successful moneymaker for Warner Bros., and although it took several more years for synchronized sound to become standard in world cinema, the film effectively signaled the end of the so-called silent era and the beginning of feature-length TALKIES, an era that continues today.

Perhaps needless to say, there have been many technological developments since The Jazz Singer—advances in microphones, sound recorders, and speakers—but the details of these improvements are best left to upper-level filmmaking and film history courses. One common term that you might be curious to know a bit about, though, is DOLBY, since many if not most commercial films carry that particular credit and logo. Dolby Laboratories specializes in the noise reduction system invented by Ray Dolby in 1965 for use first in the recording industry and, a few years later, in the cinema. (The first film that used Dolby technology was Stanley Kubrick’s A Clockwork Orange, 1971.) The Dolby system greatly reduces background noises, enhances the clarity of voices, sound effects, and music, and currently offers a total of six separate channels that play sound from speakers placed behind the screen on the left, center, and right, a subwoofer, and surround-sound speakers on the left and right in the auditorium. The digital information for each of these channels is placed, ingeniously, between the sprocket holes of the film.


FIGURE 5.1 Soundtracks (left to right): (A) the Sony Dynamic Digital Sound (SDDS) track; (B) sprocket holes; (C) the Dolby Digital track; (D) the analog soundtrack.

RECORDING, RERECORDING, EDITING, AND MIXING

The process of creating, manipulating, and playing back cinematic sound is typically long, expensive, and increasingly complex in terms of both technology and personnel. Since the specific technologies involved in filmmaking are, here as elsewhere, less important to film studies than the meanings generated by the finished films, there is no point in overwhelming you with technical details in an introductory class. Still, you should be aware of the enormous amount of time, effort, and skill that go into every feature film soundtrack you hear.

Just as the creation of the image track begins with light entering the camera’s lens, the soundtrack’s genesis is the sound that enters a microphone. A scene can be miked by way of a BOOM, a glorified broom handle onto which a microphone is attached before being held out over the actors’ heads just out of camera range, or by way of mikes attached to the actors’ bodies, heads, or clothing. RADIO MICROPHONES have the virtue of being small and wireless—and therefore easy to hide. In addition, there are SHOTGUN MICROPHONES
that pick up sounds at some distance but must point exactly in the direction of the sound being miked—usually an actor’s speech.

It’s not only dialogue that requires one or more mikes. Sound effects, too, must be picked up by a mike before they can be recorded, as does music. This is not only for amplification. Microphone filters reduce if not entirely eliminate unwanted or unnecessary frequencies, thereby rendering the recorded sound even clearer than the original.

Sound is recorded and edited in either analog or digital form. In an analog system, sounds are recorded onto magnetic tape that is edited in much the same way as the image track; the magnetic tape, like the film, is literally cut into strips of varying lengths and spliced together sequentially. Editors rely on the clacking of the clapboard described in chapter 8 to synchronize the image track with its corresponding magnetic tape. Contemporary digital editing systems rely on a DIGITAL AUDIO WORKSTATION (DAW), a computer and specialized application that match the digital recording with the image.

Much of the work of creating the soundtrack is done during POSTPRODUCTION, the period after the images have been shot. In a way, the term postproduction is something of a misnomer, precisely because the production process goes on long after the photography has been completed. This is certainly the case with the soundtrack.

FOLEY ARTISTS are sound effects creators who duplicate certain sounds in a special recording studio called a Foley stage or Foley studio. Various kinds of footsteps, for instance, don’t sound quite right when they are recorded at the time they are photographed. It takes a Foley artist, specifically a Foley walker, to re-create a more accurate effect by walking on, say, sand or gravel and recording the sounds that result.

Foley artists and other sound technicians may strive to create or enhance the audience’s sense of realism, but others can be said to work toward a different aesthetic goal. We take film SCORES for granted as a part of our experience of motion pictures, but unless we walk around listening to our iPods all day, our real lives do not have musical accompaniments to set and develop the mood of the moment the way movies do. The effect of a score, then, cannot be considered to be simply “realistic.” Appropriate to the particular film, yes
—“realistic,” no. Whether it’s chords from a single guitar or the rich, symphonic sound of a full orchestra, the elements of a film’s musical score augment the audience’s emotional response to the characters, story, and images irrespective of whether the moment is meant to be real-seeming or not.

An original score may be written by a composer, though many films also (or even exclusively) utilize preexisting recorded songs.

Finally, the sound mixer takes all the different components—the dialogue tracks, the sound effects, the score, and so on—and brings them together in an aesthetically balanced way. Adjusting volume and tone by bringing parts of the dialogue up while bringing certain sound effects down, and manipulating the tracks so that sounds seem to move around the theater—these are some of the critical tasks in MIXING. And just as there is no single, correct way to film any given scene, there is no single and correct way to mix sound. It's all a matter of honoring the director's vision (or in this case hearing). A horror director, for example, may want her sound mixer to produce an unnerving, echo-like quality on the soundtrack—a quality that might not be what the director of a romantic comedy would want. (Then again, it might be, depending on what the director is trying to achieve.) A composer may create an especially beautiful piece of orchestration, but if the director doesn't think its use is of value to the scene for which it was written, the sound mixer will remove it from the soundtrack.

ANALYTICAL CATEGORIES OF FILM SOUND

Because the goal here and throughout Film Studies is not to try to figure out how a given cinematic element—in this case sound—was created, but rather to locate, define, and analyze elements of expressive meaning, you may be wondering how to deal with film sound productively. Let's begin by categorizing some of the various sounds you actually experience when watching and listening to films.

The three main categories of film sound are dialogue, music, and sound effects. Dialogue includes all the spoken words in a film.
Dictionaries define the word dialogue as a conversation between two people (as opposed to monologue—a speech, usually long, by one person). But in film terminology, dialogue refers to any spoken words, including conversations, monologues, random words audible in crowd scenes, and voiceover narration. The term music is self-explanatory, but it is important to remember that in film, music may be diegetic or nondiegetic; it may be sourced within the world of the story (we see someone with an iPod and earphones onscreen and we hear the music she is hearing) or not (we hear music playing as a bear walks alone through the woods). Sound effects are all other noises, including both diegetic and nondiegetic sounds. The crash of ocean waves, birds chirping, a cannon’s boom, comical honking sounds—every noise that isn’t spoken words or music is considered a sound effect, whether it’s diegetic or not.

One key distinction to make among types of film sound is whether it is SYNCHRONOUS or NONSYNCHRONOUS. Synchronous means occurring at the same time, which in cinematic sound terms means that a sound is heard at the same instant as its source appears on the screen. We see a woman’s lips move, and we hear the matching words. A visitor presses a doorbell, and we hear the bell. A high school kid turns on a car radio, and music begins to play. These sounds and images are temporally and spatially matched: synchronous sound.

Not all sounds are synchronous. Nonsynchronous sounds, in contrast, are sounds that occur at a different time and/or in a different space than what appears onscreen. Directors and editors often use nonsynchronous sounds to make cuts from scene to scene smoother, less jarring. These are called SOUND BRIDGES. For instance, at the tail end of one scene, a man is being handcuffed and led away in long shot. Anticipating the next scene, the sound of a cell door clanking shut is heard on the soundtrack for a second or two before the director cuts to a full shot of the man, now a prisoner, in his cell. For the last few seconds of the earlier scene, the clanking sound is nonsynchronous with the image; once the next scene begins, the sounds of the cell become synchronous.

Some nonsynchronous sounds are mismatched with the image, either by intention or by technological failure. When a film print becomes worn, for instance, or has been poorly printed to begin with, characters' speech may slip out of synch. Film dialogue that has been dubbed into another language is also nonsynchronous—characters' lips move to form words that are obviously different than those heard on the soundtrack.

A related term is ASYNCHRONOUS SOUND. Whereas the prefix non- means not, the prefix a- means without. Asynchronous sound, therefore, refers to those sounds that are heard without their sources being seen onscreen; the term asynchronous sound means the same thing as the term OFFSCREEN SOUND. For example, a shot of an office located in a large city may be accompanied by faint traffic noises. The cars and buses that are presumably producing the sounds aren’t visible onscreen—the shot is of an interior, and there may not even be a visible window—but the offscreen sounds are understood to be occurring simultaneously with the visible action.

The term offscreen sound leads to yet another: AMBIENT SOUND—the background noises of the scene's environment. Very few places on Earth are perfectly silent. Car motors run and leaves rustle; pipes rumble and air conditioners hum. When film conversations are shot and recorded, edited, and screened without accompanying ambient sounds, they sound like they are taking place in a weirdly muffled or even totally soundproofed room. Listen to the room in which you are sitting, and hear the particular ambient sounds of that space. Noises like the ones you are now hearing are generally recorded separately from the dialogue and added later during mixing. Even in the absence of car horns and alarms, barking dogs in the distance, and the like, every room has what is called ROOM TONE, which is also recorded apart from the dialogue and inserted into conversations to fill any gaps that may otherwise occur.

As you already know, another way of distinguishing cinematic sounds is to categorize them on the basis of whether they are DIEGETIC or NONDIEGETIC—in other words, whether the sounds are sourced in the world of the story or not. Almost all film dialogue is diegetic: characters speaking, whether synchronously, nonsynchronously, or asynchronously, usually do so within the world of the story. For a funny exception to this general rule, see the beginning of the comedy The Girl Can't Help It (1956), directed by Frank Tashlin. The star of the film, Tom Ewell, appears onscreen, looks toward the camera, and directly addresses the audience: "Ladies and gentlemen, the motion picture you are about to see is a story of music. I play the role of Tom Miller, an agent . . ." Ewell's dialogue refers to the story but remains outside of it; it is, therefore, nondiegetic dialogue.

With the notable exception of so-called backstage musicals, a film's score is almost always nondiegetic; it is rarely sourced in the world of the story. (Backstage musicals are musicals that take place in the world of theater and film; their characters are themselves performers who are putting on a show, so their songs are often set in a real-world context.) In a melodrama, the heartbroken mother of a dying baby begins to cry while, on the soundtrack, we hear the sound of violins. Or in a horror film, someone slowly approaches a closed door behind which lurks a group of flesh-eating zombies, and we hear a couple of beats of tense percussion. The violation of this general rule can be used to great comic effect. In Blake Edwards' Pink Panther comedy A Shot in the Dark (1964), Peter Sellers, as Inspector Jacques Clouseau, turns up at a nudist colony and walks ludicrously past an equally nude orchestra playing the familiar "Theme from A Shot in the Dark" by Henry Mancini. The film's score, which had previously been nondiegetic, suddenly and ridiculously becomes diegetically sourced in the world of the story.

In their book The Film Experience, Timothy Corrigan and Patricia White write: “One question offers a simple way to distinguish between diegetic and nondiegetic sound: can the characters in the film hear the sound?” If they can hear it, or if they could conceivably hear it, it’s almost certainly diegetic. For example, a woman who is alone in her bedroom might not hear the faint footsteps of the intruder who is about to kill her, but the intruder can hear them—and we hear them, too: these footsteps are therefore diegetic sounds. The creepy music that accompanies the scene, however, cannot be heard by either the woman or the intruder and is therefore nondiegetic—unless, of course, that particular piece of music is coming from a radio or other sound source in the woman’s bedroom; in that case, both she and the intruder can hear it, which would make it diegetic.

Sometimes in a narrative film, a character seemingly speaks directly to the audience without appearing onscreen. This is called voice-over narration, or VO. Otto Preminger's classic film noir, Laura (1944), for instance, begins (after the opening credits, which are accompanied by the nondiegetic "Theme from Laura") with a voice-over: a dead man speaks to us from an entirely black screen for several seconds before an image appears. VO is also often used to convey a character's (otherwise) unspoken thoughts. For example, a shot of a college student looking attentively at his professor, who is discussing nondiegetic sound, may be accompanied by his voice in VO commenting on how boring the class actually is and how he would much rather be on a beach with his girlfriend.

SOUND AND SPACE

Ever since sound became standard in world cinema in the 1930s, filmmakers have had to construct spaces aurally as well as visually. Because of technological limitations, the earliest synch sound films were especially cumbersome to create, and accordingly, the resulting sound space had an artificial quality. Microphones picked up not just the intended sound but any sound, including that of the camera; as a result, cameras had to be placed in soundproofed boxes. Moreover, all the sounds had to be recorded simultaneously—there were no separate tracks and no mixing—so, if the film was to have an orchestral score, an orchestra had to be present on the set, playing alongside the actors as their dialogue scenes were shot. These technological requirements lent a certain canned quality to the sound of these films. The aural space created made every scene sound like—well, like it had been recorded in a cavernous studio soundstage. Interiors, exteriors—it didn't matter. It all sounded the same.

Improvements in microphones, cameras, recording devices, playback processes, and speakers have done much more than free cameras from their soundproofing boxes (not to mention musicians from the set). Because of the clarity of contemporary soundtracks, we are able to perceive and appreciate a more complete environment for every image we see—because we can hear it in minute detail. Whether the sound is synchronous, nonsynchronous, asynchronous, diegetic, or nondiegetic, various properties of the sounds we hear contribute to the creation of aural space.

AMPLITUDE, otherwise known as volume, refers to the loudness or softness of the sound we hear. As always in film studies, it’s the effect of amplitude that counts toward plausibility, not its strict realism. For instance, a scene set at a rock concert may begin with a master shot of the entire arena accompanied by deafening music on the soundtrack, but when the director cuts to a concertgoer saying something to her date in medium two-shot, the volume of the music drops—slightly perhaps, but no less significantly—to allow the audience to hear the character’s dialogue. In reality, of course, musicians and their acoustical engineers don’t accommodate even their most devoted fans’ conversations in this way, but in the movies, audiences not only accept but even expect this particular sound convention. If the amplitude didn’t drop, and the audience couldn’t understand what the characters said to each other, they would likely become angry and frustrated.

The aural space is maintained in this example by virtue of the fact that the change in volume is subtle. If the amplitude fell to too low a level during the conversation, the aural space would no longer be plausible, and the scene would begin to seem unrealistic. Bear in mind: this could be a legitimate artistic strategy. Dropping the music's volume drastically might serve to make the two concertgoers seem to be living in their own world—a romantic world composed of just the two of them despite the fact that they are in the company of 18,000 other concertgoers. There's a scene in John Woo's Mission: Impossible II that tests the limits of plausibility in this regard: two characters (Tom Cruise and Thandie Newton) engage in a completely audible conversation while speeding alongside each other in competing convertibles on a winding oceanside highway. Plausible: barely. Realistic: not at all. Appropriate for the film: absolutely.

Amplitude is often used to establish and fortify the audience's sense of distance. An extreme long shot of a cowboy on horseback, for instance, may be accompanied by the sound of wind and dust blowing on the soundtrack, but if the director were to cut to an extreme close-up of the horse's hoofs, the likely accompanying sound would be that of hoofs hitting dirt. (These sounds would probably have been created not by the horse but by a Foley artist.) This would not be the case, however, if the next shot in the sequence were of a man looking through a powerful telescope (the object of his gaze presumably being the horse's hoofs), in which case the implication would be that the watcher was at a far enough distance from the cowboy that he would not hear the hoofbeats—and neither would we.

Variations in amplitude may also indicate a character’s subjective awareness of the world around her. In the Korean horror movie The Host (2006), for instance, a young girl is preoccupied with something that has just happened to a family member when she leaves her father’s snack bar and goes outside to walk in a riverside park. There is relative silence on the soundtrack—that is, until she gradually becomes aware that she is in the midst of hundreds of screaming people who are fleeing a giant mutant river monster. The director, Bong Joon-ho, brings up the sound level of the screaming, thus indicating a shift in the girl’s awareness. (Tragically, she is soon lassoed by the hideous serpent’s tail and spirited away to its foul lair deep in the Seoul sewer system.)

Amplitude is an important signifier of closeness or distance, but it is not the only factor. The term SOUND PERSPECTIVE describes the aural equivalent of three-dimensional vision. Just as we perceive depth, the third dimension (the other two being height and width), largely on the basis of the relative size of objects in the foreground, middleground, and background, we hear in perspective, too. Sound perspective describes the relative proportion of direct sound and reflected sound. Direct sound is created (directly!) from its source: an actor's mouth is miked, and the sound it produces is recorded. This is direct sound. Reflected sound is sound that bounces off the floor, ceiling, and walls and reaches the microphone slightly later than the direct sound—out of phase with it. (Phase refers to the timing offset between sound waves.) The actor's voice creates such sounds as well. Sound perspective is created by mixing direct and reflected sound together. A high ratio of direct sound to reflected sound suggests closeness; a low ratio of direct sound to reflected sound suggests distance.

Two other sound properties are PITCH and TIMBRE. Pitch is a sound's fundamental frequency; timbre is its tonal quality. Women's voices are generally pitched higher than men's because their vocal cords tend to be smaller and the resulting frequencies higher. A violin playing a particularly high note produces a particularly high pitch; a bass violin's pitch is much lower. Timbre, meanwhile, covers virtually every other aspect of a given sound—it's also called tone quality. Does a piece of music sound rich or does it sound tinny? Is a man's voice nasal? Is a child's voice shrill? Does a sound effect have a particular depth, or does it sound thin? All of these characteristics, and more, fall under the general category of timbre. Timbre, in short, is difficult if not impossible to define precisely, but we know it when we hear it.

You do not need to become an acoustical engineer to appreciate the variations in pitch and timbre that affect your perception of film sound. Just try to increase your awareness of the differences in recorded human voices that affect your understanding of different characters and the particular characteristics of any film sound you hear. Practice describing what you hear using the concepts of pitch, timbre, and amplitude as a jumping-off point.

STUDY GUIDE: HEARING SOUND, ANALYZING SOUND

The first step in analyzing film sound is, of course, to notice it in the first place. It’s easy to become so distracted by what you’re seeing that you fail to notice what you’re hearing—the particular qualities of a given sound, its relationship to the image that accompanies it, and so on. Consider the information you receive solely by way of sound, as opposed to what you learn solely from the image.

As an exercise in distinguishing sound information from image information, read the following shot breakdown of the opening, pre-credits sequence of John Turturro’s 2005 film, Romance & Cigarettes:

SHOT 1: An extreme close-up of a pattern of flesh-colored waves. The camera tracks back to reveal that they are the friction ridges of the big toe on a man's right foot. The foot twitches. The camera continues to track, now in a circular right lateral movement, around the left side of a gray couch; the foot and leg are surrounded by gray pillows. The darkness of the room as a whole, and of the couch area in particular, renders the image almost entirely indecipherable at one moment of this tracking shot before the man's bare arms, folded over his beefy chest, become visible. The camera is now positioned over the man's head; the man (James Gandolfini), whose eyes remain closed throughout the sequence, appears upside down in the image.

SHOT 2: A long shot of a woman entering the shadowy room; the shot is taken from the vicinity of the man's head. She is in silhouette. As she approaches the couch, the director cuts to:

SHOT 3: A close-up of the man’s face; his mouth twitches.

SHOT 4: A medium shot of the woman, now in dim light. She flicks a lighter and lights a cigarette.

SHOT 5: A medium shot, from her POINT OF VIEW (a POV shot), of the man lying on the couch.

SHOT 6: Same as Shot 4: a medium shot of the woman. The camera tilts down slightly as her hand, now holding the lit cigarette, moves down toward the bottom of the image.

SHOT 7: A slightly closer shot of the man. He is smiling. The camera tracks forward.

SHOT 8: A slightly closer shot of the woman. The camera tracks forward as she blinks and looks away.

SHOT 9: A close-up of the man’s bare foot. A hand reaches into the image from the top of the frame. The hand carefully places the lit cigarette between the man’s big toe and the toe next to it; the lit end is on the side of the foot’s sole.
