
Posted on Oct 29, 2009 | 1 comment

Walter Murch Special: Interviews


The Walter Murch Special ends here. It has been a very interesting journey through the articles of one of the most important figures in the history of audio and video creation. November will be a great month for the blog… If you were hoping for game audio articles, you’ll love the November Special!

Finally, let’s read a nice round of Walter Murch interviews selected from several websites:


I noticed a picture in a recent interview of you in a small studio – is that your personal studio?

That was up in the barn, and I was editing what you might call a director’s cut of “Touch of Evil”, which Orson Welles directed forty years ago. Any project that I take on, particularly short-term projects, I can just do up in the barn, renting whatever happens to be the available and appropriate technology for that particular film. So what I had there was an Avid, which is a film editing machine that has up to 24 tracks of sound running along with it, but you can only actively work on 8 at one time. If you get a track the way you want it, you can make it one of the ‘sleeper’ tracks, sort of ‘demote’ it to playback only, and then move another track up into the active area. So you can play back 24, but you can only actively work on 8 tracks. Everything just ran through a Mackie mixer, which was also feeding audio from CDs, cassettes, DAT machines, and a DA88, which is an 8-track recorder that uses Hi8 video tape.

How do you feel about working with a computer based system versus something like an analog tape machine?

Well, for me it’s fine. There are some people who claim to be able to tell the difference between professional digital equipment and analog equipment. I can’t. The operational advantages of using digital are so great that I focus on those, and not on what I guess might be the “digital” sound of it. “Touch of Evil” was a film that was done in 1958, so there wasn’t a wide-range soundtrack to begin with.

Were you taking the existing sound track and mixing it with some sounds that were recorded now, or…?

Well no, we had separate dialog, music, and sound effects from the original magnetic masters, so we loaded those via DA88s onto the Avid’s hard disk, and I was editing them, suppressing the music in some cases, lifting the level of the effects in others. All of this was following Orson Welles’s notes. Where we had to make changes, we simply stole (sound) from various places within the film. The goal was to make something that still sounded like it was all done in 1958, with a minimum of disruption of that particular kind of sound.

The English Patient. What process did you follow for mixing that?

I produced a ‘guide’ track on the Avid, and then that was transferred at a higher quality onto a Sonic Solutions system at Fantasy Records. Then, coming out of the Sonic Solutions, after it had been cut, we would make transfers either onto 6-track film or DA88s.

What we did on “Touch of Evil”, because I was working on the finished soundtrack right from the beginning, was to take my Avid sessions and re-create them, opening them up through an OMF (Open Media Framework) file and converting them into Pro Tools, which is another sound editing environment (Digidesign and Pro Tools are both owned by Avid). That was a real timesaver, in the sense that all of my decisions, the cuts, fade-ins, fade-outs, and level settings, were maintained when the soundtrack was opened up in the Pro Tools environment.

So all they had to do was tweak what I had done and refine it, because the tools that they have in Pro Tools are much more precise than what I have on the Avid.

On The English Patient, they really had to rebuild everything that I had done from scratch, which was a time-consuming process.




Has film sound led us to hear the world differently?

Yes . . . sure. I hesitate only because Welles was doing the same kind of thing with radio back in the 1930s. Then he continued to innovate when he got into film. If you listen to many of his films, including Touch of Evil, if you don’t watch the picture, you kind of hear the sort of things that he was doing on radio, both with dialogue and sound.

Never before in history, before the invention of recorded sound, had people possessed the ability to manipulate sound the way they’d manipulated color or shapes. We were limited to manipulating sound in music, which is a highly abstract medium. But with recorded material you can manipulate sound effects—the sound of the world—to great effect. In the same way that painting, or looking at paintings, makes you see the world in a different way, listening to interestingly arranged sounds makes you hear differently.

Sound came to film in the late ‘20s, but when it arrived, it anticipated the even later arrival of tape.

That’s very true. Whenever you work in film, you’re working with tape. It just happens to be tape with sprockets on it.

You find things being examined in one discipline; people develop a facility within that area. When they suddenly can expand into another area, there’s a ready-made disposition. They already know how to do it, in a sense.

Touch of Evil recalls themes and approaches developed in your own work—the theme of surveillance, the use of source materials. Was working on it something like gazing into a mirror?

In a way, yes. It wasn’t a film with which I was intimately familiar before I began work on its restoration. I’d seen it a couple of times, but I hadn’t studied it the way some people have, on a frame-by-frame basis. Obviously, when you do a restoration, you really have to get down with the film on a very deep, technical level. But yes, I’d done work on The Conversation, which was all about surveillance, and American Graffiti, which was all about the creative use of source music. Welles had anticipated both of those things in Touch of Evil.

So Welles was less a direct influence than you both followed the logic inherent in recording technologies.

Once you take sound seriously—you think, “How can we use it to the best effect?”—it’s almost inevitable that you’ll start coming to the same conclusions as somebody else who was thinking along the same lines. I’d seen Touch of Evil. Who knows how subconsciously it influenced what I did.

Are there people in film, besides Welles, that you regard as anticipating later accomplishments with magnetic tape?

Certainly Murray Spivak, who was one of the premier and earliest sound editors. He worked on King Kong. You’ll find the most creative use of sound in films like King Kong or in Warner Brothers’ cartoons of the ’30s and ’40s—and Disney to a certain extent. They weren’t limited by reality, and so they recorded interesting, fantastic sounds and, then, arranged and combined them in interesting ways—more so than features. Features were late in developing that sensibility. I grew up on Warner Brothers’ cartoons. When I was five or six, I felt that they were fantastic. They laid down a very rich bed of information that I became aware of only much later.

By all accounts the division of labor at RKO, where King Kong was made, and at Warner Brothers, with Tex Avery, Raymond Scott, and Chuck Jones, wasn’t as strict as elsewhere. Ideas could circulate.

Exactly. Remember that sound alone, just the fact that there was sound at all, was a huge thing in the ’30s—for ten years. We’ve had Dolby sound in theaters for almost double that amount of time. You can imagine the sense of accomplishment in getting any sound at all and, then, investigating stories with spoken word and a certain amount of sound effects? Plus, it was a corporate world in the sense that there were very few independent motion pictures, and those that were made had tiny budgets. Sound was expensive; they couldn’t do much inventive work on that level. The push had to come from the director—somebody like Hitchcock or Welles—who said, “I am interested in sound.” Otherwise, the tendency was to do a journeyman-like job and not spend too much money, because they’d already jumped over the post, so to speak, since there was sound to begin with. The really creative use of sound was something that took time. But there are many exceptions to that rule. Renoir, for example, claimed to be the first person to record a toilet flush and put it in a movie. He strung a microphone from the studio’s sound department to a toilet, flushed the toilet, recorded it, and put it in a film he directed in the very early ’30s.

Taking an example from your own work, when you edited sound on American Graffiti, did you have an entire radio show recorded that you could reference as needed?

Yes. We produced a two-hour radio show with Wolfman Jack as DJ—with commercials, with the songs. George [Lucas] built that show himself. While he was editing the film, he edited the songs, the commercials, and the disk-jockey patter. That is what’s called a “B-track.” It ran alongside the dialogue during the editing of the film.



Let’s begin with a cliché question. What was your first experience with film that had an influence?

Well, the first thing that struck me forcefully was the invention of the tape recorder and its dissemination as a consumer item, which started to take place in the early ’50s. The father of a friend of mine owned one, so I wound up going over to his house endlessly, playing with this recorder. And that passion, which was a kind of delirious drunkenness with what the tape recorder could do–that it could capture an aspect of reality and instantly play that reality back, and that you could then reorder that reality by transposition, and that you could even do layerings of sound–was just intoxicating, and it occupied nearly the whole first half of my teenage years. So, my entry into the world of film is really through sound rather than image.

The moment that the whole idea of filmmaking hit me was when I was 15 and went to see The Seventh Seal [by Swedish director Ingmar Bergman]. I’d seen lots of movies before that, of course–the average number of films a kid growing up in New York City would see. But The Seventh Seal was the film where I suddenly understood the concept that somebody made this film, and that there was a series of decisions that could have been different if someone else had made the film. I really got a sense of a single person’s interest and passions through watching that film, which in fact was true. This was Ingmar Bergman, after all.

Then I became interested in architecture and oceanography and art history and French literature, and those were the things I mainly pursued as an undergraduate. It was only later on in my college years that I started to get interested and see the actual possibilities of working in film, which was largely through having spent my junior year in Paris in 1963. This was when the New Wave, the Godard and Truffaut style of filmmaking, was at its peak. I came back buzzing with the idea of film, and then I found out that there were actually schools that you could go to to study film–graduate schools in film, which I found incredible. I applied to a number of them, and I got a scholarship at USC. Strangely enough, it was only when I got to school that I discovered the fact that films needed sound, and that somebody had to record it, and then you had to “cook it,” in a sense, in post-production. And I saw immediately that this was exactly what I had been doing ten years earlier.

You occupy a rare position of being a film editor and re-recording mixer. How does your involvement with the picture influence the final soundtrack?

Well, it goes very deep with me. I’ve been doing this professionally since The Conversation, which we started shooting in 1972. But I was doing it previously in film school. It’s a combination that appealed to me and appeared to be a natural thing to do at the time, and I’ve now been doing it so long that it seems second nature to me.

An illustration of one aspect of my approach is that when I’m first putting the images together–creating the first assembly of a film–I turn off all the sound, even for dialog scenes. What that does is focus me more intently on the visuals, because I’m reading them the way a deaf person does–I have to extract meaning, greater meaning, out of them because of a sensory deprivation. But also, paradoxically, I pay more attention to the sound because, although I’ve turned the speaker off, I’m still “hearing” sound; it’s just that I’m hearing the sound in my imagination, the way it might finally be. I’m lip reading the dialog, imagining the music, imagining sound effects, I’m imagining all these other things that, were I to turn the bare production track back on, would all disappear, kind of like fairies frightened away by the voice of an ogre. So, at the very first moment that the film is acquiring its shape, it’s already welcoming the influence of the final soundtrack.




How do you apply your feel for the film’s rhythm to transitions and dialogue?

Ultimately, I want a dialogue scene between two actors to have the feel of a natural ebb and flow of exchanges of information, threats, love or laughter between two people.

Watch two people talking. As they talk, they reach a point where they’ve made their main point, but still continue. For instance, if I say: “It’s very hot out here today, don’t you think?”, the portion “It’s very hot out here today” is really the key line. I’m just being polite by adding an extra phrase. You’ll find yourself looking at one person until they’ve made their essential point. Then you’ll find yourself looking at the other person, wondering what their response is.

The first person’s dialogue will overlap into the reaction shot of the second. Then you look at him until he’s said what he has to say, and cut back to the first person reacting. So there’s this ebb and flow; the dialogue dances with the images. This is a wonderful, often unnoticed but critical part of what makes a scene come alive.

In contrast to that would be holding on a person while they talk, then cut to the other person. You’re on them until they’re finished, and so forth. That produces a staccato; I call it “Dragnet Style,” after the old TV series, which used it very effectively.

Like any style, it can be overused and you have to find what’s appropriate. Under normal circumstances, your reaction to what’s being said has a much more fluid feel to it. We try to capture that fluidity in how we manage overlaps with dialogue.

How do you use sound overlaps in transitions?

We use that a lot. Look at how we use sound in The English Patient. Many times the sound for the scene that’s about to happen starts to bleed into the end of the earlier one. You are aware of something happening, but you don’t quite know what it is. Then, when you cut to the second scene, you find out.

It’s like what happens when the alarm clock goes off and you don’t wake up, but incorporate the sound of the alarm clock into your dream. Then you wake up and say “Oh, it’s just the alarm clock.” We used that technique a lot in how we moved from one scene to another.



The English Patient is a lyrical, non‑linear novel with an elaborate flashback structure. Can you talk about the freedom this allows in adapting it to the screen? Are people less susceptible to holding the visual images accountable to the story as they know it? Is there more room to interpret?

In any film with a flashback structure you do have an extra degree of freedom in the ways the “beads” of the story can be strung together. The connections from scene to scene, particularly the transitions into the past and back to the present, are more allusive than they are in linear material, where one scene seems to trigger the next, like billiard balls colliding.

On the other hand, that freedom exacts a price. The filmmakers must have a strong, intuitive feel for the rightness and the “ripeness” of those transition moments; there are fewer objective criteria for what will work or not – if the transitions feel awkward, premature, or intrusive for whatever reason, the film can quickly become confusing or tedious.

Anthony Minghella, the writer-director of The English Patient, thoroughly reworked the time transitions in adapting Michael Ondaatje’s novel. But in editing the film, Anthony and I in turn revised Anthony’s revisions, such that only seven of the screenplay’s original forty transitions made it into the finished film. The other thirty-three were reinvented according to which scenes now found themselves adjacent to each other, and what worked in the language of film rather than on the page. Some things that look great in print fall flat when you see them up on the screen, and vice versa: things that seem inconsequential on the page sometimes become luminous on the screen.

Can you talk about that process?

Even when you are shooting a film based on a “linear” screenplay, the challenges in structuring the material for the screen are similar to the challenges facing the translator of a text from one language to another. In the case of a film adaptation, though, it is the translation from the language of text into the new language of image and time.

There’s a different weight that a moment (of picture and sound) carries in film, compared to the same moment conveyed by the written word. Everything in film is specific: this person with this color hair, saying these lines in this way, dressed in these clothes, lit by this light slanting at this angle, with these sounds in the background, etc. These details are always on screen. Every time you see a certain character, you are reminded of his haircut, his gait, the color of his eyes. A novelist need mention eye color only once. The mass of all those details, and therefore the amount of processing that your brain is obliged to do, gives a heft to film that text, which is suggestive and allusive, does not command. So you can often “take corners” in text – make sudden leaps and transitions – at speeds that would wreck the film. On the other hand, sometimes the opposite is true: The old saw that “a picture is worth a thousand words” is quite valid under the right circumstances.

In the screenplay of The English Patient, for example, there was a flashback to the desert quite soon after Hana and the Patient arrive at the monastery. It seemed fine in the text of the screenplay, but when we assembled the film it was clear that we needed to stay in the monastery longer before departing into the Patient’s memory – to get our sea legs, so to speak, and familiarize ourselves with this new location and these two people suddenly alone together. But changing the placement of that transition meant that there were consequences down the line. We had to alter subsequent transitions to compensate for delaying the first.

But later there’s a momentary transition back to the Patient during the sandstorm, just a single shot of him, with a dissolve of Katharine’s hand seeming to caress his face. This wasn’t in the screenplay. If you were to try to convey the complex feeling of that image in words alone, it might take more effort than it was worth.

In addition, there was the simple question of length. The first assembly of The English Patient was four and a quarter hours, so more than one third of that material had to be trimmed away to get to the present length of two hours forty minutes. As a result, many scenes were eliminated, bringing the survivors into closer, unintended proximity. This was sometimes serendipitous. When it wasn’t, we had to discover other ways to structure the material, which in turn led to different interpretations, and so on.

Von Clausewitz, when asked to define war, said that “War is diplomacy carried on by other means.” Taking his lead, I would say that film editing is writing carried on by other means.

