Dialogue production for a large-budget, cinematic video game can be an intense and often brutally challenging process. Getting an actor into the booth and reading a script is in itself a monumental achievement that requires solid tools, pipelines, and communication.
While there are a great many articles written about the voice actor’s process and performance, there is a dearth of information about the technical process and steps that are taken prior to and after the recording session, and it is these processes, planning, and techniques “behind the scenes” on which this feature will concentrate.
There is a wide spectrum of different approaches to dialogue tools and production process throughout the industry. It is fair to say in fact that almost every developer has a totally different way of working, and there is certainly no rulebook — as long as the job gets done to the desired quality.
However, working with an integrated dialogue database solution from the beginning to the end of production can speed up the process, reduce organisation and administration time both in and out of recording sessions, and remove a whole slew of duplicated work and the mess of multiple scripts held by various members of the dialogue production team.
The desire for, and benefits of, a tighter production process and the integration of dialogue through a single master database are clearly there. Sadly, dialogue is one of the areas that audio directors and audio designers can be less passionate about, and the lack of investment in solid tools, processes, and pipelines is probably due in some part to this.
Dialogue, it can be argued, is perhaps the single most important aspect of video game audio, in that it is often the only element of the audio that a reviewer will mention, and poorly implemented and badly directed dialogue can completely ruin an entire game.
Dialogue production also has very deep dependencies on mission design and story architecture, and it is anchored at the heart of cinematic production. To this end it needs to be one of the tightest, most organized, and most "locked-down" elements of audio production, yet remain completely fluid and open to change all the way along the chain.
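To make the idea of a single master database more concrete, here is a minimal sketch of what one record in such a system might track. The field names, statuses, and naming convention are purely hypothetical illustrations, not taken from any real pipeline; the point is that script text, design dependencies, recording status, and the audio filename all resolve from one shared record instead of multiple scripts.

```python
from dataclasses import dataclass

# Hypothetical sketch of one record in a master dialogue database.
# All field names and conventions here are illustrative assumptions.
@dataclass
class DialogueLine:
    line_id: str      # stable key shared by design, script, and audio
    character: str
    script_text: str
    mission: str      # design dependency: which mission/scene owns the line
    status: str = "placeholder"  # e.g. placeholder -> recorded -> edited -> implemented
    take: int = 0

    def wav_name(self) -> str:
        # One naming convention derived from the record itself, so every
        # discipline resolves the same file without duplicated lists.
        return f"{self.character}_{self.mission}_{self.line_id}_t{self.take:02d}.wav"

line = DialogueLine("vo_0042", "guard", "Halt! Who goes there?", "m03")
print(line.wav_name())  # guard_m03_vo_0042_t00.wav
```

Because every department reads and writes the same record, a rewritten line or a re-recorded take updates once and propagates everywhere, which is exactly the duplicated-work problem described above.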
The Walter Murch Special ends here, a very interesting journey through the articles of one of the most important men in the history of audio and video creation. November will be a great month for the blog… If you were hoping for game audio articles, you’ll love the November Special!
Finally, let’s read a nice round of interviews with Walter Murch, selected from several websites:
#1 WALTER MURCH – THE GODFATHER OF MOTION PICTURE SOUND
I noticed a picture in a recent interview of you in a small studio – is that your personal studio?
That was up in the barn. I was editing what you might call a Director’s Cut of “Touch of Evil”, which Orson Welles directed forty years ago. Any project that I take on, particularly short-term projects, I can just do up in the barn, renting whatever happens to be the available and appropriate technology for that particular film. So what I had there was an Avid, which is a film editing machine, which has up to 24 tracks of sound that run along with it, but you can only actively work on 8 at one time. If you get a track the way you want it, you can make it one of the ‘sleeper’ tracks, sort of ‘demote’ it to playback only, and then move another track up into the active area, so you can play back 24, but you can only actively work on 8 tracks. Everything just ran through a Mackie mixer, which was also feeding audio from CDs, cassettes, and DAT machines, and a DA88, which is an 8-track recorder that uses Hi8 video tape.
How do you feel about working with a computer based system versus something like an analog tape machine?
Well, for me it’s fine. There are some people who claim to be able to tell the difference between professional digital equipment and analog equipment. I can’t. The advantages operationally of using digital are so great that I focus on them, and not on what I guess might be the “digital” sound of it. “Touch of Evil” was a film that was done in 1958, so there wasn’t a wide-range soundtrack to begin with.
Were you taking the existing sound track and mixing it with some sounds that were recorded now, or…?
Well no, we had separate dialog, music, and sound effects from the original magnetic masters, so we loaded those via DA88s into the Avid, onto the Avid’s hard disk, and I was editing them, suppressing the music in some cases, lifting the level of the effects in others. All of this was following Orson Welles’ notes. Where we had to make changes, we simply stole (sound) from various places within the film. The goal was to make something that still sounded like it was all done in 1958, with a minimum of disruption of that particular kind of sound.
The English Patient. What process did you follow for mixing that?
I produced a ‘guide’ track on the Avid, and then that was transferred at a higher quality onto a Sonic Solutions system at Fantasy Records. Then, coming out of the Sonic Solutions, after it had been cut, we would make transfers either onto 6-track film or DA88s.
What we just did on “Touch of Evil”, because I was working on the finished soundtrack right from the beginning, was to take my Avid sessions and re-create them, opening them up through an OMF (Open Media Framework) file and converting them into ProTools, which is another sound editing environment (Digidesign and ProTools are both owned by Avid). That was a real timesaver, in the sense that all of my decisions, the cuts, the fades in and out, and the level settings, were maintained when the soundtrack was opened up in the ProTools environment.
So all they had to do was to tweak what I had done and refine it, because the tools that they have in ProTools are much more precise than what I have on the Avid.
On The English Patient they really had to rebuild everything that I had done from scratch, which was a time-consuming process.
The (semi/un)technical term is ‘Woosh’, or maybe ‘Whoosh’ if you are American. I’m not, so I don’t know, but hey, I’ve seen a few files labelled as such, so call them whatever you like. They are those swishy, wooshy movements-of-air sounds that are often used as transitions, or to create the aural illusion of movement, or to simply compensate for something the director wishes was onscreen & isn’t…
As with most cliches, they exist for a reason. At the best of times they work & evoke much more than mere existence should allow… and at the worst of times they are as lame as the worst metaphor you could possibly come up with. But as with all art, it’s a matter of taste, and as many rich people indicate without even trying, budget does not equal taste (often it is inversely proportional). As with anything creative, it is up to the depth of the creator to reveal whether that depth is from the shallow end of the pool or the diver’s end… In the end, this rant is to hopefully provide a few starting points in terms of making wooshes of your own (and thereby avoiding commercial libraries of genericized woosh fodder), but also to make you think & think again about whether a woosh is actually needed… maybe the ultimate woosh is in fact a silent one? The sound of one woosh, wooshing?
Anyway, let me just say this: Death to the Woosh/Long live the Woosh!
Woosh revelation 001: Wooshes are directly related to the physical world!
If some geezer comes at you with a lump of wood & swings it, just missing your head, you will hear a woosh. It won’t be some cliched filmic artefact but a real physical phenomenon. And at that point, best you run like stink in the opposite direction! But if it wasn’t a bit of wood, but instead a samurai sword (in which case don’t run, just say your last prayers) or, as in the video, a length of vacuum cleaner pipe, the sound will vary accordingly… ie the physical sound of the physical event will vary. In the end, just like the wind we perceive outside, it is the sound of air molecules being cut by a surface. If it’s a fat, flat surface then it will sound a certain way, whereas if it’s a thin, sharp surface cutting the air then it will sound different… and this leads me to my second conclusion: the best lessons are those learned from reality/nature.
So get a length of string or rope, attach various objects to it, swing it around your head & listen (preferably also have a stereo mic recording at this point). Try a cheese grater, a tennis racket, a TV aerial; each will sound uniquely different & should go into the sound FX library labelled as such. I have natural wooshes in my library created by pool cues, golf clubs, vacuum cleaner pipes, cheese graters, tennis rackets, radio & TV aerials, swords of many varieties, sticks, canes, fishing rods, bird cages, kites, wood saws of many varieties & any number of obscure things I can’t even remember & didn’t label properly…
Until I get around to writing Wooshes 102, please, go generate a library of your own! Just remember the object is moving air, so use a Rycote & fluffy on the mic… and also that way if you hit the mic with your cheese grater it hopefully won’t destroy it!
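The article’s physical model (air noise shaped by a moving surface) can also be faked digitally. Below is a crude, stdlib-only sketch of a synthetic woosh, not anyone’s production technique: white noise through a one-pole low-pass whose cutoff sweeps with a rise-and-fall envelope, so the sound brightens and swells at the peak of the “swing”, then dies away.

```python
import math, random, struct, wave

# Crude synthetic "woosh": filtered white noise with a swept low-pass
# cutoff and a sin^2 amplitude envelope. Illustrative only; a recorded
# cheese grater on a rope will sound far better.
SR = 44100          # sample rate, Hz
n = int(SR * 1.0)   # one second of audio
samples = []
lp = 0.0            # one-pole low-pass filter state
for i in range(n):
    t = i / n
    env = math.sin(math.pi * t) ** 2        # swell up, then away
    cutoff = 200 + 3000 * env               # brightest at the peak of the swing
    a = 1 - math.exp(-2 * math.pi * cutoff / SR)
    lp += a * (random.uniform(-1.0, 1.0) - lp)  # time-varying filtered noise
    samples.append(env * lp)

# Write a 16-bit mono WAV so the result can be auditioned.
with wave.open("woosh.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SR)
    f.writeframes(b"".join(
        struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767)) for s in samples))
```

Swapping the envelope shape or cutoff range stands in for the fat-flat versus thin-sharp surfaces described above; a band-pass with a resonant peak would get closer to a sword than a plank.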
Some sound examples and videos with techniques for the use of plugins and processing, such as Waves Doppler and GRM Tools Doppler. You can see all the videos and examples at Music of Sound.
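For orientation, what plugins like these simulate is the physical Doppler effect; the basic textbook relation (not the plugins’ actual algorithms, which also handle amplitude and panning) is that a source approaching a stationary listener at speed v is heard shifted from f to f·c/(c − v):

```python
# Basic Doppler shift for a source moving toward a stationary listener.
# This is the textbook formula only, not any plugin's implementation.
C = 343.0  # approximate speed of sound in air, m/s

def doppler_shift(f: float, v: float) -> float:
    """Perceived frequency for a source approaching at v m/s (receding if v < 0)."""
    return f * C / (C - v)

# A 440 Hz source on a car passing at 30 m/s (~108 km/h):
approaching = doppler_shift(440.0, 30.0)   # ~482 Hz, higher on approach
receding = doppler_shift(440.0, -30.0)     # ~405 Hz, lower going away
print(round(approaching, 1), round(receding, 1))
```

That roughly three-semitone drop as the source passes is the characteristic pitch bend these tools automate over a drawn trajectory.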
Did you like it? There are more! Tim has just published a compilation of the Best 100 Posts about Sound Design, Recording, Music, Film and more! Some goodies:
Sound Design Tutorials
- SD101: Tuning Instruments & Sound Effects…
- SD101: Shortcuts
- SD101: Backwards Reverb
- I Heart ProTools VCA Faders!
- Intern Applications – Mute FX Clip
- Berlinale Talent Campus – the good, the bad & the absent…
The story of the disaster of the Soviet nuclear submarine K-19 comes to theaters with K-19: The Widowmaker, an independent film that cost $100,000,000 to make. An interesting film with a lot of Foley work and impressive recording; Walter Murch worked as re-recording mixer. Let’s read this article on Mix Online, with several interesting notes about the recording and Foley sessions:
“It’s very difficult to get a sense of space in an enclosed [environment] like a submarine,” says picture editor and re-recording mixer Walter Murch. “If it’s not lit and art-directed right, everything just sort of blocks up.” [...]
“When I started working on features,” says Walter Murch, “the idea of doing Foley was very exotic and nothing that we could afford. On The Rain People, Francis [Coppola] was shooting on location with the actors, and they were traveling across the country. At the end of the day, he would ask the actors to walk through all of the moves they made without saying anything. On THX [1138, George Lucas' first feature], I would put the Nagra somewhere and walk around duplicating the footsteps in a real space. We did versions of that on The Conversation and American Graffiti.
“On Godfather II,” Murch continues, “we’d figure out the rate at which the principal was walking, and we had a little portable electronic metronome, which we would set at that frame rate. I remember doing the footsteps for Fanucci where he comes up the stairs before he’s killed by De Niro [young Vito Corleone], and we found the marble staircases in the old Zoetrope building were very much like the staircases in that actual location. So, I set the metronome and I had my assistant at the top. I walked a couple of flights up, so you hear these footsteps coming from far away. They get closer and closer, which is the whole idea of the scene, and then I stopped, as Fanucci stopped, at the top. I said Fanucci’s next line, and when we took the track and synched it up, all of the footsteps synched up. On the Foley track, you can hear my voice, and it exactly syncs with the lips of Fanucci.”
Language Design for Sound Designers, an amazing article created by Darren Blondin. I read it last year, but found it again while reviewing my bookmarks. I got the idea of sharing it on Twitter and noticed that various people didn’t know about it (and they really liked it!), so for those who don’t use Twitter or don’t follow Designing Sound @ Twitter, I decided to make this post and let them know about this article (if you haven’t seen it yet). It’s very interesting!
Language sound design is a weird audio craft that few sound designers have gotten the chance to explore. Trying to find documentation on it is rather like figuring out how to win a game of cheese-rolling, the hugely disregarded suicidal sport of Gloucestershire. You can pick up a few facts here and there, but as it has not really been acknowledged as a skill, you will find few books that illuminate how to do it. Nonetheless, some of the most interesting sound design works have incorporated the use of constructed languages (commonly referred to as conlangs). In this article, I would like to present information on how to invent your own languages from the angle of character sound design. Linguistics is a relatively deep subject, but as you will find, you need not get buried far in the principles of language construction to bestow its powers in your sound design work.
Conlangs are languages that are invented by creative linguists, most often for use in fictional stories. The Klingon language, designed especially for the Star Trek universe, would be an example of a conlang. Language sound design is a term that I am using to represent sound design work that utilizes a conlang. The most famous examples of language sound design encompass the seemingly endless assortment of non-English-speaking alien characters in Star Wars. There is also a grey area in between: for example, English dialog that is manipulated to the point of sounding non-human, like the voices of the Black Lodge in David Lynch’s Twin Peaks. Audio specialists may or may not have involvement in creating a conlang, as these tasks are normally handled by linguists and writers. Be that as it may, a sound designer will approach language invention from a very different angle than a linguist, using microphones, signal processors, and high-tech audio mixing techniques. Language design by sound designers is a whole different ballpark that deserves serious creative investigation. Having knowledge about conlang design will undoubtedly help you to create outlandish character voices for your projects.
When approaching a sound project, I make sure that my technique is not driven solely by the best examples in entertainment. So, for instance, if I am going to create the sound of a diving airplane, the last place I would begin my research is by listening to plane crashes in blockbuster films. This would not give me a clear understanding of the physical sounds I am aiming to emulate and it certainly would not inspire me to design something original. Therefore, in the case of language sound design, a good place to look for ideas might be plain old linguistics.