Here are answers to the questions you sent to David. Many thanks for the amazing support during this month! It was awesome!
DSR: I have lots of questions; David is a hero of mine. How do you organize your editing sessions? What do you find is the best way to organize your tracks – i.e., do you use track naming conventions, track colors, region colors, etc.?
DF:
- Track Colors – Only to denote elastic audio tracks. I color all my elastic audio tracks the same color just so I can quickly tell which ones are elastic.
- I begin with a session that has about 5 premixes.
- Each set of premix tracks is bussed to its own 5.1 bus.
- I have a master fader for each 5.1 bus, and I start with it pulled down -6 dB or so. A lot of unnatural dynamic squashing occurs when tracks get summed out of the same bus, but if the bus is on a master fader that’s lowered so it never clips, you can prevent this. (There’s a quick sketch of the math after this list.)
- Each 5.1 bus also comes up on its own aux track, so I can then SEND that bus to a record track if needed. (Master Faders don’t have sends, so I use Aux tracks here.)
- Each aux track is output to a master 5.1 summing bus, for the composite 5.1.
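For anyone who wants to see why that master fader trim matters, here’s a minimal Python sketch of bus summing. The track count and levels are made-up illustrative numbers, not anything from David’s actual sessions:

```python
import numpy as np

# Several healthy-but-legal tracks summed to one bus peak well past full
# scale; a fixed output stage would clip ("squash") that, while a -6 dB
# master fader trim keeps the same mix with its headroom intact.
fs = 48000
t = np.arange(fs) / fs
tracks = [0.45 * np.cos(2 * np.pi * f * t) for f in (60, 220, 500, 1300)]

bus = np.sum(tracks, axis=0)                 # raw sum of the premix tracks
print(f"summed peak: {np.max(np.abs(bus)):.2f}")   # 1.80 - over full scale

clipped = np.clip(bus, -1.0, 1.0)            # the unnatural dynamic squashing
trim = 10 ** (-6.0 / 20.0)                   # master fader pulled down 6 dB
print(f"trimmed peak: {np.max(np.abs(bus * trim)):.2f}")  # 0.90 - safe
```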
DSR: Do you acoustically treat your cutting room?
DF: Yes, mainly to prevent slap reflections, or anything that might cause phasing errors. I’m extremely sensitive to phase problems, so I’m a stickler for speaker distance from my head. However, using Waves 360 across my monitoring outputs makes it a lot easier to set up a good monitoring environment. You can delay each channel, adjust levels, etc. Just make sure this is on your monitoring outputs ONLY, and that you never record or bounce anything that has gone through the setup, or the phase and levels will be all out of whack.
I always find the low end to be the most inconsistent thing from room to room, and I have a hard time trusting low end when I’m working in a new room.
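The per-channel delays in a monitor-correction setup like that are doing simple time-of-flight alignment. A hypothetical sketch of the calculation, with invented speaker distances:

```python
# Closer speakers get delayed so every channel's sound arrives at the mix
# position at the same time. Distances below are made up for illustration.
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C

distances_m = {"L": 2.10, "C": 1.95, "R": 2.10, "Ls": 1.70, "Rs": 1.70}
farthest = max(distances_m.values())

for channel, d in distances_m.items():
    delay_ms = (farthest - d) / SPEED_OF_SOUND * 1000.0
    print(f"{channel}: delay by {delay_ms:.2f} ms")
```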
DSR: How much time did you have to rework the Ringwraith scream and the dog snarl sounds in LOTR?
DF: Fortunately, there was time. The wraiths were more stressful because they were a major character rework. The Warg sounds were mostly approved in the attack sequence, so they weren’t a complete overhaul like the wraiths were. I can’t remember for sure about the wraiths, but I’d say there were at least a couple of weeks??? I came down with a nasty flu during that time too and missed a couple of days. I was certainly addressing other things during that time, but it wasn’t an overnight redo. Also, the wraiths evolved over the trilogy. There wasn’t enough material generated in FOTR to carry us through all three films, at least without repeating myself too much. So that was nice. I got to revisit them each time, saving the approach I liked from before and expanding on it with other approaches, and new source. By the time we got to ROTK, almost all of the human elements were my voice. I did re-use wraith screams from FOTR though, for continuity’s sake.
DSR: How do you decide a premix or mix is finished? What is your gauge or method you use to sign off on a project?
DF: I wish there was a better answer for this, but quite often it’s when time has run out. Occasionally, things just flow and I’m content with what we’ve got printed early on. It’s an art form, yes, but there’s a business behind it that allows us to dabble in this art form. We have to keep a level head and weigh both sides and keep things moving.
DSR: What is your method of going about fixes on the dub stage?
DF: Ideally, I’ll go off to my design room and do a fix offline. That way, the mix can keep rolling on other things and not wait on me. I have more of my tools (plug-ins & library, most importantly) available in my design room. Most of the time, the editing station on the stage is just that, an editing station. If I’m being asked to do something interesting, then I generally need more tools than the stock Pro Tools install.
DSR: What determines if you will do field recording as opposed to searching a library?
DF: Generally, I record anytime I don’t have something I need, or if I don’t want to use the same source again. Time and content are obviously factors too. There are some things I have so much of in my library that there’s little point in recording more, as the likelihood of getting something new is so slim it’s not worth the effort or time.
DSR: For the Uruk-hai and the fire demon of Moria, how did they not end up being mud? How did you create that much bass and low end without it turning out muddy?
DF: The Uruk-hai and the Balrog were two entirely different approaches, design-wise. The Balrog low end was largely due to worldizing in the tunnels in New Zealand. In a sense it was reverb, albeit a natural, real-room reverb. We were also given that luxury by where the scenes took place: most of the time it was inside, in the Mines of Moria, which lent itself to large, cavernous reverbs. The Uruks were mostly tigers, at least the ferocious fighting elements, and tigers have a solid low end that sits alongside an incredibly ragged (in a good way) mid-range.
Honestly, “more low end” is the note I still get more often than any other, and it drives me nuts. Not because it’s difficult to do – quite the contrary, it’s really easy. A subwoofer only reproduces a small range of frequencies, and all it takes is to edit that low end in, or generate it with a dbx 120, Lo-Air, or Lowender. What drives me crazy is that it’s a “cheap” fix. We can’t get away with booming everything under the sun, so I generally try to get the energy across without the boom track first. You also can’t count on the boom working correctly in every (oh, let’s face it – hardly any) theater. If we rely on the boom to make our point, we’ll miss making it in many venues.
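For the curious, here’s a naive sketch of the octave-divider idea behind subharmonic generators like the dbx 120. This is a from-scratch illustration of the general technique, not any of those products’ actual algorithms:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def octave_down(x, fs):
    """Generate rough octave-down content from the source's own low band."""
    band = sosfilt(butter(4, [50, 110], btype="band", fs=fs, output="sos"), x)

    # Toggle a square wave on every upward zero crossing: one flip per
    # cycle of the band gives a square wave at HALF the band's frequency.
    sign, prev = 1.0, band[0]
    sub = np.empty_like(band)
    for i, s in enumerate(band):
        if prev <= 0.0 < s:
            sign = -sign
        sub[i] = sign
        prev = s

    # Smooth the square into a sine-ish sub, then ride the source's envelope
    # so the added boom tracks the original sound's dynamics.
    sub = sosfilt(butter(4, 60, btype="low", fs=fs, output="sos"), sub)
    env = sosfilt(butter(2, 20, btype="low", fs=fs, output="sos"), np.abs(band))
    return sub * env

fs = 48000
t = np.arange(fs) / fs
boom = np.sin(2 * np.pi * 80 * t)   # an 80 Hz source...
lfe = octave_down(boom, fs)         # ...gains energy around 40 Hz
```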
DSR: Do you have a special mouse/keyboard setup that you use? QuicKeys? Multi-button input device?
DF: For years now, I’ve been using a Kensington Expert Mouse Pro (trackball). It’s funny how some people are trackball people & some are mouse people; I’m hands-down a trackball guy. Until earlier this year, I had the Pro Tools Trimmer, Grabber, and Selector tools assigned to buttons on the trackball, and it’s super fast to change tools that way. But it’s so frustrating to try to do anything in Pro Tools on the stage when I’m used to my trackball & button assignments, so I’ve been forcing myself to use the Smart Tool for the past few months. I’ve grown accustomed to it now, though it’s not as precise and directed as choosing specific tools.
And I’m a QuicKeys junkie. Some people are clickers and others are key punchers, and I’m a key puncher. I can’t imagine working without them, even if just to call up AudioSuite plug-ins…
DSR: Do you keep a reference mix, a Pro Tools session, notes, or anything to remind you what elements you used for things – like the Ents in Lord of the Rings, or other great effects you’ve done in the past – in case a present or future project calls for a similar sound?
DF: Not as much anymore, but in the old days, I was ludicrously anal about saving session copies for every designed sound. I got in the habit after doing a lot of work for Charles Deenen. He’d ask for revisions on a certain sound, and to make sure I COULD address revisions correctly, I’d save a Pro Tools session with exactly the same name as the sound file. At the time, though, my entire library was stored as split stereo files, and I was always referencing the files in their original location. For several years now I’ve stopped referencing originals; instead I spot sections of files using Soundminer, making new files. This makes it easier to archive shows and be sure I have all the used audio, but I don’t always have every piece of every session available.
I’ve also converted large portions of my library to interleaved files. So even though during Rings I saved off copies of the sessions used to make pretty much every file, they’re not relinkable now, as the audio doesn’t exist in the correct format anymore. About a year ago I spent a great deal of time relinking & archiving sessions. I got all the way back to King Kong and was able to save off copies of sessions with audio. It was disheartening, though, to find I couldn’t do that with most of the LOTR sessions.
DSR: What was the funnest movie you have worked on so far? What was the most difficult movie you have worked on?
DF: I’m going to say Fellowship of the Ring for both of these. I was a huge fan of the books, so it was a lot of fun to be involved with it, especially since all the other areas of the production were done so well. All the pieces are connected: the direction was good, the cinematography, picture edit, acting, music… It was all top notch, and that makes us sound better too. It was difficult for the same reasons. There was a lot to live up to.
DSR: Hey Dave, first of all I’m an immense fan… Watching the appendices to the LOTR DVDs was quite literally my introduction to sound design, and reading about the techniques and ideas used to create the lush, immense sound design track for those films sparked my love for it. I’ve read how you relished your time at Full Sail in Florida, and I’ve been going down the certificate-program route myself (living in Florida, I’m thinking very seriously of Full Sail). I hear a lot of criticism of for-profit education, yet I feel that in an industry like post production, and sound in general, there’s much less emphasis placed on your degree/alma mater than on your abilities and portfolio. I read countless stories of people who had no formal training and simply learned by “doing it”. How important was your time at Full Sail in learning your craft? Did you ever feel shunned for not having a more formal education (not that Full Sail isn’t “formal”, I’m just referring to an accredited program versus a non-accredited one)? Would you, in general, encourage someone to pursue a less formal educational approach to post production trades such as sound design/production, through national programs/certificate programs like Full Sail and the Art Institute? Just to clarify: when I say non-accredited, I’m comparing 4-year universities that are REGIONALLY accredited with Full Sail, which is NATIONALLY accredited.
DF: My path included Full Sail. IMO it was a critical piece of the puzzle for me, but everyone has a different path. I tell this to all prospective students: the school provides the tools to learn; the rest is up to the student. If a student takes the courses and pushes to get their money’s worth, they can get a LOT out of it. In the working world, people don’t usually have time to train noobs up. At school, it’s the teacher’s JOB to train you, or at least to teach you the things you don’t know. I was a super gung-ho student & wouldn’t let go of things I didn’t understand. I really did get my money’s worth, and that’s why I can say it was such an integral part of my path. I’d have been lost without it. The flip side is, it’s a lot more costly now than when I went. One of the dangers of spending that much money is that a student might expect the school to do more than it can (or more than its purpose). Even when I went, I saw people who thought the school “owed them a career” after all they’d paid for the course. That’s a recipe for failure. The student has to take responsibility for what they get out of the school, and for whatever happens next. Having said that, Full Sail does (or certainly DID when I went) make the tools available.
Shunned for not going to a 4-year school? Never. Schooling is not the issue; only experience, ability, and job performance matter. Proper schooling pretty much helps you not fail when you get your shot, but no one is going to hire someone to, say, design their film based on what school they just came from. In my case, I thought I knew a lot when I graduated, and I did know a LOT more than when I started. But when I started interning, it was another world. It was like a relay race, and getting the internship was simply being handed the baton. If I hadn’t had the schooling, I’d have dropped the baton. Fortunately, I was able to grab it & keep running. But make no mistake, there was a LOT of race left…
DSR: “Yay” for Full Sail! Class of 2005 here. I’d like to hear how your time was there as well. Personally, I’m glad I attended.
DF: Arguably the best year of my life.
DSR: Hi David, I’ve read about Erik Aadahl using Altiverb with SFX as IRs, like you mentioned in your previous article (loading a thunder clap). Erik apparently used a metal ratchet or a glass ding as the IR on a voice in the ice cave of “Superman Returns”… I’ve looked into the “Altiverb IR Processor” to import and convert an effect as an IR, but didn’t succeed. I got some good results with the IR-1 from Waves just doing a drag and drop, but I’d rather use Altiverb. Could you explain this process in a little more detail? Thanks a lot!
DF: Easy! No need to use the pre-processor. The good folks at Audio Ease just let you use split stereo SD2 files. Put a folder in your IR folder, and export split stereo SD2 files to it – you’ll need one folder for each IR you want to make. Then, inside Altiverb, re-scan your IR folder. Done.
But be patient: not many sounds work well as IRs. Just try a lot of different things & you’ll find some interesting ones. Watch your monitor levels, though!!! A real IR has a very short impact at the head followed by a tail; most sounds I tried came out very loud because the transient part at the head was too long. Keep your volumes low while exploring this.
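The same trick works outside Altiverb with any convolution tool, for readers who don’t own it. A minimal sketch, with placeholder file names, assuming the common soundfile/scipy Python libraries and mono-foldable files:

```python
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

dry, fs = sf.read("voice.wav")            # source to be "reverbed"
ir, ir_fs = sf.read("thunder_clap.wav")   # any sound, abused as an IR
assert fs == ir_fs, "sample rates must match"

# Fold to mono to keep the example simple.
to_mono = lambda x: x if x.ndim == 1 else x.mean(axis=1)
wet = fftconvolve(to_mono(dry), to_mono(ir))

# As David warns, long heads make these VERY loud - normalize before
# the result ever reaches the monitors.
wet /= np.max(np.abs(wet)) + 1e-12
sf.write("voice_through_thunder.wav", wet, fs)
```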
DSR: Hi David! I’ve been a fan of yours ever since watching the behind-the-scenes footage of the LOTR trilogy. During my studies I worked on a LOTR fan film (www.bornofhope.com), which took me back to the saga, and I must say it was fun revisiting the films and studying their sound design. Anyway, one of my questions was regarding your use of FX as a sound designer. Do you create, for example, reverb or delay sends in your sessions to work with, and how do those then move over to the dub stage? Are you printing all FX to tracks, or does the dubbing mixer take your material and reverbs and translate that using more expensive effects units?
DF: I do not set up sends to my own reverb auxes for any edit sessions that are going to a mix. Even if the session is being mixed all in Pro Tools, the mixer generally wants to set up their own chain & reverbs. The reverb I do deliver depends on the situation. If I’m comfortable enough that the reverb is correct, I’ll marry it to the sound. However, there are plenty of situations where it’s not appropriate to do that. If the original sound needs to be placed anywhere other than pretty hard Left & Right, it’s best to print a 100% wet reverb file alongside the dry original, so both can be placed correctly. If you marry the reverb to the sound, you pretty much tie the mixer’s hands. I’ll give an example: in FOTR, when Pippin knocks the skeleton down the well, we had recorded most of that debris in the tunnels, with lots of pieces at some distance. When that was mixed, it needed to pan around and sound like it was coming from different locations. That became a difficult sell, because when the “source” sound got panned, the reverb moved around with it.
DSR: Hello David, thank you so much for sharing your knowledge and experience. I would really love to learn about the process you use to develop monster/creature vocalizations. How do you connect various sources and elements and make them sound as one? I would also love to know which sound design tools you like to use for making organic sounds. Thanks.
DF: I think the trick is the same for many types of design, not just creatures. I look for elements from entirely different ranges, so they don’t mask each other but can complement one another. Actually, I try to layer as few things together as I can get away with. This makes the composite more natural, and less likely to sound like multiple things put together. Using fewer elements also means less cross-pollination of shared sounds between creatures.
As far as organic goes, I stay away from modulation & synthesis (obviously), and even granular synthesis. The simpler the better for organic. McDSP’s “Analog Channel” has been very good to me, and remains an important part of my processing chain.
DSR: What would be the treatment for Foley to make it sound like it’s produced underwater? Would you use any convolution techniques for that? And if yes, how would you create that impulse?
DF: Underwater is one of those sounds where films have completely misled the audience. We expect underwater sounds to be very dark and almost dreamy, whereas actual underwater sounds are much more hyped in the high end than sounds traveling in air. If you’re going for the traditional underwater sound, then the first thing you’ll have to do is roll off a lot of the high-end frequencies.
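As a starting point for that roll-off, here’s a minimal sketch; the 600 Hz cutoff is a taste choice rather than a spec, and the file names are placeholders:

```python
import soundfile as sf
from scipy.signal import butter, sosfilt

foley, fs = sf.read("footsteps_foley.wav")              # placeholder name
sos = butter(8, 600, btype="low", fs=fs, output="sos")  # steep low-pass
underwater = sosfilt(sos, foley, axis=0)                # axis=0 handles stereo
sf.write("footsteps_underwater.wav", underwater, fs)
```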
DSR: David, I love your work, but I’m just wondering about one thing. In the field recording special you said, regarding the si-crows: “if someone hears a sound that they’ve heard before, even if they don’t realize it, the wrong sound can take them right out of the movie experience…” That’s what’s happening to me every time I watch LOTR and “The Wilhelm” comes in (or any other movie where this sound is featured). Is this to keep up the tradition, or why do you keep on using this sound over and over again?
DF: The Wilhelm is a great example of how a familiar sound can yank the audience right out of the experience. This happens to me every time I hear it, and I considered that when we were talking about using it in FOTR. Its use, or rather over-use, had already become borderline “too recognizable”. It was approaching that line, but IMO hadn’t crossed it yet. We wanted to use it more as an homage than because it was the best sound for the job. We knew it could potentially backfire, so we let Peter choose. I think it was Ethan who had a QuickTime video showing most of the incarnations, including Star Wars, Raiders, Toy Story, and others. We showed that to Peter so he’d know exactly what it was, and he loved it! He was all for it, so it was decided to put it in. If we were building the films today, I don’t think it would even get a consideration. Its use has gotten way out of control.
DSR: I’m taking a year off after high school and trying to make the most of it. I’m working at a community radio station, doing my own recordings, and doing as much reading as I can. I really feel I’m making progress every day. My question is: what can I do to get ahead? I’ve asked several professionals and semi-professionals, and they all say “just keep doing it”, but that’s not a very satisfying answer, so I’m going to be really specific about the type of answer I want. I work really hard to get an interesting recording, and I tinker with mastering, but when it comes down to it, putting hours into a recording doesn’t make me more knowledgeable about things like what’s happening when I use an effect, or what bit rates mean, etc. I take lessons from a man I met when I did my senior thesis on recording in high school, which really helps, and I tried doing recordings every day, but my rig takes so long to set up and tear down that it was taking an hour to do a minute of recording. I don’t really do anything except stray recordings now and then, and I’m not much of a recording musician, which makes recording songs hard. I don’t have very good equipment, which severely brings down the quality of my recordings, and I’m not going to attend college until next fall. So what do I do to keep learning and building skills with what little I have? I have a million different questions, but right now this one feels the most urgent.
DF: There’s a lot of info there, and I apologize, but I’m not sure which parts are the question. If I’m reading it right, you can use only your existing equipment, which you say isn’t very good? I would recommend getting a handful of proper tools. Without them, it would be like trying to learn to play drums on a banjo, or build a house with a screwdriver and a pair of scissors. The good news is that these days you can get some pretty serious tools for very little money. When I started, you’d want a Synclavier – $100,000 – plus video & multitrack equipment – probably another $30,000. Then along came Pro Tools, and people could start to work from home for about $20,000. Now, for about $500, you can be recording and editing – at least the basics. Interfacing with other people, extra processing, etc., all start adding to the cost. But you can get started on your own, at least training yourself, pretty cheaply.
IMO there is no shortcut to experience, which is, as the others have said, to just keep doing it. To learn ABOUT bit rates, effects, and all that sort of jargon, a school like Full Sail was just what I needed. I don’t know what end result you’re going for (i.e. where you want to be and what you want to do), but if you want to learn sound design, or sound for picture, one thing you can do is:
1) Pick up a Sony M10 recorder, as well as a windscreen for it from gigwigwindscreens.com. Recording things won’t get any easier than that.
2) Grab a copy of Pro Tools LE (this is by far the most widely used platform – it would be a great benefit to know how to use it). I’d get one of the interfaces that has inputs (i.e. not an Mbox Micro) for flexibility down the road.
3) Pick clips of movies you like, and make your own sound effects to those pictures. Then compare what you’ve done to the film’s released track.
Harry Cohen referred to one of my old tricks as “learning licks off a record”. He was referring to the trick guitarists use of listening to a solo over & over and practicing it until they can copy it note for note. I used to do the same sort of thing, but I’d sample in sounds I liked from my favorite films, like an explosion. I’d put that sound in sync with what I was working on, then analyze it & try to re-create it from my own sounds, until I could remove the sound I was copying and no longer miss it. I learned a lot of tricks that way.
DSR: Hi David. Have you ever tried to record a loud impact sound in a recording studio & found the recording to be small & unimpressive compared to an outdoor recording of the same action? If so, do you have any techniques for recording more powerful sounding impacts indoors?
DF: Impacts and loud transient sounds typically need an environmental space for sound to bounce off of. An exaggerated example would be a gunshot in an anechoic chamber: it would just be a short, bright, loud pop. What makes a gunshot interesting is the trail, and the same goes for thunder. Of course there are subtleties that differentiate gunshots even without the trail, but I’m exaggerating to make a point. What makes them most interesting is the way the environment reacts to the initial sound, and we interpret it all as one event, just by association. So where an anechoic chamber would steal the character of a gunshot, close micing an impact can steal its character too. There’s no great trick to pulling it off, but I’d make sure not to mic too close. It’s tricky to do that indoors, as you wind up with a boxy/roomy sound pretty quickly. Things also play in a mix a bit better when they’re not so close mic’d.
DSR: Treebeard in a Land Cruiser!?! Who knew? Did you actually choose to use those worldized recordings? I think this was before iZotope RX… how much cleanup did you do on your field recordings? Or did you leave them a little dirty for reality’s sake? Thanks so much for sharing!
DF: Yes, we did use those, but more for the animalistic parts, and for the wild Ents. As far as cleanup goes, sometimes I do leave dirty, messy things in a sound, as taking them out can strip the life out of it. This is where working on a linear sequence (film style, as opposed to video-game on-demand events) has an advantage: in a linear form, sounds go by only once, so we can get away with some messy elements. In the King Kong extended version, there was an encounter with a Triceratops. Some of the animal sounds I wanted to use were of a bull at a rodeo, but there was a little cowbell in the recording. This was before iZotope RX, so the only way to get rid of the cowbell would have been to edit it out, and that would have stripped the vocal element of most of the character I wanted. So I went ahead and tried the scene, cowbell and all. As it turns out, the people encountering the Triceratops had a lot of metal items with them, like lanterns. Fortunately, the cowbell sound wasn’t a perfect cowbell, but more just a metallic knocking. With all the other mayhem in that scene, the extra bit of metal in the vocal didn’t hurt one bit, and our brains connected it with any number of other things we might have seen on screen.
Insert “We need more cowbell!” joke here…… ;)
DSR: Hey Dave, just two questions: you mentioned using a Schoeps M-S rig out in the field. I was wondering, especially with your animal recordings, do you tend to record animal vocals with a cardioid, much like a vocal, in order to capture more air around the animal, or do you zero in with a hypercardioid to cut out unwanted elements like cage rattle, hoof movement, handler noise, etc.? Obviously there seem to be pros and cons to either. Which do you prefer?
Also, a Synclavier question: if you had another Synclavier, or something like it, where would you see it being most useful in your current setup? Or have you come so far with modern technology that you would be hard-pressed to find a use for it?
DF: All my Rings animals were recorded as 2 channels of mono. I used a 416 on one channel and one half of a Schoeps CMXY on the other. I wound up using the 416 side in my design almost every time, not because of the shotgun isolation but because of the aggressive character the 416 has. It has this ragged (nice ragged) midrange that I just love. The CMXY is a great stereo mic, and it’s what you see in that little softball-sized Rycote in the recording videos and other photos. It’s a great-sounding mic, but when recording FX, a lot of times I do want that shotgun side. MS gives you the best of both worlds, and when decoded to XY, it sounds every bit as good as an XY recording to me. Tim Nielsen had a Schoeps MS rig on FOTR, and we did some tests between our two rigs; I was satisfied with how his MS decoded into XY. It wasn’t until after ROTK, though, that disk-based recorders became readily available. I was pressing Sound Devices for a 722 for ROTK, but it was just vaporware at that point – nowhere near ready. Man, would it have saved me some time, though. Anyhow, when the 722 came out, I also wanted an extended-range mic, so my MS rig goes to 40 kHz. Actually, it’s just the shotgun side, as I don’t think there’s a figure-eight capsule that’s extended. I’d had such great luck with the 416 that I was reluctant to switch to the MS, but the new mic proved itself on more than one occasion, to the point that my 416 is all but retired.
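For readers new to MS, the decode David mentions is just the textbook sum/difference matrix – nothing Schoeps-specific:

```python
import numpy as np

def ms_to_lr(mid: np.ndarray, side: np.ndarray, width: float = 1.0):
    """Decode mid-side to a left/right (XY-style) stereo pair.

    The mid mic (here, the shotgun) carries the center; the figure-8 side
    mic carries width. width < 1 narrows the image, width > 1 widens it.
    """
    left = mid + width * side
    right = mid - width * side
    return left, right
```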
Synclavier? If I could hook one up to my library and have Soundminer export sounds to it like I currently do to Pro Tools, I would definitely be interested. I haven’t touched a Synclav in about 13 years, but there are still times when I wish I had one, so I could get back into those performance-based moves.
DSR: Cool videos, thanks David! Just a quick question: why do you prefer Polyphonic over Varispeed? I usually use Varispeed, as it more or less emulates tape-based time stretching, and to me it holds up better. I’m just curious what pros Polyphonic offers in your workflow. Thanks!
DF: Varispeed changes pitch as well as speed, and that’s not what I’m after. If I were using only the global stretching/contracting for an entire region, this wouldn’t be AS much of an issue. But when using and moving warp markers, Varispeed jumps to different pitches depending on where the marker gets moved, and I don’t want my sounds jumping around in pitch. The same thing is happening in Polyphonic, but it’s a timing change, not a pitch change, so it doesn’t stand out in the same way.
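The underlying math makes the point concrete: under varispeed, playback speed and pitch are locked together, so every local speed change from a warp marker is also a pitch change:

```python
import math

# A speed ratio r shifts pitch by 12 * log2(r) semitones: doubling the
# local speed raises pitch a full octave, half speed drops it an octave.
def varispeed_pitch_shift(speed_ratio: float) -> float:
    return 12.0 * math.log2(speed_ratio)

for ratio in (0.5, 0.9, 1.1, 2.0):
    print(f"speed x{ratio}: {varispeed_pitch_shift(ratio):+.2f} semitones")
```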
DSR: Was there an auditory reason you wore gloves when recording the hobbit knives?
DF: Sort of… it hurt my hands to smack the things together without gloves. So wearing gloves just let me hit them harder.
DSR: How often, if at all, do you dabble with synthesis and making sounds strictly from that?
DF: I hack away at synth material every now and then, when the need arises, but mostly I just generate a lot of source elements to use when I do. I don’t know much about what I’m doing when tweaking the parameters. Well, I know what oscillators & filters & such do, but I never really know what tweaking one will do to a particular part of the sound. It’s a lot of trial and error – just fun making noises and saving the best bits.
Ryan says
Thank you for answering all of my questions, David.
It’s been a great month having you featured!
Hope to see you around sometime.
– Ryan
Enos Desjardins says
Thanks for all the responses, David, and I wish you all the best in all future endeavours!
Charles D says
Awesome articles, man! I have one word for you: “bigger!” :) It’s great as always to see your passion for sound shine through in everything!
Haydn Payne says
Very interesting! Thanks for everything you have shared.