Sadly, the Bruce Tanis special has come to an end. This month was amazing, with a wealth of sound effects editing master classes given by Bruce. Here are his answers to the questions readers asked during the month. Hope you like them!
Designing Sound Reader: As a student/future sound editor, I’ve been anxious about the fact that there is no prescribed method to getting one’s foot in the door in the business. For a major studio like Warner Bros., and with many sound folk competing to enter the field, how does one land an internship or entry level position there?
Bruce Tanis: The major Hollywood studios, including Warner Brothers, are union facilities and while there is an apprentice classification, almost no one uses it anymore. The problem here is that you have to be in the union as an assistant or as an editor to work there. The good news is, however, that there are a lot of non-union companies around town and even some union houses outside of the studios which do use runners and interns so at least you can get your foot in the door that way. It has always been a good bit of “who you know” as much as (or more than) “what you know” that gets you a job so it would be a good idea, if you’re in Los Angeles, to go to a few facilities and introduce yourself, staying in contact with them, so that they get a chance to know you in hopes that when an opportunity does come up, they’ll think of you.
DSR: Is there any kind of non-audio related art, literature, or experience that has influenced the way that you sculpt sounds or which sounds to add so the story is enhanced the best way possible?
BT: Actually, literature is a really good source for inspiration because it can paint such detailed pictures in your mind of a particular time or place. The novels of Pat Conroy, James Lee Burke, or John Berendt, for example, have beautifully descriptive passages in them that make you think about what sounds are appropriate to the stories they’re writing about. I think it’s a great idea to listen whenever you go someplace new or to an event of some type. In the sense that you can’t successfully break the rules if you don’t know what they are, knowing what something should sound like helps me go beyond that in creating a soundscape.
DSR: Hi Bruce. First of all, thanks for sharing all your amazing knowledge this month. I was wondering if you have Pro Tools templates to start a project? How do you organize your tracks, sends, and busses for your sound editing and design work?
BT: Hello and thank you for your kind words. I have a couple of templates that I can use depending on whether a project will be getting predubbed or not. If it’s a simple TV project and there’s no predub time, I’ll simply cut in a session that is about 14 mono tracks and 20 stereo tracks, give or take. A couple of the mono’s will be dedicated to subwoof material and perhaps four of the stereo tracks will be dedicated to surround information. I’ll cut in direct outputs and render any processing I do directly with the clip as opposed to using sends and inserts for track-based compression, reverbs, or whatever else.
That’s a technique left over from my old dinosaur beginning in that the cutting rooms very often had different plugins than the stage had and, invariably, the session would get to the stage and they wouldn’t be able to access whatever plugin I had used but not rendered. A couple of those phone calls and you tend to remember to make the session as bulletproof as you can! For features that do get predubbed, the template is a little more developed. There are two types here, one for hard effects and one for backgrounds.
The hard effects can be up to 16 different categories such as vehicles, weapons, metal impacts, etc. and each category will have the same number of tracks for each reel so the mixer has some consistency as to what shows up on the console at any given moment in the film. Obviously, not every reel in the film will need all thirty tracks for fire effects like reel four, for example, but if every reel is laid out identically, the mixer doesn’t have to re-establish his bussing for each new reel. All these tracks are bussed category by category out to a separate 5.1 master and routed back in to an aux track which has 5.1 direct outputs for monitoring in my room. If everything is a 5.1 output, as opposed to specified mono outputs, I can pan a clip anywhere I need to on any track. I don’t need to have dedicated lefts or rights, etc. The background template is similar but often only has six categories in it. Otherwise, they output through the same bus architecture as the hard effects.
Typically, for the hard effects I’ll group related elements together on adjacent tracks with the loudest and/or most dynamic sounds on upper tracks, moving down as needed for longer, quieter, less transient elements. The background session is usually in some version of this pattern: “A” predub – Airs, Winds, Roomtones; “B” predub – Traffic; “C” predub – Wallas; “D” predub – Nature ( Crickets, birds, etc.); “E” predub – Fire or Water; “F” predub – Misc. BG’s.
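The consistent-layout idea Bruce describes, where every reel carries the same category tracks so the mixer never has to re-establish bussing, can be sketched as a simple generator. The category names and track counts below are illustrative examples, not Bruce's actual template:

```python
# Sketch: build an identical hard-effects track layout for every reel,
# so each category lands on the same console faders reel after reel.
# Category names and track counts here are hypothetical.

CATEGORIES = {
    "Vehicles": 8,
    "Weapons": 6,
    "Metal Impacts": 4,
    "Fire": 4,
}

def reel_layout(reel_number, categories=CATEGORIES):
    """Return an ordered list of track names for one reel."""
    tracks = []
    for category, count in categories.items():
        for i in range(1, count + 1):
            tracks.append(f"R{reel_number} {category} {i}")
    return tracks

# Every reel gets the same tracks in the same order, even if some
# categories go unused in a given reel.
layouts = {reel: reel_layout(reel) for reel in range(1, 6)}
```

The point of the identical-per-reel layout is exactly what the test of this sketch checks: strip off the reel prefix and the track order never changes.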
DSR: Hey Bruce, why don’t you like Pro Tools 8? I saw in your interview that you’re still using PT 7. Is there something special you find in this version?
BT: I’m using Protools version 7.4 because I have to coordinate with the other editors and the Supervisor on my show and they are all on 7.4. I’ve used version 8.oh.something for a couple of projects including “Fringe” and “Inception” and, for me, the main issue I have with it is that the colors have all been muted in order to accommodate playbacks on a dub stage. I’m sure someone, at some point, mentioned to Digidesign that: “We always have to turn the monitors off during playback because they’re too bright and distract our client’s attention.” Thus, the screen is now somewhat more muted than in previous versions. Good for the dub stage maybe, but not so good for me since I need to sit there looking at the thing for ten straight hours a day.
There are a couple of other small things that don’t quite appeal to me either, such as the drive unmount feature, which is now an unlabeled button on the upper right side of the screen. It took a while to find the first time! Also, in the tracks window on the far left, tracks can be active while being hidden in version 8. A friend of mine unknowingly deleted some tracks that way and didn’t find out about it until it was too late. One thing that caused us a bit of trouble on “Fringe” is that, if the volume graphing for a clip is on the edge of the clip, Protools will randomly play the last sample of that clip at unity volume. I started making sure all volume information starts well ahead of each clip and carries on well behind each clip to avoid this. It’s an extra editorial step that takes time and on occasion, causes problems in conforming.
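The workaround Bruce describes, keeping volume automation well clear of clip edges, amounts to padding the automation breakpoints on both sides. This is a minimal sketch of that idea over a hypothetical data structure, not a real Pro Tools API:

```python
# Sketch of the edge-of-clip workaround: extend volume automation so it
# starts before the clip and ends after it, leaving no breakpoint
# sitting exactly on a clip boundary. Clips and breakpoints here are
# hypothetical data structures.

def pad_volume_automation(clip_start, clip_end, breakpoints, pad=0.5):
    """Extend automation so it brackets the clip by `pad` seconds.

    breakpoints: sorted list of (time_seconds, volume_db) tuples.
    Returns a new list guaranteed to start before clip_start and end
    after clip_end, holding the first/last volume values.
    """
    padded = list(breakpoints)
    if not padded or padded[0][0] >= clip_start:
        first_vol = padded[0][1] if padded else 0.0
        padded.insert(0, (clip_start - pad, first_vol))
    if padded[-1][0] <= clip_end:
        padded.append((clip_end + pad, padded[-1][1]))
    return padded
```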
DSR: Hi Bruce. As someone as experienced as you are, what would you recommend for starting a career in sound effects editing on my own? What kind of exercises do you think are best for practicing this? Also, how will I know when I’m ready to find a job in sound?
BT: Hello. The quickest and easiest thing you could do to get started, I think, would be to check out a couple of books on sound recording and editing for film or television. There are several of them available online at sites like Amazon.com. You might want to buy an inexpensive recorder and microphone so that you can start recording things that interest you. You don’t need a whole Protools system (or Nuendo, or whatever else) to start with; just experiment with recording things so you get a good understanding of how loudly to record things to avoid distortion or how to minimize unwanted noises such as wind.
Probably the best thing of all is just listen wherever you go. Start making mental notes of how loud things are like cars starting or dog barks, how quickly something goes by or the natural reverb created by tall buildings or parking garages. The better understanding you have of how things actually sound, the easier it will be to re-create that in an edit bay. There are lots of sound effects libraries available online, by the way, in case you might like to purchase a cd just to explore what someone else has recorded. There are always places like local radio or TV stations, live stage theaters, or colleges that might need volunteers and that would be a great way to get started. Good luck and don’t give up!
DSR: Mr. Bruce, what an amazing month, full of learning! I was wondering if you set limitations on your workflow. For example, not using a certain plugin or sound, a maximum number of tracks to use, etc.
BT: You’re very kind. Thank you. The only limitations really are usually how many tracks I can create in a session because television projects or feature temp dubs simply don’t allow for as many tracks as I might like to have. They just aren’t able to get through that much material in the time allowed if it’s cut that heavily and spread out that widely. As a freelance editor, I work in whatever room I get assigned to and I don’t have control over what plugins come with that system so it’s always a bit of a guess what I might have to work with. Basically, though, as long as there is a good reverb and a good pitch change plugin, which almost every system has today, I can get along pretty well. Sadly, that way I don’t get a lot of exposure to some of the more obscure or newer ones but I try to experiment with them when I have the chance.
I still haven’t found quite the right architecture yet in terms of cutting things that have multiple characteristics to them such as vehicles or weapons. My sessions are usually laid out as a set of consecutive mono tracks followed by a set of consecutive stereo tracks with the subwoof tracks at the bottom of the mono’s and the surround tracks at the bottom of the stereo’s, although that doesn’t necessarily work well if you have, for example, a gunshot which has mono, stereo, surround, and subwoof elements when those tracks aren’t located next to each other. Because the clips are all labeled appropriately, the mixer can find them easily enough but it still bothers me that my gunshot, just as a random example, might show up on the mixing console on faders 3, 4, 5 (mono elements), 12 (subwoof), 21/22, 23/24 (stereo elements), and 35/36 (surrounds). It works fine but it’s just not elegant.
DSR: I’m going to finish my studies in sound for media. I know everything about technical aspects of sound, recording, etc. I’ve done some projects on sound design and editing, but I would like to practice more and get better skills to look for a job in sound editorial. What do you think are the most important skills I need to apply for an internship/apprenticeship in somewhere?
BT: Going through a course of study in school is a great way to make yourself valuable in an internship program. It gives you a good general background so that you know what someone is talking about and allows you to get up to speed much more quickly once you’ve been hired. Additionally, I think the best thing you can do for an entry level position like that is demonstrate an eagerness to learn and be ready to help out with anything that they may need. As an intern or apprentice, by definition, you’re not expected to know everything about the job since they’ll train you once you start, but demonstrating a set of good personal skills is critical. A good attitude, positive energy, proper attire for your interview, the same things that would make you employable in any setting are important here too.
A really good thing to do is find out a little information about the company you want to work for. Do they produce commercials or music videos? Do they only produce sound for video games? Showing knowledge of and interest in their work right at the start is a very positive thing to do. On the other hand, during a job interview, if they ask you, “If we were to hire you, what do you see yourself doing here?”, and you can’t answer because you don’t know what they do, you look very unprofessional. Good luck and I hope you do well.
DSR: Bruce, being a sound effects editor, how do you deal with levels? I’ve always been confused by levels and dynamic range, mostly those used on TV, etc. What kind of level and dynamic range is needed to deliver a sound editing job? Also, could you tell me about the delivery aspects of your workflow? i.e.: formats, names, levels, etc. In other words: how do you deliver your finished work?
BT: Dub stages and sound editing rooms are typically set up to monitor at 85 decibels for theatrical features and 83 db for television shows although on near field speakers in an edit bay that can be a little too loud to listen to comfortably for a whole day of editing. I usually edit about ten db down from that to avoid ear fatigue, a little quieter still for particularly loud sequences. I’ll audition everything at unity volume to make sure of what the sound is really doing, cut the sequence at a low level and then, when I’m finished editing the scene, I’ll turn it back up to play at unity again just so I know how the sequence really plays. That way I only hear things at a higher level a couple of times and not again and again as I work through the scene. Everything gets balanced against production dialog and gets assembled in what we refer to as a dynamic range of sounds.
For example, gun shots, jets, and other very loud sounds get cut at a higher loudness level and things like room tones, and cloth movement get edited much more quietly in relation to production dialog. For the loudest sounds, I’ll often peak in the red on the Protools meters but since I’m monitoring at 75 db, I still have a bit of head room once the session gets to the dub stage. If the sequence is particularly loud such as a running gun battle or car chase, I can use a compressor or limiter plugin to boost the impact of a sound without distorting, or, I can cut the same exact sound on two or three tracks as opposed to one. Each individual track will still only reach 85 db, but in playing them all back together they sum to just a bit louder.
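The arithmetic behind stacking the same sound on multiple tracks is worth spelling out: identical, phase-aligned copies add in amplitude, so n copies sum to 20·log10(n) dB louder, which is why two tracks buy you about 6 dB and three about 9.5 dB without any single track clipping. A one-function sketch:

```python
import math

def coherent_sum_gain_db(num_copies):
    """Level increase when identical, phase-aligned copies of a sound
    play together: amplitudes add, so the gain is 20*log10(n) dB."""
    return 20 * math.log10(num_copies)

# Two copies of the same clip on two tracks: about +6 dB.
# Three copies: about +9.5 dB.
```

Note this is the coherent (identical-clip) case; unrelated sounds sum in power instead, gaining only about 3 dB per doubling.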
For delivery to a dub stage, I try to name every clip, a process known as tagging, although most editors don’t do this very much any more because of time constraints. I believe it’s still an important part of your presentation along with head and tail fades, volume graphing, panning, etc. Since schedules are shorter than ever, and most projects get dubbed straight from the Protools session without printing any cue sheets, most people feel it’s unnecessary to tag the clips. I prefer, however, to name them, because I don’t believe it’s proper etiquette to force a mixer who’s already under pressure on the dub stage, to try to figure out which clip needs to be lowered significantly when all he has to go on visually is the name “CD1022-34.1 mono-dupl-1”. If I name the clip “Big Beefy Car Accel”, he has a much better chance of finding the clip and lowering the level successfully when the client asks him to.
You NEVER want the mixer to have to turn around in that situation and say, “I’m still trying to find it!” Otherwise, I’ll deliver a simple Protools session with all audio kept with it so they won’t have to link to anything on the stage. From prior discussions with my Supervisor, I’ll know what session parameters to deliver such as frame rates, bit rates, with or without a video or audio pulldown and so on. Often, the session will be 24 bit, 48k, broadcast wave files, with a video only pulldown but that can vary with every show and every dub stage.
DSR: Hi Bruce,workflow/philosophy question: In a film context: What thought process do you use to determine whether to cut in a sound from a library vs record something fresh – even if an acceptable sound exists in the library that you are using?
BT: The basic deciding factor is budget. If we can get money into the sound budget to go record new material, that’s really always preferable simply because it’s tailored for that film. Almost anything imaginable can be edited out of most professional libraries today, but it really helps to go out and record specific items to keep each show sounding fresh and interesting. Another factor in deciding new vs. library is whether or not a required sound effect is particularly unique to that film. For something like “Red Line”, a super-high-end exotic car film I worked on, it was critical to go and record all the primary cars used in the film. Certainly, we had recordings of Ferraris and Lamborghinis in the library but these were key parts of the film and we wanted to get recordings that supported the way they were driven on camera. I’m pretty sure NOBODY in town had a recording of a Koenigsegg so we just HAD to go get that one at any rate.
DSR: Hi Bruce, can you tell us some of the “accidents” that happened during some of your field recording sessions and that you used in a show afterwards? Also, what’s the most fascinating sound you ever heard in the field (mechanism, animal, toy…)
BT: Hi Jed. Usually most accidents end up being vehicles or some sort of machinery that pops up unexpectedly in whatever space you’re recording in. Emergency vehicles with sirens going by, helicopters, generators starting up that you didn’t know were there, stuff like that. Once, I was trying to record a long sequence of a garbage truck rolling down the street collecting trash from bins as it came and, after it went by, a nice church bell went off in the distance. The bell’s been in a few different projects over the years. For some reason, two of the more interesting sounds I’ve heard are both military vehicles. Years ago, while living in Reno, Nevada, I went out to the Air Races and wandered out in a field a ways off from the actual airspace the races took place in. What I hadn’t counted on, and probably the reason they tell everyone NOT to go out in surrounding areas, is that for the start of one of the AT-6 races, the planes start from an airborne position and come in about fifty feet above where I was sitting! Eight or ten of these old but still race-worthy World War II fighters went right over me and it was both terrifying and really cool.
The other military moment I mentioned happened a couple of years ago on the Warner Brothers lot in Burbank, California. They were shooting J.J. Abrams’s “Cloverfield” on the back lot near our building and they had three full-on battle-ready National Guard tanks there for the shoot. These things weren’t Hollywood mockups or anything like that. They were extremely serious front-line TANKS. They fired one of them up and our building started to shake. They started the second one up and it got worse. They fired the third one up and the parking lot started to vibrate. Yes, the parking lot was vibrating. The three of them made such a huge rumble we all went out to watch them roll around the backlot and disappear behind the buildings. THAT was fun.
Oh, and one more. I shouldn’t forget there was the thunder from the gates of Hell while waiting at the front entrance to Epcot in Orlando.
DSR: Hi Bruce! Thanks a lot for those really good articles on fx editing. Many sound designers speak very generically about their work and limit themselves to sharing some of the sounds they used, so it’s cool to see some detailed editing workflows and techniques being described.
Anyways, my question was regarding backgrounds. How much do you work perspective and level changes on backgrounds within a scene? Do you try to keep each scene quite constant so it sounds like we are in the same location, or are you cutting more detailed perspective changes based on angles and shots?
BT: Thank you very much. I’m glad you like the articles! They were really fun to do. I’m one of the only people I know that believes backgrounds have a very important place in creating moods and believability in a scene. Very often, they’re just tossed in because there just isn’t time in the schedule to seek out interesting backgrounds and edit them properly. Sadly, “Cricket bg #1” is usually the one that gets played because no one has the time to audition “Cricket bg #2”. I like to find different things and use whatever little odd bits are in them because I think that gives them their best use. For example, I’ll try to find corridor backgrounds that have neat doors in them or traffics that contain good horns and sirens and then I’ll cut them up so the good little bits are placed, as much as possible, in the clear so that they might contribute as much as possible. They still play as backgrounds meaning they play at a fairly low level and run head to tail in a scene but now they have little sync moments as well that I think can be really nice.
I’ll change the level on them if the scene moves interior to exterior or vice versa and I will ride the level on them a bit as the scene plays but I typically don’t cut them to camera angle changes. I find that things just get too ping-pongy if you do that. I do play them for overall perspective though, for example, closer to the barn, the crickets are just a bit louder, while closer to the house, they’re a little quieter. They almost always run the entire length of the scene unless we’re following a character as he or she moves from one environment to another. The typical scenario is having someone walk out of their office, down the hall, and into another office.
For this example, I will definitely have one set of backgrounds for the starting location, move through an intermediate set as they pass other offices, and end up with a third set at the last location. One last device that I find helpful is to edit backgrounds in an additive method. By that I mean I’ll start with a master set of tracks for the location in general, perhaps a waterfall on the edge of a lagoon. As we move through different shots, successively closer and closer to the waterfall, I won’t change them out completely for each cut but instead I’ll keep the base tracks going and add a layer each time the camera shows us new information.
DSR: How often do you automate EQ and when and where?
BT: Thank you so much for the compliment. I don’t usually process anything with track-based plugins, so I end up using an EQ plugin directly on each clip and rendering it manually each time as needed. The reason is that I don’t typically cut that much material from a specific family of recordings that contain a low hum or a high-frequency whine in them across an entire reel (in contrast to cutting production dialog that has a generator hum in it or something like that), so it doesn’t make sense for me to have something working on that whole track just to correct a couple of clips.
I suppose if I were cutting a series of car effects (that had EQ issues) that recur frequently across a reel, that might be a good way to go. Otherwise, I’ll make a determination clip by clip as to whether or not something needs to be EQ’d. Actually, as often as not, I’m not looking to take high or low frequencies out, but to add low frequencies IN. There’s a plugin called MaxxBass that works well but it’s just as easy sometimes to grab a multi-channel EQ plugin and crank up the low-end frequency gain a little.
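The "crank up the low end" move can be illustrated in a few lines of DSP. This is a crude sketch, not how MaxxBass works: adding a one-pole low-passed copy of a signal back to itself behaves like a gentle low shelf, and the `alpha` and `boost` values are purely illustrative:

```python
# Crude bass-boost sketch: mix a one-pole low-passed copy of the signal
# back in, which acts like a gentle low shelf. Values are illustrative.

def bass_boost(samples, alpha=0.05, boost=0.5):
    """Return samples with low frequencies emphasized.

    lp[n] = alpha*x[n] + (1-alpha)*lp[n-1]   # one-pole lowpass, DC gain 1
    y[n]  = x[n] + boost*lp[n]               # lows approach (1+boost)*x
    """
    out, lp = [], 0.0
    for x in samples:
        lp = alpha * x + (1 - alpha) * lp
        out.append(x + boost * lp)
    return out
```

High frequencies pass through the lowpass attenuated, so they are nearly unchanged, while sustained low-frequency content picks up roughly `1 + boost` in gain.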
DSR: How do you build your scenes – background to foreground or vice versa?
BT: Feature films often get divided up between editors and I usually work on sync hard effects but, on the occasions that I end up cutting everything in a reel, I like to start with the backgrounds first and do a complete pass of those simply so that, while that’s getting done, I can let my mind think about what I might want to use for the various hard effects or design elements required. Backgrounds are typically: audition them, download, cut them. Pretty simple thought process. Hard effects and design material are much more open to interpretation and, whereas a shopping mall bg is a shopping mall bg, an alien squid monster can start out as just about anything in the library!
DSR: What do you like to keep as a stereo file and what do you like to keep as a mono file? Do you have guidelines?
BT: I tend to cut most backgrounds in stereo with mono helpers and most hard effects in mono with stereo helpers. Yikes! It really helps out the dialog mixer to have center mono backgrounds to anchor his dialog with in case there are any inherent technical problems or if he needs to marry an ADR line in with production. And on the other side, I find most hard effects play perfectly well as mono sounds although I will definitely sweeten weapons and car interiors, etc. with stereo tracks to give them more impact. If it’s a key gun or maybe a specific interior like a huge cargo plane in flight, I’ll cut those in 5.1, adding in subwoof and surround elements as well. More mundane things like doors (unless it’s a character door or a story point of some kind), I’ll cut in mono.
Also, hard effects often have panning issues associated with them and it’s a little easier to pan them in mono form than as stereo information unless you have a five-channel panner handy.
DSR: What percentage would you say is original material you recorded, and what percentage of your work is from a library?
BT: These days the vast majority of effects comes from library sources. I was lucky enough to get a lot of newly recorded material from John Fasal, who is a seriously talented field recordist, for “Yogi Bear”, but television programs don’t really allow much of that on a weekly schedule. So many projects have been added to various libraries that I have access to that I really just don’t need to go outside of them all that often as a general rule.
DSR: What sample-rate do you record at, edit at and turn your product over to the mixer at?
BT: It depends on what the requirements of both the dub stage and the picture department are but most often the session will be 24 bit, 48k, and broadcast wave.
DSR: Are you editing mostly on speakers or on headphones?
BT: I much prefer to edit using speakers. That’s probably why I’ve never made the switch to cutting dialog. I find that headphones are simply too fatiguing to listen to cars and gunshots through all day long. In the event I end up auditioning non-mastered field recordings from the library, I have to be really careful of something not blasting me if I’m using headphones, whereas with speakers, if something does spike up, at least it’s not all that sound pressure going straight into my ears. And, of course, for editing hard effects, speakers are a more transparent representation of what the material will sound like on a dub stage.