Designing Sound Reader: What technique (or tip) do you wish you had known when you first started doing sound design professionally?
Rodney Gates: I wish I had known how to make something sound large other than just using a reverb tail. One way this can be achieved is by pitching something at multiple intervals – an octave down, two octaves down – and blending with the original. This makes whooshes longer and fatter, and impact sounds beefier. Letting the sounds change pitch and duration naturally is smoother than keeping their length the same as the original, but time-correction has its uses for keeping heavy sounds short (as long as they are blended a bit with the original, most pitching artifacts are hidden in the process). Also, working with the highest-sample-rate and bit-depth files you can helps a lot with fidelity (24-bit / 96kHz is great, with 192kHz being even better). The higher sample rates help keep the high end of the sound as the upper harmonics are brought down during the pitching process, whereas rates of 48kHz and below have their limits, causing sounds to get darker the further down they are pitched.
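(A minimal sketch of that layering idea, assuming Python with librosa, NumPy, and soundfile; the file name and blend gains are purely illustrative.)

```python
import numpy as np
import librosa
import soundfile as sf

# Hypothetical source file; load mono at a high rate to keep the top end.
y, sr = librosa.load("impact.wav", sr=96000, mono=True)

def octave_down(x, sr, octaves=1):
    # Varispeed pitch drop: resample upward, then treat the result as if it
    # were still at the original rate, so pitch falls and the sound lengthens
    # naturally (the smoother option described above).
    return librosa.resample(x, orig_sr=sr, target_sr=sr * (2 ** octaves))

oct1 = octave_down(y, sr, 1)   # one octave down, twice as long
oct2 = octave_down(y, sr, 2)   # two octaves down, four times as long

def pad_to(x, n):
    return np.pad(x, (0, n - len(x)))

# Pad every layer to the longest one and blend with the original;
# the dry layer masks most of the pitching artifacts.
n = max(len(y), len(oct1), len(oct2))
layered = pad_to(y, n) + 0.7 * pad_to(oct1, n) + 0.5 * pad_to(oct2, n)
layered /= np.abs(layered).max()

sf.write("impact_big.wav", layered, sr)
```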
DSR: What is your weapon of choice (or method) to create production elements (whoosh, sci-fi sounds, etc)?
RG: I like to use Waves’ Doppler plug-in for creating whoosh effects. However, I wish it handled audio files at a higher sample rate than 48kHz, since pitching sounds is its core use.
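(Waves Doppler is a commercial plug-in, so this is not its method; just a minimal sketch of one way to build a whoosh from scratch, a swept filter over noise with an amplitude swell, assuming Python with NumPy and purely illustrative parameters.)

```python
import numpy as np

sr = 96000
dur = 2.0
n = int(sr * dur)

rng = np.random.default_rng(0)
noise = rng.standard_normal(n)

# Sweep a one-pole low-pass filter's cutoff up and back down over the noise;
# the moving spectral tilt is what reads as a "whoosh".
sweep = np.sin(np.pi * np.arange(n) / n)        # 0 -> 1 -> 0 over the clip
cutoff = 200.0 * (2.0 ** (6.0 * sweep))         # 200 Hz up ~6 octaves and back
alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff / sr)

whoosh = np.empty(n)
state = 0.0
for i in range(n):
    # One-pole low-pass: y[i] = y[i-1] + alpha * (x[i] - y[i-1])
    state += alpha[i] * (noise[i] - state)
    whoosh[i] = state

# Amplitude swell peaking at the midpoint of the pass, then normalize.
whoosh *= np.hanning(n)
whoosh /= np.abs(whoosh).max()
```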
For electronic sci-fi sounds, adding a light MetaFlanger is nice to “tech” something up a bit. For a little low-end emphasis, a rectified sine wave (via a Pro Tools plug-in) around 80Hz (or sweeping around that area) is cool to add.
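(A rough sketch of that rectified-sine layer, assuming Python with NumPy; the fade length, mix gain, and the "impact" array it gets tucked under are hypothetical.)

```python
import numpy as np

sr = 48000
dur = 0.5
t = np.arange(int(sr * dur)) / sr

# 80 Hz sine, half-wave rectified: clipping the negative half keeps the 80 Hz
# fundamental but adds harmonics, so the layer reads as more than a pure sub.
sub = np.maximum(np.sin(2 * np.pi * 80.0 * t), 0.0)
sub -= sub.mean()                              # remove the DC offset rectification adds

# Short 10 ms fades so the layer doesn't click at its edges.
fade = np.clip(np.minimum(t, dur - t) / 0.01, 0.0, 1.0)
sub *= fade

# Hypothetical: 'impact' is a mono array of the same length, loaded elsewhere.
# mixed = impact + 0.3 * sub
```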
Plug-in automation is your friend, too – it can add a lot of movement to your sounds when using it with plugs like MondoMod or Enigma, etc.
DSR: What file formats do you prefer to have a resume delivered in and what does it take to get you to watch someone’s reel online?
RG: Usually a QuickTime video is fine (stereo or 5.1), whether it’s streaming online or on disc. YouTube at high quality or Vimeo are decent choices too (if showing video). I try to watch everything that’s sent to me and offer constructive feedback.
DSR: I have a degree in music, I have mixed a lot of live shows, and I have also done voiceover. I really want to get into this industry. What is the best way to get a foot in the door?
RG: Read what I covered in my initial Bio / interview on DesigningSound.org – I go over a lot of it there. Be persistent and keep your eyes open for what positions become available.
DSR: What has been your favorite project you have worked on thus far?
RG: From a creativity standpoint, it has been “Transformers: War For Cybertron”. I had wanted to work in the sci-fi realm for a while before finally getting to do it.
DSR: Thanks for the great articles, Rodney. In your “featured sound designer of the month” photo on this site, it seems like you are using a lot of hardware. Can you tell us what kind of hardware boxes you use, what you use them for, and which is which in the picture?
RG: That photo was my former office at High Moon. Mackie monitoring all around; on the left, a Digidesign PRE, a PreSonus ADL600 mic pre we were trying out, and below that an Avalon 2022. Those are Behringer Ultracurve Pro outboard EQs in the upper left, and a Mackie HUI controller in the center. On the right (you can’t really see them) are Dorrough meters and a Dolby DP564 decoder.
At SOE, I run Pro Tools | HD on an 8-core Mac Pro with a Euphonix MC Mix controller, through a Blue Sky 5.1 system with Dorrough meters. We also use Soundminer Pro for our sound effects database organization. We don’t have a lot of outboard gear, since we do not have a recording space right now.
DSR: What sort of things do you look for in new talent? Is it also worth learning C++ as well as audio middleware tools?
RG: New talent should use the tools and resources that are available out there – see my “Getting the Gig” article on DesigningSound.org for additional information. There isn’t really a need to learn C++ unless you are heading down the programming route. However, audio programming is a specialized niche and a worthy career choice, as audio programmers are usually in heavy demand.
DSR: How often do you find yourself needing to use sounds that you didn’t record yourself, or using any synths?
RG: We use commercial library elements every day, though not usually by themselves (they are always edited in some way, or are just part of a more complex sound). Synths can come in handy when going for that kind of synthetic effect (we got a lot of mileage out of a Moog Voyager during the production of “Transformers: War For Cybertron”). However, I usually would only use a synth for part of a sound. Take a spell-cast in a fantasy setting: if it’s a pad sound, it can make a nice tail for a sound that is otherwise made up of more organic elements.
DSR: How often, if ever, do you get usable results from severe pitch or speed (or both) alteration of any recordings?
RG: I do this nearly every day. Pitch is the first thing I usually alter when designing a sound for something. I like to double-up sounds that are pitched an octave or more to make something sound heavier, or pitch it much higher if it’s meant to sound smaller / faster.
DSR: Any tips for using convolution or vocoding, or similar in sound effect design?
RG: I haven’t experimented much with using the convolution process for things other than reverb tails yet, though I’m sure you could get some great results (using Altiverb or other processors). Vocoding is cool but has its limitations – if you’re going for robotic voices, you can do a lot with it.
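(A minimal sketch of convolution used for design rather than reverb, assuming Python with NumPy, SciPy, and soundfile; the file names are hypothetical, and both files are assumed to be mono at the same sample rate.)

```python
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

# Hypothetical mono source files at the same sample rate.
dry, sr = sf.read("metal_hit.wav")
texture, sr2 = sf.read("glass_debris.wav")
assert sr == sr2

# Convolving one sound with another smears each sound's spectrum and envelope
# across the other, which can "print" a texture onto a hit; it is the same
# process a convolution reverb applies, just with an arbitrary sound instead
# of a room impulse response.
wet = fftconvolve(dry, texture, mode="full")
wet /= np.abs(wet).max()                 # convolution can get very loud

# Blend with the dry signal (padded to the wet length) and write the result.
dry_padded = np.pad(dry, (0, len(wet) - len(dry)))
mix = 0.6 * dry_padded + 0.4 * wet
sf.write("convolved_hit.wav", mix, sr)
```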
DSR: As an Audio Director, how do you make sure that your team of sound designers, audio programmers, and composers share the same vision regarding the game’s audio?
RG: Good question. A lot of the time, with sound designers on the team, this can be automatic, as we tend to think the same way on a lot of things already. With “Clone Wars”, it was easy to do since the game is built on the canon of the Star Wars universe, which we are all familiar with. For other projects, I can generally use terms like “gritty” or “slick sci-fi” to impart the ideas, and as we progress into the project it becomes a second-nature way of creating the sounds for it.
For composers, the approach needs to be more carefully structured. They benefit greatly from concept art, video capture of gameplay, story / lore, and descriptions of a game’s pillars from meetings (as they are usually contract and out-of-house). Sometimes a course correction or two in the beginning may be necessary to help set the tone or palette, and then they’re off and running.
DSR: Also, what kinds of word descriptions (adjectives, verbs, or onomatopoeia) do you use with your sound design team, so that everyone is on the same page?
RG: I actually tend to do this a lot by emulating a sound verbally – especially when discussing the syllables a sound should have and how they play into one another. Usually the sound designer I’m speaking with gets it, with minimal actual word usage needed.
Rishi Dani says
Thanks, Rodney. It was a wonderful month full of great insights on game audio. I am sure I will be coming back quite often to your articles on DS.org. Cheers!
Rodney Gates says
Thank you for reading them, Rishi. Glad I could help.
Tom Barker says
Cheers Rodney! Guess I’ll need a mic that goes up to 48k then! :)