Exercising listening in a public outdoor space.
Sound designers have an inherent curiosity about sound. We explore the way sounds work every time we approach a project. With each new opportunity to design a sound, we ask ourselves questions such as: What object or event produced the sound(s)? Where is the sound source located in relation to the listener? And, just as importantly, how does (or how will) the sound impact an audience’s emotional state when heard?
It goes without saying that producing our own sonic work, and critically listening to and dissecting the works of others (as Berrak Nil Boya explored and expanded on in her recent post), will make us stronger, better critical listeners. Along with these practices, though, it is invaluable to step away from evaluating completed, produced works and to critically listen to alternate sound sources, sometimes in new ways; just like exercising a muscle, the more angles from which you can target your critical listening “muscle”, the stronger and more well-rounded it becomes.
The question, then, is: other than evaluating an existing game or film’s audio as it was intended, how, and to what, can we listen in order to hone our listening abilities?
This post aims to add to that conversation by offering a few exercises I’ve picked up and augmented over the years and still use to this day. Once again, just like any exercise routine, training your critical listening is an ongoing responsibility for any sound designer: while vitally important early in your career, continued practice is essential to maintain a high level of critical listening fitness.
Back around the time I was first starting out, I remember opening up a demo of Cubase VST (on my trusty PowerMac 6400) and looking through the various menus. Everything seemed pretty standard, but one thing in particular caught my eye: a menu item labeled “Ears Only”. Curious, I clicked on it, only to have my monitor go completely blank. After a few seconds of panic, thinking I had broken everything, I realized that Steinberg had programmed a mode that completely disabled the monitor and forced you to just listen. At first, this option seemed like a strange addition. Why, when I’m creating sound, would I not be listening to what I’m doing? Listening while working with audio seemed like a no-brainer. However, after gaining a little more experience, this “just listen” mode began to make a lot more sense.
If you’ve ever wondered what it takes to put together a compelling soundtrack for factual television, Bob Bronow’s interview over at A Sound Effect makes for worthy reading. The Emmy Award-winning sound designer and re-recording mixer reflects on his workflow creating the sound for the reality series Deadliest Catch, and discusses the particular challenges of working within the factual genre.
Check out Bob’s interview here.
When I saw/heard Gravity last year, it set me off on an exploration of dialogue panning, to such an extent that I experimented with some fairly extreme panning in the film I was working on at the time. My experiment proved to be, well, inconclusive at best. So I went back to Gravity to see just how the panning worked within the context of the film, then decided to look beyond it and discovered some interesting dialogue panning going on in Cars (2006) and Strange Days (1995) as well.
My first exposure to noise reduction processing was with Waves X-Noise, working clip-by-clip: finding a snippet of noise in the clear, setting the noise profile, then processing the clip before moving to the next one. This offline processing method, while effective, would end up taking a lot of time, especially on long-form projects. Similarly, if a processed clip needed its noise reduction altered, you would have to restore the un-processed version, find the noise print again, re-adjust the parameters, and then re-process it. When time is short (and when isn’t it?), real-time processes begin to look like a much better option. Unfortunately, plugins like X-Noise or iZotope RX Denoiser can’t be used effectively in real time due to the enormous processing overhead required and the unmanageable latency added to the signal. With plugins like the new RX 3 Dialog Denoiser and Waves’ WNS and W43, real-time noise processing without expensive hardware is feasible, but it requires a change in workflow to use effectively. As I found once I started using the RX 3 Dialog Denoiser, putting one instance per dialog track was an inefficient use of CPU resources, and simply putting an instance on the main dialog bus proved problematic, especially when dealing with adjacent clips that had drastically different noise profiles.