Guest Contribution by Dennis Foley
The reflections in your control room represent the sound of your room. What sound do you ideally want from your control room? Do you want only the direct, straight-line sound from your speakers? Do you want to include the sound of the reflections from the room in your mix? If so, how much of the room sound do you want?
Room sound is reflections. Reflections from your side walls, floor, ceiling, and rear wall are all part of the sound of the room. They are present in every room and must be managed correctly if you are to hear all the sounds in your mix. Let's identify the problematic room boundary surfaces that produce these reflections.
Gordon Hempton has a new article up on his Quiet Planet website talking about recording waves.
Find a beach exposed to the open ocean (high-energy) with a large tidal change (higher latitudes) at least several miles from the nearest frequently used road (wilderness) that slopes sharply, so at low tide you encounter diverse substrates (sand, gravel, cobblestones).
Head here to read the full article.
My first exposure to noise reduction processing was with Waves X-Noise, working clip-by-clip: finding a snippet of noise in the clear, setting the noise profile, then processing the clip before moving on to the next one. This offline method, while effective, ends up taking a lot of time, especially on long-form projects. Likewise, if a processed clip needed its noise reduction altered, you had to restore the unprocessed version, find the noise print again, re-adjust the parameters, and re-process it. When time is short (and when isn't it?), real-time processes begin to look like a much better option. Unfortunately, plugins like X-Noise or iZotope's RX Denoiser can't be used effectively in real time due to the enormous processing overhead required and the unmanageable latency added to the signal. With plugins like the new RX 3 Dialog Denoiser and Waves' WNS and W43, real-time noise processing without expensive hardware is feasible, but it requires a change in workflow to use effectively. As I found once I started using the RX 3 Dialog Denoiser, putting one instance per dialog track was an inefficient use of CPU resources, and simply putting an instance on the main dialog bus proved problematic, especially when dealing with adjacent clips that had drastically different noise profiles.
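The noise-print workflow described above is, at its core, spectral subtraction: average the magnitude spectrum of a noise-only snippet, then subtract that estimate from each frame of the clip. Here is a minimal offline sketch of that idea (my own NumPy illustration, not Waves' or iZotope's actual algorithm; the function name and parameters are hypothetical):

```python
import numpy as np

def spectral_subtract(signal, noise_print, frame=512, hop=256, floor=0.05):
    """Noise-print denoising sketch: estimate the noise's average magnitude
    spectrum from a noise-only snippet, then subtract that estimate from
    each windowed frame of the signal, keeping a spectral floor to limit
    artifacts, and overlap-add the frames back together."""
    win = np.hanning(frame)

    # Average magnitude spectrum of the noise print ("set the noise profile")
    noise_frames = [np.abs(np.fft.rfft(win * noise_print[i:i + frame]))
                    for i in range(0, len(noise_print) - frame + 1, hop)]
    noise_mag = np.mean(noise_frames, axis=0)

    out = np.zeros(len(signal))
    for i in range(0, len(signal) - frame + 1, hop):
        spec = np.fft.rfft(win * signal[i:i + frame])
        mag = np.abs(spec)
        # Subtract the noise estimate, never dipping below a fraction of
        # the original magnitude (the "floor" keeps musical-noise at bay)
        cleaned = np.maximum(mag - noise_mag, floor * mag)
        out[i:i + frame] += np.fft.irfft(cleaned * np.exp(1j * np.angle(spec)))
    return out

# Toy example: a 440 Hz tone buried in broadband noise
sr = 8000
rng = np.random.default_rng(1)
noise = 0.3 * rng.standard_normal(2 * sr)
noisy = np.sin(2 * np.pi * 440 * np.arange(2 * sr) / sr) + noise
denoised = spectral_subtract(noisy, noise[:sr])  # first second = noise "in the clear"
```

Every clip needs its own noise print, which is exactly why the offline approach scales so poorly on long-form work, and why a bus instance struggles when adjacent clips have different noise profiles.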
This article is going to be a little less template, and a little more workflow. We all have our favorite plug-ins. We probably also all have plug-ins we'd love to use, but run into limitations that keep us from pulling them out of the toolbox. For instance, I have a couple of plug-ins from Waves that can add some really cool sonic character when I'm designing a sound, but they also introduce more noise than I like when I start pushing them too hard. The problem is, I like pushing those plug-ins hard to get that character. Even when I'm not pushing them too hard, I can still hear noise added by the algorithm. I'm not a fan of unwanted noise. So, I recently started experimenting with an old analog technique…
Guest contribution by Douglas Murray
I was inspired to finish this write-up after reading the feature list of the new Zynaptiq UNFILTER plugin. Their web site says:
You can also apply the measured filter response from one recording to another – placing the two in the same acoustic “world”. Or you can create roomtone to fill editing gaps, by applying a measured filter response to noise.
Then I read Shaun Farley’s tweet on the subject and saw that it was quickly followed up by Mike Thornton’s Pro Tools Expert YouTube video: Using Zynaptiq’s UNFILTER Plug-in To Create Room Tone From Pink Noise. I am looking forward to trying UNFILTER for this and its many other promising features. Meanwhile, there is another way to “create roomtone to fill editing gaps” which only requires a convolution reverb plug-in that many of us already own.
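The idea behind the convolution-reverb trick is simply to convolve broadband noise with an impulse response captured in the target space, so the noise takes on the room's frequency and decay character. A minimal offline sketch of that operation (my own NumPy illustration; in practice you would load a recorded IR of the actual location rather than the synthetic decay used here):

```python
import numpy as np

def make_room_tone(noise, room_ir, gain=0.5):
    """Convolve noise with a room impulse response, imprinting the room's
    frequency and decay character on the noise. This is the same operation
    a convolution reverb plug-in performs in real time."""
    tone = np.convolve(noise, room_ir)[: len(noise)]
    peak = np.max(np.abs(tone))
    return gain * tone / peak  # normalize, then scale to taste

# Toy example: white noise through a crude, exponentially decaying "room"
sr = 8000
rng = np.random.default_rng(0)
noise = rng.standard_normal(sr)                      # 1 second of noise
t = np.arange(int(0.1 * sr)) / sr
room_ir = np.exp(-t / 0.03) * rng.standard_normal(t.size)
tone = make_room_tone(noise, room_ir)
```

Feeding pink noise instead of white noise, or an IR shot on set, gets you correspondingly closer to a usable fill for dialog editing gaps.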