As part of our continuing goal of promoting greater cross-discipline learning with media editing site Art of the Guillotine (Aotg.com), we’d like to bring your attention to their recent article Audio Levels and Metering: Pt. 1. While it is largely focused on educating editors about good level and metering practices in non-linear editors, it contains valuable information and serves as a great refresher on metering approaches even if you’re well versed in the subject. It also features a great side-by-side comparison video of four common meter types. Check it out here!
Guest Contribution by Dennis Foley
The reflections in your control room represent the sound of your room. What sound do you ideally want from your control room? Do you want only the direct, straight-line sound from your speakers? Do you want to include the sound of the reflections from the room in your mix? If so, how much of the room sound do you want?
Room sound is reflections. Reflections from your room’s walls, floor, ceiling, and rear wall are all part of the sound of the room. They are present in all rooms and must be managed correctly if you are to hear all the sounds in your mix. Let’s identify the problematic room boundary surfaces that produce these reflections.
Gordon Hempton has a new article up on his Quiet Planet website talking about recording waves.
Find a beach exposed to the open ocean (high-energy) with a large tidal change (higher latitudes) at least several miles from the nearest frequently used road (wilderness) that slopes sharply, so at low tide you encounter diverse substrates (sand, gravel, cobblestones).
Head here to read the full article.
My first exposure to noise reduction processing was with Waves X-Noise, working clip-by-clip: finding a snippet of noise in the clear, setting the noise profile, then processing the clip before moving on to the next one. This offline processing method, while effective, ends up taking a lot of time, especially on long-form projects. Similarly, if a processed clip needed its noise reduction altered, you had to restore the unprocessed version, find the noise print again, re-adjust the parameters, and then re-process it. When time is short (and when isn’t it?), real-time processes begin to look like a much better option. Unfortunately, plugins like X-Noise or iZotope RX Denoiser can’t be used effectively in real time due to the enormous amount of processing overhead required and the unmanageable latency added to the signal. With plugins like the new RX 3 Dialog Denoiser and Waves’ WNS and W43, real-time noise processing without expensive hardware is feasible, but it requires a change in workflow to use effectively. As I found once I started using the RX 3 Dialog Denoiser, putting one instance per dialog track was an inefficient use of CPU resources, and simply putting an instance on the main dialog bus proved problematic, especially when dealing with adjacent clips that had drastically different noise profiles.
This article is going to be a little less template, and a little more workflow. We all have our favorite plug-ins. We probably also all have plug-ins we’d love to use but run into limitations that keep us from pulling them out of the toolbox. For instance, I have a couple of plug-ins from Waves that can add some really cool sonic character when I’m designing a sound, but they also introduce more noise than I like when I start pushing them too hard. The problem is, I like pushing those plug-ins hard to get that character. Even when not pushing them too hard, I can still hear noise added by the algorithm. I’m not a fan of unwanted noise. So, I recently started experimenting with an old analog technique…