Guest Contribution by Scott Kramer
With Haunting Melissa, we sought to bring feature-film-quality sound to an iOS app. It was an incredible experience, so when Neal Edelstein asked me to join the Hooked Digital Media team for Haunting Melissa: Dark Hearts, I jumped at the chance. Like many of us, my “day job” is designing sound for feature films and television. Hooked is breaking new ground by delivering filmed content via an app, and we thought the sound should be high-quality but also pioneering.
With that in mind, I entered an R&D phase to develop new standards and technologies for this medium. I’m a firm believer in mixing a project in the format and monitoring environment where it will most commonly be heard. We decided to mix these stories on headphones in the hopes that the audience would prefer that experience over the Apple device’s built-in speaker, and we’ve found that they often do. Next, I began to research ways to deliver a truly immersive experience using an ordinary pair of headphones.
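To give a sense of what headphone immersion rests on, here is a minimal sketch of the two cues that binaural techniques exploit: interaural time difference (ITD) and interaural level difference (ILD). This is not the pipeline used on Dark Hearts; the function name, the Woodworth ITD approximation, and the ~6 dB level-difference ceiling are all illustrative assumptions.

```python
# Illustrative only -- a crude ITD/ILD panner, not the author's actual
# binaural processing chain. Parameter values are rough textbook figures.
import numpy as np

def pan_binaural(mono, sample_rate, azimuth_deg,
                 head_radius_m=0.0875, speed_of_sound=343.0):
    """Return (left, right) channels with simple ITD and ILD applied.

    azimuth_deg: 0 = straight ahead, positive = source to the right.
    """
    az = np.radians(azimuth_deg)
    # Woodworth approximation for the interaural time difference
    itd = (head_radius_m / speed_of_sound) * (az + np.sin(az))
    delay_samples = int(round(abs(itd) * sample_rate))
    # Crude level difference: attenuate the far ear, up to ~6 dB at 90 degrees
    far_gain = 10 ** (-6.0 * abs(np.sin(az)) / 20.0)
    delayed = np.concatenate([np.zeros(delay_samples), mono])
    direct = np.concatenate([mono, np.zeros(delay_samples)])
    if azimuth_deg >= 0:  # source on the right: left ear arrives late and quieter
        return delayed * far_gain, direct
    return direct, delayed * far_gain
```

Real binaural rendering convolves each ear with measured head-related transfer functions (HRTFs), which also capture the spectral filtering of the pinna; the sketch above only shows why ordinary stereo headphones can carry directional information at all.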
Let’s start out with what to listen for in a recording location. Naturally, we’re always going to be looking for a space that isn’t going to introduce too many environmental and human-generated artifacts into the recording, but the physical layout and acoustic properties of a location can contribute as much character to your recordings as microphone selection…sometimes even more. On top of that, recording vehicles and weaponry (what you’ve specialized in) isn’t something you can do just anywhere. So, what do you listen for when scouting potential recording sites?
The biggest problem I face when searching for a recording location is traffic, especially near airports and expressways. I’ve scheduled multiple jobs where I had to find ideal locations away from these environments. Fortunately, I live and work in a quieter area, so I don’t have to travel too far. However, that rare Ferrari I need to record is located in the middle of downtown, so it’s crucial to befriend generous car owners who are willing to drive an hour or so to a quieter location. Most microphones I’ve tried are quite sensitive and capture unwanted background sounds, which is why I often use my Sennheiser MKH-418s M/S shotgun mic. For isolation with a mono mic, I use either my Neumann 82i or the Rode NTG8. On bigger-budget jobs I will rent the Neumann RSM-191s (probably one of the best field recording mics ever made).
Testing out Steinberg Cubase VST Connect and its remote recording capabilities with Veit Renn
Working collaboratively can be riddled with logistical headaches. Miscommunication, lack of clarity in project goals, last-minute revisions, and plain bad luck or circumstance can lead any project down a path of frustration. These problems become more apparent when the element of remote/online/virtual collaboration is added, since it relies on more indirect forms of communication (e.g. voicemail, email, instant messaging).
My first exposure to noise reduction processing was with Waves X-Noise, working clip-by-clip: finding a snippet of noise in the clear, setting the noise profile, then processing the clip before moving to the next one. This offline processing method, while effective, would end up taking a lot of time, especially on long-form projects. Similarly, if a processed clip needed its noise reduction altered, you had to restore the un-processed version, find the noise print again, re-adjust the parameters, and then re-process it. When time is short (and when isn’t it?), real-time processes begin to look like a much better option. Unfortunately, plugins like X-Noise or iZotope RX Denoiser can’t be used effectively in real-time due to the enormous amount of processing overhead required and the unmanageable latency added to the signal. With plugins like the new RX 3 Dialog Denoiser and Waves’ WNS and W43, real-time noise processing without expensive hardware is feasible, but it requires a change in workflow to utilize effectively. As I found once I started using the RX 3 Dialog Denoiser, putting one instance on each dialog track was an inefficient use of CPU resources, and simply putting an instance on the main dialog bus proved problematic, especially when dealing with adjacent clips that had drastically different noise profiles.
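For readers unfamiliar with what “setting the noise profile” means under the hood, here is a minimal sketch of the idea in its simplest form, magnitude spectral subtraction. This is not X-Noise’s or RX’s actual algorithm (those are far more sophisticated); the function name, frame size, and spectral floor are illustrative assumptions.

```python
# Illustrative only -- simplistic spectral subtraction, not any plugin's
# actual algorithm. Shows the "noise profile" concept: learn the noise
# spectrum from a noise-only snippet, then subtract it frame by frame.
import numpy as np

def denoise_spectral_subtraction(clip, noise_snippet, frame=1024, floor=0.05):
    """Offline, clip-by-clip denoising in the magnitude domain."""
    # Noise profile: average magnitude spectrum of the noise-only snippet
    noise_frames = [noise_snippet[i:i + frame]
                    for i in range(0, len(noise_snippet) - frame + 1, frame)]
    profile = np.mean([np.abs(np.fft.rfft(f)) for f in noise_frames], axis=0)

    out = np.zeros(len(clip))
    for i in range(0, len(clip) - frame + 1, frame):
        spec = np.fft.rfft(clip[i:i + frame])
        mag = np.abs(spec)
        # Subtract the profile, keeping a small spectral floor to limit artifacts
        clean_mag = np.maximum(mag - profile, floor * mag)
        # Resynthesize with the original (noisy) phase
        out[i:i + frame] = np.fft.irfft(clean_mag * np.exp(1j * np.angle(spec)), frame)
    return out
```

The clip-by-clip pain described above follows directly from this structure: every clip needs its own noise-only snippet, and changing any parameter means re-running the whole pass. Real-time denoisers avoid the manual profiling step by estimating the noise statistics adaptively, which is exactly why they behave badly when adjacent clips have drastically different noise floors.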
This article is going to be a little less template, and a little more workflow. We all have our favorite plug-ins. We probably also all have plug-ins we’d love to use, but run into limitations that keep us from pulling them out of the toolbox. For instance, I have a couple of plug-ins from Waves that can add some really cool sonic character when I’m designing a sound, but they also introduce more noise than I like when I start pushing them too hard. The problem is, I like pushing those plug-ins hard to get that character. Even when not pushing them too hard, I can still hear noise added by the algorithm. I’m not a fan of unwanted noise. So, I recently started experimenting with an old analog technique…