[ed. This article was originally intended as part of our focus on women in March but was delayed for a few reasons. A good article is still a good article...enjoy!]
Guest Contribution by April Tucker
Thanks to everyone who took the time to comment, share, and reach out about my recent contribution, “Women in Audio: Yes, We Exist!” I was floored by the response, and hope we can continue an open dialog about what we can do to accept anyone who wants to work in our field.
Originally, I wasn’t going to write about being a woman in the industry, and I submitted the article below (which we decided to still share this month). While gender equality is a challenge, we all face challenges in our careers. Those challenges don’t just come from what we look like, where we live (or don’t live), age, race, or gender. In fact, sometimes those traits can be an advantage: Being unique or offering a different perspective can be a huge aid in creativity.
Guest Contribution by Scott Kramer
With Haunting Melissa, we sought to bring feature film quality sound to an iOS app. It was an incredible experience, so when Neal Edelstein asked me to join the Hooked Digital Media team for Haunting Melissa: Dark Hearts, I jumped at the chance. Like many of us, my “day job” is designing sound for feature films and television. Hooked is breaking new ground by delivering filmed content via an app, and we thought the sound should be high-quality but also pioneering.
With that in mind, I entered an R&D phase to develop new standards and technologies for this medium. I’m a firm believer in mixing a project in the format and monitoring environment where it will most commonly be heard. We decided to mix these stories on headphones in the hopes that the audience would prefer that experience over the Apple device’s built-in speaker, and we’ve found that they often do. Next, I began to research ways to deliver a truly immersive experience using an ordinary pair of headphones.
Let’s start out with what to listen for in a recording location. Naturally, we’re always going to be looking for a space that isn’t going to introduce too many environmental and human-generated artifacts into the recording, but the physical layout and acoustic properties of a location can contribute as much character to your recordings as microphone selection…sometimes even more. On top of that, recording vehicles and weaponry (what you’ve specialized in) isn’t something you can do just anywhere. So, what do you listen for when scouting potential recording sites?
The biggest problem I face when searching for a recording location is traffic, especially near airports and expressways. I’ve scheduled multiple jobs where I had to find ideal locations away from these environments. Fortunately, I live and work in a quieter area, so I don’t have to travel too far. However, that rare Ferrari I need to record is located in the middle of a downtown, so it’s crucial to make generous car-owner friends who are willing to drive an hour or so to a quieter location. Most microphones I’ve tried are quite sensitive and readily capture unwanted background sounds. This is why I often use my Sennheiser MKH-418s M/S shotgun mic. For isolation with a mono mic I use either my Neumann 82i or the Rode NTG8. On bigger-budget jobs I will rent the Neumann RSM-191s mic (probably one of the best field recording mics ever made).
Testing out Steinberg Cubase VST Connect and its remote recording capabilities with Veit Renn
Collaborative projects can be riddled with logistical headaches. Miscommunication, unclear project goals, last-minute revisions, and plain bad luck or circumstance can lead any project down a path of frustration. These problems become more apparent when remote/online/virtual collaboration is added, since it relies on more indirect forms of communication (e.g., voicemail, email, instant messaging).
My first exposure to noise reduction processing was with Waves X-Noise, working clip by clip: finding a snippet of noise in the clear, setting the noise profile, then processing the clip before moving to the next one. This offline processing method, while effective, ends up taking a lot of time, especially on long-form projects. Similarly, if a processed clip needed its noise reduction altered, you had to restore the unprocessed version, find the noise print again, re-adjust the parameters, and then re-process it. When time is short (and when isn’t it?), real-time processing begins to look like a much better option.

Unfortunately, plugins like X-Noise or iZotope RX Denoiser can’t be used effectively in real time due to the enormous processing overhead required and the unmanageable latency they add to the signal. With plugins like the new RX 3 Dialog Denoiser and Waves’ WNS and W43, real-time noise processing without expensive hardware is feasible, but it requires a change in workflow to use effectively. As I found once I started using the RX 3 Dialog Denoiser, putting one instance per dialog track was an inefficient use of CPU resources, and simply putting an instance on the main dialog bus proved problematic, especially when dealing with adjacent clips that had drastically different noise profiles.
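To make the "learn a noise print, then process" idea concrete, here is a minimal spectral-subtraction sketch in Python. This is not how X-Noise or RX is implemented internally; it is just the textbook version of the same workflow, with all function names my own: estimate an average noise magnitude spectrum from a noise-only snippet, then subtract it frame by frame from the clip.

```python
# Minimal sketch of noise-print spectral subtraction, illustrating the
# offline "learn, then process" workflow described above. Illustrative
# only; not any plugin's actual algorithm or API.
import numpy as np

def stft(x, frame=512, hop=256):
    """Windowed short-time FFT frames of a mono signal."""
    win = np.hanning(frame)
    n = 1 + (len(x) - frame) // hop
    return np.array([np.fft.rfft(win * x[i*hop : i*hop+frame]) for i in range(n)])

def istft(frames, frame=512, hop=256):
    """Overlap-add resynthesis, normalized by the summed window energy."""
    win = np.hanning(frame)
    out = np.zeros(hop * (len(frames) - 1) + frame)
    norm = np.zeros_like(out)
    for i, spec in enumerate(frames):
        out[i*hop : i*hop+frame] += win * np.fft.irfft(spec, frame)
        norm[i*hop : i*hop+frame] += win ** 2
    return out / np.maximum(norm, 1e-8)

def denoise(signal, noise_print, reduction=1.0, floor=0.05):
    """Learn a noise profile from a noise-only snippet, then subtract it."""
    profile = np.abs(stft(noise_print)).mean(axis=0)          # the "learn" step
    frames = stft(signal)
    mag, phase = np.abs(frames), np.angle(frames)
    # Subtract the profile from each frame's magnitude, with a spectral
    # floor to avoid the "musical noise" of over-subtraction.
    mag = np.maximum(mag - reduction * profile, floor * mag)
    return istft(mag * np.exp(1j * phase))
```

The re-processing pain described above falls out of this picture: change `reduction` or `floor` and you must re-run `denoise` over the whole clip from the unprocessed original, which is exactly the restore/re-learn/re-process cycle a real-time plugin avoids.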