My first exposure to noise reduction processing was with Waves X-Noise, working clip-by-clip: finding a snippet of noise in the clear, setting the noise profile, then processing the clip before moving on to the next one. This offline method, while effective, took a lot of time, especially on long-form projects. Likewise, if a processed clip needed its noise reduction altered, you had to restore the unprocessed version, find the noise print again, re-adjust the parameters, and re-process. When time is short (and when isn't it?), real-time processing starts to look like a much better option. Unfortunately, plugins like X-Noise or the iZotope RX Denoiser can't be used effectively in real time due to the enormous processing overhead required and the unmanageable latency they add to the signal. With plugins like the new RX 3 Dialog Denoiser and Waves' WNS and W43, real-time noise processing without expensive hardware is feasible, but it requires a change in workflow to use effectively. As I found once I started using the RX 3 Dialog Denoiser, putting one instance per dialog track was an inefficient use of CPU resources, and simply putting an instance on the main dialog bus proved problematic, especially when dealing with adjacent clips that had drastically different noise profiles.
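The clip-by-clip workflow described above (grab a noise-only snippet, build a profile from it, then subtract that profile from the program material) can be sketched as a bare-bones spectral subtraction. This is purely an illustrative approximation, not how X-Noise or RX actually work: the `denoise` function, its fixed non-overlapping frames, and the `reduction` parameter are all hypothetical simplifications, and a real denoiser would add windowing, overlap-add, and smoothing to avoid artifacts.

```python
import numpy as np

def denoise(signal, noise_clip, frame=512, reduction=1.0):
    """Toy offline spectral subtraction: profile the noise, then
    subtract that profile frame-by-frame from the signal."""
    # Build the "noise profile": the mean magnitude spectrum of
    # noise-only frames, like setting a noise print in the clear.
    n = len(noise_clip) // frame * frame
    noise_frames = noise_clip[:n].reshape(-1, frame)
    noise_profile = np.abs(np.fft.rfft(noise_frames, axis=1)).mean(axis=0)

    out = np.zeros(len(signal))
    for start in range(0, len(signal) - frame + 1, frame):
        spec = np.fft.rfft(signal[start:start + frame])
        mag, phase = np.abs(spec), np.angle(spec)
        # Subtract the profile, clamping at zero so magnitudes
        # never go negative.
        mag = np.maximum(mag - reduction * noise_profile, 0.0)
        out[start:start + frame] = np.fft.irfft(mag * np.exp(1j * phase), n=frame)
    return out
```

Even this toy version hints at why real-time use is expensive: every frame needs a forward and inverse FFT, and buffering a full frame before anything can be processed is itself a source of latency.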
Guest Contribution by Pierce O’Toole
Writer/Director Pierce O’Toole shares his thoughts on music and sound design, and how they play into his creative process.
As a writer and director, my biggest concern on any project is the story. Every project has a story that you are trying to tell. When I approach sound, the lens I view it through – or the speaker I hear it through, I guess – is one of story. While this is true of every element of the filmmaking process, sound is unlike any of the others because it’s the only element that follows me through the entire process.
When I begin writing, music is very important. At first, it’s just something atmospheric or energetic, like The Album Leaf or Daft Punk. As I get further along in the writing process, I get a better sense of the story and the tone. At this point, the music has to match. If it doesn’t, it can make it harder to write. I build playlists that I listen to on repeat. I’ve had several roommates that hate me for this, especially when the playlist is less than ten songs. I don’t ever tire of the music, no matter how many times I listen to it, because that music helps put me in the world of the story. I’m not listening to the music; I’m absorbing it.
I recently had the chance to chat with Jad Abumrad, creator and co-host of WNYC’s Radiolab. Each episode of Radiolab explores ideas in science, technology, and the universe at large through a seamless blend of expert interviews, sound design, and music. Together with co-host Robert Krulwich, the show has covered topics such as sleep, colors, cities, and loops, just to name a few. Recently, Radiolab has taken to the stage, touring around the United States and adding a visual element to the show’s already imagery-rich storytelling. Jad and I talked about noise, sound’s ability to create powerful mental images, and how all of that translates into a live show.
Designing Sound: I’ll start off by asking you about noise. When I say the word “noise”, what does that make you think? What does it mean to you?
Jad Abumrad: Honestly, the first thing I think is a particular style of experimental music which is loud and abusive and cacophonous and hurtful, but which I very sparingly employ in scoring the show. I’m thinking Merzbow and the whole “musical pain posse” that sort of tumbled out of him. I always like the idea that those stabs and bursts of noise could kind of catch someone off guard, almost like an idea that sort of hits you in the face before you’re ready for it. There’s something about the storytelling we do where I want those ideas to have that kind of impact. So I think about that kind of music.
Guest Contribution by Abigail Sie
The Great Gatsby was an unusually long sound post-production process. How long did the sound mix take?
Just under three months; it was quite a long haul. A lot of that really was logistics: rethinking and re-conforming the pre-dubs and integrating new material with the edit as the picture changed and was updated throughout the final mix period.
Is that because of the visual effects?
A lot of it was due to VFX. Baz (Luhrmann) coined a phrase, which was part of the whole visual experience in 3D, called “Poetic Glue”. The “Poetic Glue” was a way of combining Fitzgerald’s poetry and making it modern. It was the way flashbacks were used, transferring between time zones. Plus there was a lot of written word on the screen, which was directly from the book. So with that came a changing visual experience. The sound effects and design tried to chase the pace and edit and visual design of the film as they were continually updating.
As dynamics month comes to a close, I thought it would be fun to talk about the evolution of film sound mediums and how they impact dynamics. Since the widespread introduction of sync sound to films in the early 20th century, the technologies involved have changed quite dramatically. From experiments with wax cylinders and phonographs to magnetic tape and Dolby Digital, each evolution in sound technology improved fidelity and dynamics, giving sound designers greater power in creating artificial worlds and engaging the audience.