Guest Contribution by Pierce O’Toole
Writer/Director Pierce O’Toole shares his thoughts on music and sound design, and how they play into his creative process.
As a writer and director, my biggest concern on any project is the story. Every project has a story that you are trying to tell. When I approach sound, the lens I view it through – or the speaker I hear it through, I guess – is one of story. While this is true of every element of the filmmaking process, sound is unlike any of the others because it’s the only element that follows me through the entire process.
When I begin writing, music is very important. At first, it’s just something atmospheric or energetic, like The Album Leaf or Daft Punk. As I get further along in the writing process, I get a better sense of the story and the tone. At this point, the music has to match. If it doesn’t, it can make it harder to write. I build playlists that I listen to on repeat. I’ve had several roommates that hate me for this, especially when the playlist is less than ten songs. I don’t ever tire of the music, no matter how many times I listen to it, because that music helps put me in the world of the story. I’m not listening to the music; I’m absorbing it.
Jad Abumrad at PopTech 2010 – Camden, Maine (Kris Krüg/PopTech via Flickr, used under Creative Commons License)
I recently had the chance to chat with Jad Abumrad, creator and co-host of WNYC’s Radiolab. Each episode of Radiolab explores ideas in science, technology, and the universe at large through a seamless blend of expert interviews, sound design, and music. Together with co-host Robert Krulwich, the show has covered topics such as sleep, colors, cities, and loops, just to name a few. Recently, Radiolab has taken to the stage, touring around the United States and adding a visual element to the show’s already imagery-rich storytelling. Jad and I talked about noise, sound’s ability to create powerful mental images, and how all of that translates into a live show.
Designing Sound: I’ll start off by asking you about noise. When I say the word “noise”, what does that make you think? What does it mean to you?
Jad Abumrad: Honestly, the first thing I think is a particular style of experimental music which is loud and abusive and cacophonous and hurtful, but which I very sparingly employ in scoring the show. I’m thinking Merzbow and the whole “musical pain posse” that sort of tumbled out of him. I always like the idea that those stabs and bursts of noise could kind of catch someone off guard, almost like an idea that sort of hits you in the face before you’re ready for it. There’s something about the storytelling we do where I want those ideas to have that kind of impact. So I think about that kind of music.
Guest Contribution by Abigail Sie
Jay Leno (L) with Wayne Pashley (R); photo by John Fasal
The Great Gatsby had an unusually long sound post-production process. How long did the sound mix take?
Just under three months; it was quite a long haul. A lot of that was really logistics: rethinking and re-conforming the pre-dubs, and integrating new material with the edit as the picture changed and was updated throughout the final mix period.
Is that because of the visual effects?
A lot of it was due to VFX. Baz (Luhrmann) coined a phrase, which was part of the whole visual experience in 3D, called “Poetic Glue”. The “Poetic Glue” was a way of combining Fitzgerald’s poetry and making it modern. It was the way flashbacks were used, transitioning between time periods. Plus there was a lot of written word on the screen, which was taken directly from the book. So with that came a changing visual experience. The sound effects and design tried to chase the pace, edit, and visual design of the film as they were continually updated.
Photo by flickr user Carbon Arc, and used under Creative Commons license.
As dynamics month comes to a close, I thought it would be fun to talk about the evolution of film sound media and how they impact dynamics. Since the widespread introduction of sync sound to film in the early 20th century, the technologies involved have changed quite dramatically. From experiments with wax cylinders and phonographs to magnetic tape and Dolby Digital, each evolution in sound technology improved fidelity and dynamics, giving sound designers greater power in creating artificial worlds and engaging the audience.
Guest Contribution by Louis-Xavier Buffoni, Software Engineer at Audiokinetic
HDR in a Nutshell
HDR (“High Dynamic Range”) audio is a technique that draws its inspiration from the local adaptation method used in HDR imaging, which “attempts to maintain local contrast, while decreasing global contrast.” In audio, this local/global dichotomy applies to time, and contrast refers to loudness instead of brightness. The technique consists of using an automatic mixing system that maps virtual-world loudness to living-room loudness. Clerwall’s phrase “every sound is important, but not at the same time” summarizes the essence of its algorithm: the mapping is adaptive to what is playing in the virtual world, and can be represented by a “sliding window”, as is illustrated in the following figure.
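To make the sliding-window idea concrete, here is a minimal Python sketch. The function name, the 50 dB window width, and the hard culling of sounds below the window are assumptions for illustration, not the actual HDR implementation; a real system also smooths the window’s movement over time (attack/release behavior) rather than snapping it instantly to the loudest sound.

```python
def hdr_mix(sounds, window_db=50.0):
    """Map virtual-world loudness to output gains (a sketch).

    sounds: list of (name, loudness_db) pairs, where loudness is
    expressed in the virtual world and can far exceed the playback
    system's dynamic range.
    Returns {name: output_gain_db}; sounds that fall below the
    window are culled (gain of None).
    """
    if not sounds:
        return {}
    # The top of the window tracks the loudest currently-playing sound.
    top = max(db for _, db in sounds)
    floor = top - window_db
    gains = {}
    for name, db in sounds:
        if db < floor:
            gains[name] = None  # below the window: not worth mixing in
        else:
            # Slide the window so its top maps to 0 dBFS at the output.
            gains[name] = db - top
    return gains

mix = hdr_mix([("explosion", 140.0), ("gunshot", 120.0),
               ("footsteps", 60.0), ("rain", 85.0)])
```

With a 140 dB explosion playing, the window’s floor sits at 90 dB: the gunshot comes through 20 dB down, while the footsteps and rain are culled. Once the explosion ends, the window slides back down and the quiet sounds become audible again, which is exactly the “every sound is important, but not at the same time” behavior described above.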
HDR audio has received a lot of attention since it was presented by DICE a few years ago, backed up by their astoundingly good-sounding games Battlefield: Bad Company and Battlefield 3. It left many with the impression that their system had solved the complex problem of mixing in an interactive context.