Guest Contribution by Abigail Sie
Jay Leno (L) with Wayne Pashley (R); photo by John Fasal
The Great Gatsby had an unusually long sound post-production process. How long did the sound mix take?
Just under three months; it was quite a long haul. A lot of that was really the logistics of rethinking and re-conforming the pre-dubs, and integrating new material with the edit as the picture changed and was updated throughout the final mix period.
Is that because of the visual effects?
A lot of it was due to VFX. Baz (Luhrmann) coined a phrase, which was part of the whole visual experience in 3D, called “Poetic Glue”. The “Poetic Glue” was a way of combining Fitzgerald’s poetry and making it modern. It was the way flashbacks were used, transferring between time zones. Plus there was a lot of written word on the screen, which was directly from the book. So with that came a changing visual experience. The sound effects and design tried to chase the pace and edit and visual design of the film as they were continually updating.
Photo by flickr user Carbon Arc, and used under Creative Commons license.
As dynamics month comes to a close, I thought it would be fun to talk about the evolution of film sound mediums and how they impact dynamics. Since the widespread introduction of sync sound to films in the early 20th century, the technologies involved have changed quite dramatically. From experiments with wax cylinders and phonographs to magnetic tape and Dolby Digital, each evolution in sound technology improved fidelity and dynamics, giving sound designers greater power in creating artificial worlds and engaging the audience.
Guest Contribution by: Louis-Xavier Buffoni – Software engineer at Audiokinetic
HDR in a Nutshell
HDR (“High Dynamic Range”) audio is a technique which draws its inspiration from the local adaptation method used in HDR imaging, which “attempts to maintain local contrast, while decreasing global contrast.”  In audio, this local/global dichotomy applies to time, and contrast refers to loudness instead of brightness. The technique consists of using an automatic mixing system that maps virtual world loudness to living room loudness. Clerwall’s phrase “every sound is important, but not at the same time”  summarizes the essence of its algorithm: the mapping is adaptive to what is playing in the virtual world, and can be represented by a “sliding window”, as is illustrated in the following figure.
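The sliding-window mapping described above can be sketched in a few lines of code. This is a hedged illustration of the general idea, not Audiokinetic's or DICE's actual implementation; the function name, window width, and culling behavior are assumptions for the sketch:

```python
def hdr_window(loudnesses_db, window_width_db=30.0):
    """Map virtual-world loudness (dB) to output gains.

    The window top follows the loudest currently-playing sound.
    Anything quieter than (top - width) falls below the window and
    is culled (gain 0.0); sounds inside the window keep their
    relative level, shifted so the window top sits at full scale.
    """
    if not loudnesses_db:
        return []
    top = max(loudnesses_db)          # window adapts to the loudest sound
    bottom = top - window_width_db
    gains = []
    for level in loudnesses_db:
        if level < bottom:
            gains.append(0.0)         # below the window: culled
        else:
            gains.append(10 ** ((level - top) / 20.0))  # dB -> linear gain
    return gains
```

With a gunshot at 120 dB playing, a 100 dB engine stays audible at a reduced relative gain, while 60 dB footsteps drop out of the window entirely; when the gunshot ends, the window slides back down and the quiet sounds return.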
HDR audio has received a lot of attention since it was presented by DICE a few years ago, backed up by their astoundingly good sounding games Battlefield: Bad Company and Battlefield 3 . It left in many minds the impression that their system had solved the complex problem of mixing in an interactive context.
Skip Lievsay needs almost no introduction: he is one of the most distinguished and prolific sound editors in the movie business. His many collaborations with the Coen Brothers, Martin Scorsese, Spike Lee and Jonathan Demme, to name just a few, are often considered classics. Lievsay has been nominated for four Academy Awards, two for No Country for Old Men and two for True Grit. He is a New Yorker but had been working in Los Angeles for several years. Recently, he moved back to NYC and talked with Designing Sound at a new Warner-sponsored sound facility in Manhattan.
DS: Skip, thanks for taking time out to do this.
SL: My pleasure.
DS: The theme is music and sound design. And I wanted to start out by asking you about your background. Are you a musician yourself, or do you have a musical background?
SL: I started playing in rock bands when I was around 10. And that carried through high school pretty much. And I have some instruments that I play every other day. I started out with the guitar. And in my band they already had a guitar player, so I switched to bass. And like all great sound people I still play bass. I couldn’t say I’m a musician. I’d say I’m a dabbler more than anything. It’s entertaining and I enjoy it, but I don’t… To say I was a musician would defraud people like Terence Blanchard and Miles Davis.
Guest Contribution by Douglas Murray
OK, here is the ugly truth for film post, or really any surround sound work… Most reverb plug-ins do not sound natural for applications using greater than 1 or 2 speakers. What you don’t want: a reverb that jumps to completely different speakers from the source. What you do want: a reverb that spreads out from the sound and helps localize it and define the space it’s in. While I haven’t tried every reverb or surround reverb plug-in for Pro Tools, it’s a very exceptional reverb that sounds localized around the position of the source signal without having to pan the reverb return’s output. The focus of this article is localization of reverb in post for sound effects, dialog, and other discrete sonic events. Localization is of less concern for more enveloping sounds such as ambiences or music, which seem to tolerate more general spatial spreading.
In this article I’ll describe:
- why it is desirable to have the early reflections and reverb bloom outward from the direction of the source signal, as in nature,
- how these principles must be exaggerated for the theatrical film sound environment,
- how stereo reverbs require panning to work in a multi-channel world,
- how most multi-channel reverb plug-ins largely disregard the direction of the source sound,
- how to simulate reverb localization with existing plug-ins in Pro Tools (more work and less accurate than it should be, today),
- and finally, a reverb plug-in that does what I want it to do. It seems so simple and obvious! Why is it so rare?
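Before getting into plug-ins, the core idea (reverb that blooms from the source's position rather than jumping to unrelated speakers) can be roughed out numerically. This is a hand-rolled sketch, not any plug-in's actual algorithm: the speaker layout, pan coordinates, and `spread` parameter are assumptions for illustration. It weights independent per-speaker reverb sends by each speaker's distance from the panned source:

```python
import math

# Four speaker-bound reverbs (L, R, Ls, Rs) on a quad layout.
# Pan is (x, y) in [-1, 1]^2: x = left/right, y = front/back.
SPEAKERS = {"L": (-1.0, 1.0), "R": (1.0, 1.0),
            "Ls": (-1.0, -1.0), "Rs": (1.0, -1.0)}

def reverb_sends(pan_x, pan_y, spread=1.5):
    """Return per-speaker reverb send gains, normalized to unit power.

    Speakers nearer the source get louder sends, so the reverb
    localizes around the source; larger `spread` distributes energy
    more evenly for a wider, more enveloping bloom.
    """
    weights = {}
    for name, (sx, sy) in SPEAKERS.items():
        d = math.hypot(pan_x - sx, pan_y - sy)       # distance to speaker
        weights[name] = 1.0 / (1.0 + d) ** (2.0 / spread)
    norm = math.sqrt(sum(w * w for w in weights.values()))
    return {name: w / norm for name, w in weights.items()}
```

A source panned hard front-left sends most of its reverb to the L speaker with progressively less behind it, while a centered source feeds all four reverbs equally; this is essentially what the manual pre-panning workaround described later in the article does by hand.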