When we say “space”, people generally think of one of two things: outer space, or a bounded area that something fits into. It’s a safe bet that most people in the sound community immediately think of the latter. So often we focus on the characteristics of a space…how far a sound carries, reflections and reverberation time, etc. Certainly that helps us define a space, but…for the most part…only on a technical level. What really defines a space is what occupies it. There’s no denying that production designers and location scouts in film, or level designers and artists in games, have a strong role in creating a space, but we in the sonic branch of our respective mediums have the unique ability to refine…or even redefine…the spaces they create. Sometimes, we’re even given the opportunity to create spaces where they cannot. What I want us to consider, in light of that, is how we approach the creation of that space.
There’s a difference between creating a “real” space and creating a “realistic” space. The former gives us the freedom to fully explore a character’s world, while the latter forces us to conform to the mundane world. I’m sure that last sentence gives you a strong impression as to which I prefer. For the purposes of this article, I use the term “real” to indicate a space that is coherent and cohesive within the story, and “realistic” to mean a recreation of the real world as we experience it every day.
The last decade has given us an increasingly rapid expansion of our ability to create realistic space. Convolution and multi-channel reverbs have become widespread and relatively inexpensive, cheaper and more compact field recorders make it easier than ever to capture more channels in the field, and the growing penetration of Dolby Atmos and Barco’s Auro 3D is affording us greater and greater spatial resolution in the playback environment. So why is it that the immediate, or at least initial, reaction is to use these tools to capture and recreate the real world as closely as possible?
As an example, at last week’s Immersive Sound Event, I finally got the chance to listen to an Auro 3D playback. I’m not going to spend any time here comparing it to Dolby Atmos, because the thing that struck me most about the presentation was a quick foray into the idea of recording natively in an 11-channel Auro configuration. I’m a big supporter of multi-channel field recording, but I should qualify that statement. I support capturing sounds in the field using microphones from multiple perspectives…a practice that not only gives you a bit more security that you’ll actually capture a usable recording of that sound, but can also increase workflow efficiency in post. It can be much faster to cut a sound to fit the necessary perspective if it was already recorded that way. Using a recording of a space, on the other hand, requires a group of microphones dedicated to capturing that space…and a specific context to make use of the resulting recording. In the past, I’ve experimented fairly heavily with multi-channel recordings that have a fixed phase relationship (e.g. recording in 5.1 using tools like a Holophone). While handy for ambiences and background needs, I find the practice terribly unwieldy for individual effects when it comes to audio post.
Which brings us back to the Auro 3D demo. A clip from the animated film Turbo was shown, and there’s a moment in the film where a jet passes overhead. The sound was recorded using an 11-channel tree to match the Auro 3D channel layout. In the context of the film, it did sound good; however, even Richard King mentioned the editing effort that went into making use of that particular recording in the scene…and how little of the raw recording saw direct use. I noticed something else when the raw recording was played back: there was some odd phasing happening in the front channels as the jet moved through the sound field. This could be caused by any number of things: reflections in the environment during the recording (the top level of a parking garage at an airport), slight misalignments of the microphones, the temporary nature of the playback system’s installation in the room, or something else I can’t think of at the moment. [Please note, I’m in no way attacking Auro 3D here…just noting what I heard.] The more channels you try to record in a defined phase relationship, the more chances you have to create unwanted phase issues…which in turn makes a sound that much harder to use in post. There’s a reason I always have a phase correlation meter up when I decode ambiences recorded in M/S. Capturing a sound in a space (mono) can yield wonderfully unique characteristics; capturing the space with it (multi-channel) can be more trouble than it’s worth. Again, in the context of the scene, the jet worked really well. I think they were able to make the effort that went into the recording pay off, but I can’t help but wonder if the same level of effort, or less, could have produced similar results in the end.
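For anyone who hasn’t worked with M/S material, the decode itself is just a sum/difference operation, and the correlation meter is a measure of how much the resulting left and right channels agree. Here’s a minimal sketch in Python with NumPy…illustrative only, not any particular plugin’s implementation, and a real meter runs windowed over short blocks rather than over a whole file:

```python
import numpy as np

def ms_decode(mid, side):
    """Sum/difference decode of a Mid/Side pair into Left/Right."""
    left = mid + side
    right = mid - side
    return left, right

def phase_correlation(left, right):
    """Correlation coefficient as shown on a phase meter:
    +1 = identical channels (fully mono-compatible),
     0 = uncorrelated,
    -1 = fully out of phase (cancels when summed to mono)."""
    denom = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
    if denom == 0.0:
        return 0.0  # silence on one or both channels
    return float(np.sum(left * right) / denom)

# A decoded pair where the side signal overwhelms the mid reads
# strongly negative...exactly the warning sign the meter is for.
mid = 0.1 * np.random.randn(48000)
side = np.random.randn(48000)
left, right = ms_decode(mid, side)
print(phase_correlation(left, right))  # prints roughly -0.98
```

The same check scales up, which is part of the point above: the more channels you lock into a fixed phase relationship, the more channel pairs there are whose correlation can drift somewhere you don’t want it.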
The obsession with recreating the real world confounds me sometimes. Workflow issues aside, being selective about what fills the space allows for a much more creative approach in supporting the story. One example that people have probably heard me harp on, ad nauseam, is Cast Away (2000) by Robert Zemeckis. His marching orders to Randy Thom were that there should be no sounds of accompanying life on the island with Hanks’ character. This is obviously not a realistic approach: no insects, no birds, etc. Another that I’ve mentioned over and over is Coll Anderson’s work on Martha Marcy May Marlene, where we frequently hear sounds that bridge the transition from one time period to another; we’re experiencing the film world from within the title character’s psyche. In both cases, the space still feels real, but it is a subjective portrait of the characters’ experiences within their worlds. This is where our efforts should lie.
Let’s make sure that as the capabilities of the technology at our disposal expand, we’re not being slaves to the realistic. Certainly there are moments where we’ll need and want to replicate reality, but that should be within the context of creating a space that is real for the characters. Use those convolution reverbs in situations where they don’t match the space (a sketch of the idea follows below), play with placement and panning in a way that reinforces the character’s perspective, and remember that (most of the time) we have complete control over what sounds are present in the environment. Just because we see that dog barking doesn’t mean we have to hear it. Where we have license to be subjective, we should. Because by creating a space that is real, we can draw the audience into the story in a way that reality would never allow.
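To illustrate that first suggestion: under the hood, convolution reverb is just the dry signal convolved with a recorded impulse response, and nothing in the math cares whether that IR matches the picture. A rough sketch, assuming mono NumPy arrays and SciPy…the wet/dry blend and crude level matching here are my own simplifications, not how any specific plugin does it:

```python
import numpy as np
from scipy.signal import fftconvolve

def convolution_reverb(dry, ir, wet_mix=0.5):
    """Convolve a dry signal with an impulse response and blend.

    The IR can come from *any* space...using one that doesn't
    match what's on screen is a creative choice, not an error."""
    wet = fftconvolve(dry, ir)[: len(dry)]  # trim the excess tail
    peak = np.max(np.abs(wet))
    if peak > 0:
        wet *= np.max(np.abs(dry)) / peak  # crude level matching
    return (1.0 - wet_mix) * dry + wet_mix * wet
```

Feed it a dry dialogue line and an IR captured somewhere the scene never visits, and you’ve redefined the space without touching the picture.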
hzandbits says
Good points about chasing reality at all costs. Sound for film (for instance) is so much more interesting when realism is used sparingly, IMO. As someone who likes to go out recording reality now and then, I must say that much of it is quite boring (as sound designers and foley artists surely know). Few films are so hyper-realistic, in general, that they don’t want to touch us somehow. Art is about emotions. Any film that has an artistic or emotional message (even documentaries) should have a soundtrack which supports that. Realism is not the whole answer there.
Rob says
Bottom line… 99% of people aren’t concerned with this sort of accuracy. 99% of people have never been in a recording studio or even heard the term “convolution reverb,” so to think they will even notice this is asinine. To go through the trouble of making it sound real for the 0.1% of audiophiles who CAN hear it is even worse, because we all know audiophiles are not going to be going to a local theater to pay money to watch this movie and hear it through that hi-fi system. So this work is done for 1) the 12 people who have 11.1 surround sound systems in their house made up of legendary studio monitors, in rooms specially flattened by a collaborative effort between themselves and Abbey Road while they interned at Auralex, who don’t 2) download it for free from a torrent site, because no one with that sort of room is going to do anything to harm the industry.
It’s all a little ridiculous, IMO…