Separating Your Signal from the Noise: The Intersection Between Bit Depth and Branding
As a sound designer, I find that many different thoughts come to mind when considering a topic such as noise: everything from using generated noise, like white noise, in the design of sound effects, to a technical discussion of different dither algorithms. But as I kept thinking about noise, one slightly different reading of the word kept coming back to mind. Like something that refuses to be attenuated, this question kept creeping back into the forefront of my thoughts:
How does a sound designer get their “signal” heard through the ever-increasing amount of “noise” that surrounds us (and our intended audience)?
There are many distractions that can cloud our sonic message, starting with the growing number of media streams and content (most with their own sounds, and often with multiple streams playing back simultaneously), coupled with the loudness of the ambient noise of the busy world around us. This is only heightened by the ability to experience media content in almost any environment imaginable, even while flying (just don't turn on your mobile device during taxi, take-off, or landing). And then there is one more obstacle to consider: with the influx of produced and distributed media content, much of it free (a double-edged byproduct of the lowered "cost of entry" into the audio arts, as DAWs have grown in depth and power while becoming simpler and cheaper to use), how do we get our sounds to the ears of our intended audience at all?
These questions led me to think about a few audio professionals, teams, applications, and pieces of content that I believe point toward a path for getting (or keeping) your signal heard over all of this noise. As loud and clear as possible.
Bit Depth and “The Devil’s in the Details”
A high signal-to-noise ratio, or wide dynamic range, helps us suppress the interfering frequencies that, if heard, will distract our audience from the audio content that is intended to be their focus. More directly, bit depth determines how quiet a signal can be before it sinks into the noise floor (simplified a bit to make a point; the science is more involved). For instance, a 16-bit recording has a theoretical dynamic range of about 96dB (a noise floor at -96dBFS, relative to 0dBFS), while a 24-bit recording extends that to about 144dB (a noise floor at -144dBFS).

This is an impressive difference. Traditionally, we calibrate our monitors to around 85dB for a film audio mix, and as long as the movie theater projectionist follows the instructions regarding audio playback settings, the movie's audio should fall around that range (though this can often vary by +/- 5dB or more depending on the theater and/or projectionist). Sound levels become dangerous right around this point as well, so for extended playback I cannot imagine we will ever push past 90dB and still expect our audience to have a pleasurable (and safe) listening experience. This means that 16-bit audio can presumably play back without audible interference in almost any scenario. When we move to 24-bit audio and its roughly 144dB of range, the extra headroom may initially seem unnecessary, but what it provides is better clarity in the reproduction of our quietest sounds; it retains the nuance and subtlety within our content (such as the natural decay of a sound effect that would be lost below the noise floor at a lower bit depth).
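As a back-of-the-envelope check, the dynamic-range figures above follow from the common rule of thumb of roughly 6dB per bit. This is a sketch, not a full treatment; the more precise quantization-noise formula for a full-scale sine is 6.02N + 1.76 dB:

```python
# Theoretical dynamic range of linear PCM audio, using the
# simple ~6.02 dB-per-bit rule of thumb (the exact figure for a
# full-scale sine is 6.02 * N + 1.76 dB).

def dynamic_range_db(bit_depth: int) -> float:
    """Approximate dynamic range in dB for a given PCM bit depth."""
    return 6.02 * bit_depth

for bits in (16, 24):
    print(f"{bits}-bit: ~{dynamic_range_db(bits):.0f} dB of dynamic range")
```

Running this reproduces the ~96dB and ~144dB figures quoted above for 16-bit and 24-bit audio.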
The saying "the devil's in the details" holds very true in sound design. Subtlety and nuance are integral to drawing the deepest emotional impact from our audience. For instance, during our analog days, just placing any sound to picture required quite a bit of technical skill (and access to equipment); now this is relatively easy to accomplish for anyone, with or without a background in the art and science of audio production. This means that, as professionals and artists, we must continue to produce more nuanced and emotionally engaging content, or our intended sonic message will surely get lost among the "noise".
One way to add effective, yet nuanced sound to a project is by examining and taking into account the deeper meaning that a scene (or project) is attempting to convey. During a recent Tonebenders podcast, Coll Anderson (credits) spoke about understanding the character motivations, personalities, and emotional context of each moment in a film (among other aspects) in order to design the most emotionally impactful sound(s) for the film as a whole. As already mentioned, virtually anyone can place sounds in a timeline and make a gunshot sound play back when a gun is fired on camera (for example), but a true sound designer must look deeper, and aim to covertly and subtly direct the emotions of the audience. By knowing the context and "raison d'être" of a scene, a sound designer can create a soundscape that doesn't just place sounds where they need to be in a film, but has a much deeper impact and can dramatically increase the effectiveness of a scene, and of a film (or other form of multimedia) as a whole. This is the type of thinking that leads to subtle, yet extremely effective and emotionally engaging sound design that will give your signal deeper meaning, and will keep your work at the forefront of the minds of the director and other creative production staff when the next project comes along.
Fully exploring the context of a scene, and the motivations and personalities of each character, with the director (and other integral creatives on a project) before designing sounds for their actions and environments, as Coll Anderson does, may only alter our sound design thought process slightly. But in the end, it is those small sonic details, those nuanced sound elements, that subconsciously have the greatest emotional impact on our audience. As sound designers working in visual media formats (such as film, video games, etc.), our responsibility is to give that final emotional push that only audio can provide; in essence, to finish the job that the visual elements started and fulfill their emotional potential.
Though understanding a character's personality (for instance) is vital to producing deep, emotionally compelling sound(s) for that character's actions, that knowledge of the project's story must be coupled with at least a fundamental understanding of psychoacoustics, the perceptual and physiological impact sound has on a listener, in order to produce truly effective sound design. Sound and its properties, such as tempo, timbre, articulation, and dynamics, all have a substantial impact on a listener's emotional state, and we should be aware of how manipulating these properties will affect our audience. This does not mean that to be an effective sound designer, a true sculptor of sonic experiences with emotional depth, we should consciously and constantly think about the timbre or articulation of a sound as we work; that would be counterproductive to the creative process. But by studying how sound affects us and allowing that knowledge to seep into our design methodology, along with (on a project-by-project basis) exploring the characters' motivations, scene context, and so on, we will innately feel and react during our design process to what a specific project, or moment in a project, needs audio-wise in order to be most effective.
Denoise and Finding Clarity
As mentioned earlier, a high signal-to-noise ratio helps us remove the interfering frequencies (the noise) that distract our audience from the audio content we want them to hear. Many times, if we must work with a noisy audio file in a project (such as dialogue we cannot redo through ADR for one reason or another), we will turn to denoising software that allows us to sample a selection of noise, have the application "learn" the noise frequencies, and then use that frequency "map", our own parameters, and the application's equalization algorithm(s) to audibly reduce those unwanted frequencies. This is the goal of denoising software such as Waves' Z-Noise or iZotope's RX (see Varun Nair's in-depth review of RX 2): to separate the signal from the noise, then suppress the noise while leaving the signal we want heard as intact as possible. With both Z-Noise and RX, the captured noise is used as a threshold: frequencies louder than that threshold pass through virtually unaffected, though if your signal is too similar to the noise (in amplitude as well as frequency content), it will be attenuated along with the noise.
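To make the threshold idea concrete, here is a deliberately simplified, time-domain sketch of that "learn the noise, then gate against it" behavior. Real tools like Z-Noise and RX work per frequency band with far more sophisticated algorithms; the function and parameter names below are purely illustrative:

```python
import math

def rms(frame):
    """Root-mean-square level of a list of samples."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def noise_gate(signal, noise_sample, frame_len=64, reduction=0.1):
    """Attenuate frames whose level falls below the learned noise level.

    A drastically simplified, broadband stand-in for what denoisers do
    per frequency band: 'learn' a level from a noise-only selection,
    then suppress anything that does not clearly rise above it.
    """
    threshold = rms(noise_sample) * 2.0  # pass only clearly louder material
    out = []
    for i in range(0, len(signal), frame_len):
        frame = signal[i:i + frame_len]
        gain = 1.0 if rms(frame) > threshold else reduction
        out.extend(s * gain for s in frame)
    return out
```

Note how this sketch exhibits the same failure mode described above: material whose level sits near the learned noise level gets attenuated right along with the noise.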
As sound designers looking to get our audio heard, we, like denoising applications, must find ways to separate our signal from all of the noise around us, both metaphorically and in a more literal, physical sense. Metaphorically, if our creative audio output or design aesthetic is too similar to what people are used to hearing from other sound designers, the sounds we provide will blend in with, or become part of, the collective "noise" that people are increasingly trained to ignore. It is important to learn from other sound designers and their techniques, but it is even more important to develop our own sonic voice (or style), or the value of our contributions to this art form will always be extremely limited. In a literal sense, these days we may be called on to design audio for playback through mobile devices, casino slot machines, televisions and home stereos, theaters, theme parks, and even airplanes (to name just a few). Each environment and delivery system inherently has its own noise profile, and we should adapt our thinking accordingly whenever possible.
jetBlue hired the creative agency eyeball to produce their in-flight audio branding (including music and sound design). According to eyeball's brief case study of this project on their website, their first step was to learn about the character and context by analyzing jetBlue's current customer base:
“To create the audio identity for jetBlue’s entertainment system, we first evaluated the existing customer experience and fashioned a strategic creative brief.”
Before attempting a strategic creative brief, a company's motivations, personality, and, as mentioned above, existing customer base must be assessed in order to determine the appropriate sonic direction for the brand (or any project). The theme here is the same as with Coll Anderson, and the message is that to design effective and memorable audio experiences, a sound designer must dig deeper and truly get to know their client, the specific project's motivation(s), and its intended audience; only then can we create truly targeted and engaging work.
This, though, was not the most interesting aspect of eyeball's jetBlue campaign. In the same case study quoted earlier, they go on to mention that they took into account frequency masking caused by in-air cabin noise, and tuned their sound design to that specific playback environment:
“One of our more interesting findings was that different frequencies completely disappear during takeoff and flight time due to cabin noise. To ensure the optimum customer experience, we developed custom EQ settings and unique sonic criteria for all audio assets.” – eyeball website
Here, in a literal example, the audio team at eyeball set out to separate their signal from the inherent noise of the airplane cabin so that their sonic message would be clearly heard. It is this sort of research and forward thinking that figuratively and literally sets a company or a sound designer apart from the "noise".
Another Perspective: Embracing the Noise
Advancements in mobile technology have opened doors for sound designers and mobile application developers to experiment with content that incorporates the sounds of the world around us into the user's listening experience. Since 2008, Reality Jockey, the developers of RjDj and Inception the App (among others), have continued to explore how to produce and deliver content that embraces the location of the user (or listener) and the inherent, varying noise that comes with each listener's unique surroundings. In the now-retired RjDj application, they experimented with numerous ways of using the sensors within the iPhone (and its earphones) to augment the real-time audio playback each time a listener engaged with the application. One of the sensors they focused on was the microphone attached to the iPhone's earphones. Using Pure Data (Pd) as their audio engine, and by having their audience wear earphones with an attached microphone, they were able to capture real-time audio samples from the listener's environment, pitch-shift them to be in key and harmonious with the rest of the sonic content, and then add those samples into the audio mix (along with other methods), creating a unique sonic landscape with each listen.
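The "pitch-shift it into key" step can be sketched in a few lines. RjDj did this inside Pd, and I have no knowledge of their actual patch; the Python below is a hypothetical illustration of the underlying idea, assuming a C-major scale and A4 = 440Hz, that computes the playback-rate ratio needed to snap a detected frequency onto the nearest in-scale note:

```python
import math

A4 = 440.0
C_MAJOR = {0, 2, 4, 5, 7, 9, 11}  # pitch classes of the C-major scale (C = 0)

def snap_to_key(freq_hz, scale=C_MAJOR):
    """Return the pitch-shift ratio that moves freq_hz onto the
    nearest note of the given scale.

    A sketch of the 'tune the environment to the music' idea;
    multiplying playback rate by this ratio retunes the sample.
    """
    midi = 69 + 12 * math.log2(freq_hz / A4)  # continuous MIDI note number
    # search nearby semitones for the closest note that is in the scale
    best = min(
        (n for n in range(int(midi) - 6, int(midi) + 7) if n % 12 in scale),
        key=lambda n: abs(n - midi),
    )
    target_hz = A4 * 2 ** ((best - 69) / 12)
    return target_hz / freq_hz
```

For example, a captured sound whose fundamental is already an A (440Hz) comes back with a ratio of 1.0 (no shift), while an out-of-key fundamental returns the small up or down shift needed to land on the scale.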
This concept of adaptive/reactive sonic landscapes within a mobile delivery system (the iPhone with RjDj) was of interest to several performing artists and worked very well as a sort of "proof of concept", but the real "tipping point" came (in my opinion) when it caught the attention of Christopher Nolan and Hans Zimmer (and I am sure others on the production team for the film Inception) and they started conceptualizing Inception the App. The dream-within-a-dream context that the film embodied was a perfect vehicle for a mobile application that would audibly simulate these dreamlike states and augment the sonic landscape according to your actions, your environment/location, and the sounds that came from them. Where most sonic artists would see the listener's environmental sounds as interfering with their intended message, Reality Jockey embraced these disorganized, random sounds and incorporated them into their (and their users') experience, now as organized, rather than random, additions to the listener's experience. With over 4 million downloads to date, I believe their signal became heard well above the "noise". Reality Jockey continues to grow and evolve this concept with each new application they release; up next for them is a "jogging application" developed in conjunction with Imogen Heap that reactively adjusts/modifies/manipulates the audio content according to each user's specific parameters (environment, motion, etc.).
Oddly, this reminds me of the adaptive noise reduction option added in RX 2 Advanced, an algorithm able to adjust its parameters according to changes in the noise profile within an audio track/file. As this technology improves, adaptive noise reduction could be a huge time saver in audio post production with tight deadlines and a lot of noise (such as reality TV). Both iZotope and Reality Jockey, in their own ways, are showing a glimpse of a future in which real-time audio manipulation brings more intuitive audio applications and systems that either attenuate or incorporate "noise" (respectively) in an adaptive, reactive, real-time manner, with sonic results that are unimaginable even by today's standards.
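The "adaptive" part can be sketched simply. This is a toy analogue, not iZotope's actual algorithm (which I have no inside knowledge of): instead of learning one fixed noise profile up front, the estimate below drifts to follow frames that look noise-like, while ignoring loud, signal-like frames, so the noise floor can be re-learned as it changes across a track:

```python
def adaptive_floor(levels, alpha=0.05):
    """Track a slowly moving noise-floor estimate across frame levels.

    levels: per-frame RMS levels of a track.
    alpha:  smoothing factor; higher adapts faster.
    The floor only updates on frames near the current estimate
    (treated as noise-like), so loud signal frames do not drag it up.
    """
    floor = levels[0]  # assume the track opens on noise, as denoisers often do
    estimates = []
    for lvl in levels:
        if lvl < floor * 2.0:  # noise-like frame: adapt toward it
            floor = (1 - alpha) * floor + alpha * lvl
        estimates.append(floor)
    return estimates
```

Fed a track whose background noise slowly rises, the estimate follows it; fed a sudden loud dialogue frame, the estimate holds steady instead of mistaking signal for noise.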
This is an exciting time to be a sound designer, and more broadly, a media producer of any format: software is cheaper and easier to learn and use, knowledge is widely available across the internet, there are more audio delivery systems to produce for than ever before, and some audio-related technological limitation seems to be surpassed every day. In turn, it can also be an extremely daunting time, as there is more content, more competition, more distraction (media-related and otherwise), and really, just more of everything (except the need for assistant engineers, which is a shame, because that is where many of us learned this trade in a more holistic fashion). Regardless, the information and resources are out there; from beginner to seasoned professional, there is constantly something new to research or a new technique to explore and experiment with, and the burden is on each of us to continue honing and evolving our craft through learning, sharing knowledge, and pushing our collective artistic boundaries. From the way Coll Anderson designs audio with deeper impact by seeking the true "voice" of a film through context and character, to the decision by eyeball's audio creatives to EQ and acoustically adjust their sonic message so that it would be heard as clearly and effectively as possible on board an aircraft, the lessons of how to cut through noise can come from all around us. Sound designers should be looking for these "signals" that pop up from within the "noise", while remaining aware of, and actively listening to, the noise that surrounds us and our listeners as well.
If we keep our ears open and truly listen to the world around us (and the noise it produces), and also listen to each other and collectively look for new ways and methods to get our sonic message heard, we will keep finding ways to rise above, or even embrace all of this “noise”, and not allow our produced content to get lost within it.