This article is going to be a little less template, and a little more workflow. We all have our favorite plug-ins. We probably also all have plug-ins we’d love to use, but run into limitations that keep us from pulling them out of the tool box. For instance, I have a couple of plug-ins from Waves that can add some really cool sonic character when I’m designing a sound, but also introduce more noise than I like when I start pushing them too hard. The problem is, I like pushing those plug-ins hard to get that character. Even when not pushing them too hard, I can still hear noise added by the algorithm. I’m not a fan of unwanted noise. So, I recently started experimenting with an old analog technique…
I’ve been a bit remiss in my duty lately, and there are several people who need to be thanked for their contributions on the site. To begin, let me thank two gentlemen who contributed in May:
…And for June:
And finally, the new faces around here:
- John Black
- Cormac Donnelly
- Sam Ejnes
- Joshua Kaplan
- Doron Reizes
- Marie Tueje (who is just joining the site now…Welcome!)
- Everyone else from the community who stepped forward, willing to volunteer their time to the site. We truly appreciate the amazing response the call for help received.
Thank you, all!
Photo by flickr user Carbon Arc, and used under Creative Commons license.
As dynamics month comes to a close, I thought it would be fun to talk about the evolution of film sound media and how they impact dynamics. Since the widespread introduction of sync sound to films in the early 20th century, the technologies involved have changed quite dramatically. From experiments with wax cylinders and phonographs to magnetic tape and Dolby Digital, each evolution in sound technology improved fidelity and dynamics, giving sound designers greater power in creating artificial worlds and engaging the audience.
Guest Contribution by: Louis-Xavier Buffoni – Software engineer at Audiokinetic
HDR in a Nutshell
HDR (“High Dynamic Range”) audio is a technique which draws its inspiration from the local adaptation method used in HDR imaging, which “attempts to maintain local contrast, while decreasing global contrast.” In audio, this local/global dichotomy applies to time, and contrast refers to loudness instead of brightness. The technique consists of using an automatic mixing system that maps virtual world loudness to living room loudness. Clerwall’s phrase “every sound is important, but not at the same time” summarizes the essence of its algorithm: the mapping is adaptive to what is playing in the virtual world, and can be represented by a “sliding window”, as is illustrated in the following figure.
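The sliding-window idea above can be sketched in a few lines of code. This is a minimal illustration, not Audiokinetic's actual implementation: the function name, the window size, and the dB values are all hypothetical, and a real engine would add release times, priorities, and smoothing. The core move is the same, though: the window's top snaps to the loudest currently playing sound, everything inside the window is shifted down relative to that top, and sounds that fall below the window's bottom are culled.

```python
# Hypothetical sketch of HDR-style sliding-window mixing.
# All names and dB values are illustrative, not engine API.

def hdr_mix(active_sounds, window_size_db=50.0):
    """Map virtual-world loudness to playback gains.

    active_sounds: dict of sound name -> virtual-world loudness in dB.
    Returns: dict of sound name -> playback gain in dB. Sounds whose
    loudness falls below the window bottom are culled (omitted).
    """
    if not active_sounds:
        return {}
    # The window "slides" up so its top sits at the loudest active sound.
    window_top = max(active_sounds.values())
    window_bottom = window_top - window_size_db
    gains = {}
    for name, loudness in active_sounds.items():
        if loudness < window_bottom:
            continue  # inaudible relative to the loudest sound: culled
        # Shift everything so the loudest sound plays at 0 dB (full scale).
        gains[name] = loudness - window_top
    return gains

# An explosion pushes the window up, ducking dialogue and culling footsteps:
# footsteps at 60 dB fall below the window bottom (120 - 50 = 70 dB).
mix = hdr_mix({"explosion": 120.0, "dialogue": 90.0, "footsteps": 60.0})
```

When the explosion ends and dialogue becomes the loudest sound, the window slides back down and the footsteps become audible again, which is exactly the adaptive, time-local behavior the article describes.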
HDR audio has received a lot of attention since it was presented by DICE a few years ago, backed up by their astoundingly good-sounding games Battlefield: Bad Company and Battlefield 3. It left in many minds the impression that their system had solved the complex problem of mixing in an interactive context.
Guest Contribution by Matthew Marteinsson
At first listen, you can tell Mark of the Ninja is a game with a wide dynamic range. The game is all about being hidden and safe, or out in the open with the danger of being seen. A very binary visual contrast between being hidden and visible informs players clearly which state they are in. When hidden, players have time to observe what guards are doing and plan their approach. When they’re spotted, it’s a hurried dash to get back to a hiding spot or take out the threat. So how did the audio support such big contrasts in gameplay?