
Posted by on Jul 19, 2013 | 16 comments

Unlimited Dialog “AIRFILL” to Fill your Every Need

Guest contribution by Douglas Murray

I was inspired to finish this write-up after reading the feature list of the new Zynaptiq UNFILTER plugin. Their web site says:

You can also apply the measured filter response from one recording to another – placing the two in the same acoustic “world”. Or you can create roomtone to fill editing gaps, by applying a measured filter response to noise.

Then I read Shaun Farley’s tweet on the subject and saw that it was quickly followed up by Mike Thornton’s Pro Tools Expert YouTube video: Using Zynaptiq’s UNFILTER Plug-in To Create Room Tone From Pink Noise. I am looking forward to trying UNFILTER for this and its many other promising features. Meanwhile, there is another way to “create roomtone to fill editing gaps” which only requires a convolution reverb plug-in many of us already own.


Posted by on Jan 31, 2013 | 4 comments

The Tonebenders Share Plugin Secrets


To round out Plugin Use/Abuse Month here on Designing Sound, here are some neat tips and tricks from the three hosts of the Tonebenders Podcast.

Timothy Muirhead:

Convolution reverbs are amazing tools to have in your arsenal of plugins. Unlike conventional digital reverbs, which simulate theoretical spaces with arbitrary parameters, convolution reverbs use signal processing to reproduce the reverberation of a real, existing physical space. In order to make this magic happen you need both the plugin and an impulse response generated and recorded in the field. Without an impulse response (IR) the plugin alone is not very useful. Luckily, most convolution reverb plugins come pre-loaded with a library of IRs that give you a wide range of reverb settings you can put to use.

The IRs used to create those plugin presets are essentially just audio files, created out on location by playing back a sine wave sweep or by making a loud, high-transient noise that covers a large swath of the frequency spectrum (firing a starter’s pistol, for example) and recording the result. The convolution reverb calculates the relationship between the source sound and the resulting reverb created in the space. It can then apply that mathematical relationship to any other sound you patch through the ’verb. The results are very realistic and natural-sounding reverb treatments. In post-production, a convolution reverb can be a life-saver for matching ADR with location audio and making sound effects feel like they inhabit the same space as the actors. Yet for all the help they offer in making things sound realistic, convolution reverbs can also be employed to make things sound very other-worldly.
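The “mathematical relationship” being applied here is ordinary discrete convolution: every sample of the dry signal excites the entire impulse response, and the results sum. A minimal sketch in Python (plain lists and toy numbers, no audio I/O; a real plugin does this with FFT-based fast convolution):

```python
def convolve(dry, ir):
    """Apply an impulse response to a dry signal by direct convolution.
    Output length is len(dry) + len(ir) - 1, which is why reverb adds a tail."""
    out = [0.0] * (len(dry) + len(ir) - 1)
    for i, d in enumerate(dry):
        for j, r in enumerate(ir):
            out[i + j] += d * r  # each dry sample triggers a scaled copy of the IR
    return out

# Toy example: a single click (unit impulse) through a hypothetical decaying "room"
dry = [1.0, 0.0, 0.0]
ir = [1.0, 0.5, 0.25]  # made-up IR: direct sound plus two fading echoes
wet = convolve(dry, ir)
```

Feeding a unit impulse through the process reproduces the IR exactly, which is precisely why a starter’s pistol (a real-world approximation of an impulse) lets you record a room’s response directly.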

Every so often I find myself looking for a sound that I cannot quite find in my library, or even begin to figure out how to tackle. This dilemma is the sound designer’s version of writer’s block: I’ll be banging my head against the wall as my progress grinds to a halt. One strategy I resort to in this situation is to pull up a convolution reverb plugin and get creative with the impulse responses I load up. There are some interesting IRs available; the precise acoustics of the world’s most revered performance spaces are there in your preset menu, alongside IRs recorded in caves, tunnels, and plastic garbage cans. Any space where an IR can be recorded can then be emulated by a convolution reverb.

There are obviously very scientific and rigid rules for those who create these beautiful and realistic reverbs, but the truth is that, since the IR is essentially just a sound file, anyone can go out and record their own IRs and load them into a convolution reverb plugin. And if you’re in search of a new sound altogether, you can throw all the rules right out the window and basically hack the plugin, because any sound can replace an IR. The weirder the better.

Now we are in experimental territory, and that elusive sound effect is out there somewhere. The more out-there sounds you try, the wider the range of results you’ll get. I’ve come up with a few hints as to what kinds of sounds yield interesting results. Look for things with movement to them, like pitch shift or volume modulation. You also want to stick to short sounds; longer ones can bog down the processors and just generate muddied-up sounds … but of course disregard that advice if you are looking for muddy sounds. Really, anything could work: it’s all about trying new and different ideas.

There are a few practical things you should do in order to get the best results. First up, before you import a sound into your convolution reverb, you will want to prep the sound file a bit. I always bring the sound file into my DAW and edit out any silence at the head and tail. This is important because the plugin’s algorithm will not recognize silence as silence: the computer will apply a bunch of complicated math to the nothingness and produce zero effect. No point in wasting CPU power like this, so top-and-tail the file extremely tightly. Apply super-short fades to the file ends to eliminate any clicks. Next, you want to make the sound as loud as possible without clipping. (Actually you can get some crazy sounds from IRs that are maxed to clipping, but normally that’s not what you’re going for.) You want as much data in the file as possible, because using up all the available headroom gives the algorithm more information to work with. Quiet sounds just don’t create as much of an effect. The easiest way to do this is to normalize each file. I usually normalize to -0.05 dB. (Normalize each file individually rather than in a batch; otherwise you will be normalizing everything relative to the loudest of all the highlighted sounds, instead of each sound on its own.)
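The prep recipe above (top-and-tail the silence, apply super-short fades, peak-normalize to just under full scale) can be sketched as a small Python routine. This is an illustration only, not any plugin’s actual import code; the threshold and fade length are arbitrary choices, and samples are assumed to be floats in the -1.0 to 1.0 range:

```python
def prep_ir(samples, silence_thresh=1e-4, fade_len=32, target_db=-0.05):
    """Prep a mono sound (list of floats, -1..1) for use as an IR:
    trim head/tail silence, apply short linear fades, normalize to target_db dBFS."""
    # 1. Top and tail: keep only the span where the signal exceeds the threshold
    idx = [i for i, s in enumerate(samples) if abs(s) > silence_thresh]
    if not idx:
        return []  # all silence: nothing for the convolution to work with
    trimmed = samples[idx[0]: idx[-1] + 1]
    # 2. Super-short linear fades at head and tail to eliminate clicks
    n = min(fade_len, len(trimmed) // 2)
    for i in range(n):
        g = i / n
        trimmed[i] *= g
        trimmed[-1 - i] *= g
    # 3. Peak-normalize to just under full scale (default -0.05 dBFS)
    peak = max(abs(s) for s in trimmed)
    gain = 10 ** (target_db / 20) / peak
    return [s * gain for s in trimmed]
```

Note that each file is normalized on its own here, which matches the advice against batch normalization: the gain is computed from that file’s own peak, not the loudest file in a selection.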

Once you have the sounds prepped, you can import them into the convolution reverb and trick the plugin into believing each sound is an IR. I won’t go into the specifics here, as each plugin handles this differently and with different levels of complexity, but for some it’s as simple as clicking ‘import IR’ in the plugin’s library interface. Once you have a bunch of sounds imported as IRs, it’s time to experiment and play around. Here it’s important to consider both sides of the equation: the sound you use as an IR is as important as the sound to which you apply the effect. Using a rattling chain as an IR can create a great effect on a drum loop, but it just makes a sloppy mess out of a repetitive sci-fi alarm. So it’s a question of fiddling about with the combination of the IRs you create and the sounds you apply them to.

Inspiration hides in unexpected places. I found a telescoping back scratcher in my Christmas stocking and quickly realized that when it was fully extended and whipped around, it made a great swish sound cutting through the air. So I set up a mic in the recording booth and recorded swishes until the little back scratcher broke in half. Not what Santa had in mind, but I was happy with the results. The following sounds are a few of those swishes as they were originally recorded, followed by the exact same swishes with the ‘reverb’ from a variety of unconventional IRs.

Take a listen:

To be honest, some of those were not much to write home about, but a bunch of them will be entering my effects library for sure.

Where this trick has been particularly useful to me is in the design of slow motion or time lapse sequences. Going back to my original dilemma, where I’ve hit the wall searching for the right original sound, I’ve deployed some abstract thinking and taken a sound that relates to what is on screen and used that as an impulse response. What I’ve ended up with is a sound design element that works particularly well in sequences with skewed screen time. Using this same logic, I’ve sampled animal sounds or human screams as IRs applied to machines or vehicles, creating some original anthropomorphic effects.

Using sound files as IRs is a bit like spinning a roulette wheel as you never know which pairing of IR and source sound will work well to create that perfect new sound you’re looking for. You simply have to spin the wheel and see what you get. With practice, you’ll start getting a bead on things that will work best, but this is a technique that has been most productive for me when I didn’t know what I was looking for in the first place.


Posted by on Dec 6, 2012 | 5 comments

Creative Uses of Reverb

Guest Contribution by Ian Palmer

There are a lot of technical articles on Designing Sound, so I thought I’d try to balance that with this month’s theme of reverb. We all know that reverb is used to create realism. Adding the correct or appropriate reverb to ADR will instantly make the dialogue fit better into a scene and remove the artifice of the replacement. However, reverb can also be used creatively, in a wide variety of ways. We must remember that what we do with sound always serves the narrative. Here is a small collection of examples, in no particular order.

Emotional Effects

I’ll begin with a well-known example from Spielberg’s film Schindler’s List (1993). After an argument over a building’s foundations, the camp commander Goeth orders the execution of a Jewish engineer. A guard pulls out his pistol and shoots the woman in the head, instantly killing her. We hear the initial bang of the gunshot very clearly; we are also fairly close to the incident. Immediately after, we hear the gunshot bounce around the hills that surround the camp. Obviously, guns are loud, but would a small pistol really create so much echo? I would argue that the echo is at least enhanced, and deliberately exaggerated. The reason is that this is a very shocking and emotional moment, and the echo exaggerates the shock the audience will feel. This is a heightened reality in which the sound focuses us on a single element of the event. This link will play a clip of that scene; skip to 2:50 for the execution.


Posted by on Sep 28, 2010 | 1 comment

Wwise Gets Better: New Convolution Reverb, Effects, Sidechain, Integration Demo, 46 Tutorials on YouTube


Audiokinetic has released version 2010.2 of Wwise, one of the most popular audio middleware solutions for audio implementation in video games. The new version includes a lot of new things, such as side-chain support, new effects, a new convolution reverb, a new integration demo, xWMA support, etc. Here’s the official information:

  • Side-Chaining – Side-chaining can now easily be achieved in Wwise by using the Wwise Meter effect and RTPC curves. The new “Wwise Meter” effect monitors the input signal of a bus and sends the calculated levels (using smoothing parameters such as Attack, Release and Hold) to a game parameter. A Volume RTPC curve linked to this game parameter can then be created on another bus, resulting in that bus’s volume being affected by the level of the bus with the Meter effect inserted.
  • Wwise Flanger – The full-featured Wwise Flanger effect allows sound designers to create a large scope of effects, including vibrato, chorus, comb filtering and, of course, flanging.
  • Wwise Guitar Distortion – Four distortion models are available (overdrive, metal, fuzz, and clip) along with pre and post filters to create mock-ups for a wide range of classic guitar distortions.
  • Wwise Tremolo – The Wwise Tremolo creates trembling effects by modulating the amplitude of the signal up and down over time.
  • Wwise Meter – The Wwise Meter effect measures the level of a signal without modifying it, and optionally outputs this level as a Game Parameter. The dynamics as well as the range of the output value can be adjusted. It is most useful for achieving side-chaining, where the measured level of a bus drives the volume of another bus through an RTPC.
  • xWMA Support on Xbox 360 – xWMA uses the WMA Professional compression bit-stream format and provides a greater compression ratio than XMA. xWMA is very useful for dialogue and long-duration files. A quality setting allows sound designers to vary the bit rate of the compressed sound.
  • New Columns in Voice Profiling (Volume – LPF) – The Voices tab of the Advanced Profiler now shows volume and LPF attenuations pre- and post-positioning.
  • New Integration Demo – Now available for developers: code and user interface are now fully cross-platform, and new examples have been added.
  • Perforce Enhancements – Perforce icons now appear over work unit icons in the Project Explorer, the Property Editor and several other views. Information such as Status and Owner is now available in the title bar of the object views. There is a new Status column in the File Manager’s commit dialog, and checking out files that are not at the latest revision is now prevented.
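The side-chaining recipe in the first bullet (a meter with attack/release smoothing feeding a volume curve on another bus) can be sketched outside of Wwise as a simple envelope-follower ducker. This is a conceptual Python illustration, not Wwise API code; the coefficients and depth value are made up for the example:

```python
def envelope(signal, attack=0.1, release=0.01):
    """One-pole envelope follower: the level rises quickly (attack) toward the
    rectified input and falls slowly (release) -- a rough stand-in for the
    Wwise Meter's Attack/Release smoothing of the measured bus level."""
    env, out = 0.0, []
    for s in signal:
        level = abs(s)
        coef = attack if level > env else release
        env += coef * (level - env)
        out.append(env)
    return out

def duck(music, sidechain, depth=0.8):
    """Attenuate `music` by up to `depth` when the sidechain signal is loud,
    like a Volume RTPC curve driven by the metered game parameter."""
    return [m * (1.0 - depth * e) for m, e in zip(music, envelope(sidechain))]
```

With a silent sidechain the music passes through untouched; when the sidechain is hot, the follower rises toward 1.0 and the music is pulled down toward 20% of its level, which is essentially what the Meter-plus-RTPC chain does inside the Wwise bus hierarchy.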



Also, all the official Wwise tutorial videos are now available on YouTube, most of them in HD. There are 46 videos in total, covering both basic and advanced features.

Audiokinetic’s YouTube Channel
