METAMORPH is the latest sample library from Twisted Tools, makers of designed sample libraries as well as some fun and unique Reaktor ensembles. With sounds designed by BJM (Mario Bajardi) and Komplex (Iter-Research), METAMORPH “takes heavily processed violins, pianos and acoustic instruments and morphs them into impacts, sci-fi atmospheres, user interface elements and beyond.”
METAMORPH comes as stereo 24-bit, 96kHz BWAV files with full SoundMiner metadata for easy searching. It includes sampler kits for Ableton Live 9’s Sampler and Simpler, Logic 9’s EXS24, and Native Instruments’ Kontakt, Battery, and Maschine; also included is the MP16d, Twisted Tools’ sample player. METAMORPH contains just over 2 GB of samples broken down into 10 categories: Drums, Imaging Elements, Micro, Noises, Pass By, Sci-Fi Atmos, SFX, Textures, Tonal, and Composite. The “Micro” category includes User Interface and “Microbot” elements. There’s a good selection of sounds to be had, and the added metadata makes finding things fairly easy.
My first exposure to noise reduction processing was with Waves X-Noise, working clip-by-clip: finding a snippet of noise in the clear, setting the noise profile, then processing the clip before moving on to the next one. This offline processing method, while effective, ends up taking a lot of time, especially on long-form projects. Similarly, if a processed clip needed its noise reduction altered, you would have to restore the un-processed version, find the noise print again, re-adjust the parameters, and then re-process it. When time is short (and when isn’t it?), real-time processes begin to look like a much better option. Unfortunately, plugins like X-Noise or iZotope RX Denoiser can’t be used effectively in real-time due to the enormous amount of processing overhead required and the unmanageable latency added to the signal. With plugins like the new RX 3 Dialog Denoiser and Waves’ WNS and W43, real-time noise processing without expensive hardware is feasible, but it requires a change in workflow to utilize effectively. As I found once I started using the RX 3 Dialog Denoiser, putting one instance on each dialog track was an inefficient use of CPU resources, and simply putting an instance on the main dialog bus proved problematic, especially when dealing with adjacent clips that had drastically different noise profiles.
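The noise-print workflow described above — capture a noise-only snippet, build a profile from it, then subtract that profile from each clip — is essentially spectral subtraction. As a rough illustration only (this is not how X-Noise or RX are implemented internally, and the function names and parameters here are hypothetical), a minimal sketch in Python/NumPy:

```python
import numpy as np

def noise_profile(noise, frame=1024):
    """Average magnitude spectrum of a noise-only snippet (the 'noise print')."""
    frames = [noise[i:i + frame] for i in range(0, len(noise) - frame + 1, frame)]
    mags = [np.abs(np.fft.rfft(f * np.hanning(frame))) for f in frames]
    return np.mean(mags, axis=0)

def denoise(signal, profile, frame=1024, reduction=1.0, floor=0.05):
    """Frame-by-frame spectral subtraction against a stored noise profile."""
    out = np.zeros(len(signal))
    win = np.hanning(frame)
    for i in range(0, len(signal) - frame + 1, frame // 2):  # 50% overlap-add
        spec = np.fft.rfft(signal[i:i + frame] * win)
        mag, phase = np.abs(spec), np.angle(spec)
        # Subtract the noise estimate; keep a spectral floor to limit artifacts
        # ("musical noise"), analogous to a plugin's reduction/threshold controls.
        clean = np.maximum(mag - reduction * profile, floor * mag)
        out[i:i + frame] += np.fft.irfft(clean * np.exp(1j * phase), frame)
    return out
```

This also hints at why the offline workflow is so tedious: the profile is only valid for one clip's noise floor, so every clip with a different room tone needs its own print and its own processing pass.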
Guest Contribution by Pierce O’Toole
Writer/Director Pierce O’Toole shares his thoughts on music and sound design, and how they play into his creative process.
As a writer and director, my biggest concern on any project is the story. Every project has a story that you are trying to tell. When I approach sound, the lens I view it through – or the speaker I hear it through, I guess – is one of story. While this is true of every element of the filmmaking process, sound is unlike any of the others because it’s the only element that follows me through the entire process.
When I begin writing, music is very important. At first, it’s just something atmospheric or energetic, like The Album Leaf or Daft Punk. As I get further along in the writing process, I get a better sense of the story and the tone. At this point, the music has to match. If it doesn’t, it can make it harder to write. I build playlists that I listen to on repeat. I’ve had several roommates that hate me for this, especially when the playlist is less than ten songs. I don’t ever tire of the music, no matter how many times I listen to it, because that music helps put me in the world of the story. I’m not listening to the music; I’m absorbing it.
Finding and removing noise (image display from iZotope’s RX 2 Advanced)
As a sound designer, many different thoughts come to mind when considering a topic such as noise: everything from using generated noise, like white noise, in the design of sound effects, to a technical discussion of different types of dither algorithms. But as I kept thinking about noise, one slightly different viewpoint of the word “noise” kept coming back to mind. Like something you attempt to attenuate that just won’t go away, this question kept creeping back into the forefront of my mind:
How does a sound designer get their “signal” heard through the ever-increasing amount of “noise” that surrounds us (and our intended audience)?
Jad Abumrad at PopTech 2010 – Camden, Maine (Kris Krüg/PopTech via Flickr, used under Creative Commons License)
I recently had the chance to chat with Jad Abumrad, creator and co-host of WNYC’s Radiolab. Each episode of Radiolab explores ideas in science, technology, and the universe at large through a seamless blend of expert interviews, sound design, and music. Together with co-host Robert Krulwich, the show has covered topics such as sleep, colors, cities, and loops, just to name a few. Recently, Radiolab has taken to the stage, touring around the United States and adding a visual element to the show’s already imagery-rich storytelling. Jad and I talked about noise, sound’s ability to create powerful mental images, and how all of that translates into a live show.
Designing Sound: I’ll start off by asking you about noise. When I say the word “noise”, what does that make you think? What does it mean to you?
Jad Abumrad: Honestly, the first thing I think is a particular style of experimental music which is loud and abusive and cacophonous and hurtful, but which I very sparingly employ in scoring the show. I’m thinking Merzbow and the whole “musical pain posse” that sort of tumbled out of him. I always like the idea that those stabs and bursts of noise could kind of catch someone off guard, almost like an idea that sort of hits you in the face before you’re ready for it. There’s something about the storytelling we do where I want those ideas to have that kind of impact. So I think about that kind of music.