Guest Contribution from Steven Smith
In some ways it seems quite strange to find myself authoring a post on synthesis whose main topic is: “Not everyone needs to be a synthesist”. But from a practical angle, it makes a great deal of sense. Many of us have naturally found ourselves diving into certain areas of synthesis and somewhat skating around others. So… if you are not a synthesis geek, this article is for you.
‘Why would it be helpful to explore this area?’ you may be wondering. Even though today’s virtual instruments commonly ship with hundreds or even thousands of presets, many users still find themselves passing over sounds that are not quite right. Yet with some fundamental knowledge and strategies, I feel most non-synthesists could quickly address some of these sounds’ shortcomings and reshape them closely enough to put them into service.
This is precisely my goal. I hope to cover some fundamental strategies and principles relating to synthesis and synthesizers in order to facilitate what I like to think of as quick fixes. Even though these strategies will not work 100% of the time, you should find them coming to the rescue quite often.
From the outset, it is my intention to populate this article with images from multiple synths. This is a small attempt to expose you to as many different views as possible. Given that each synth designer has their own GUI strategies (in addition to their own sound design strategies), I hope this will enhance the usefulness of the material presented.
There is also a body of knowledge we must have to enable us to find sounds, change them, and then save those changes. Let’s jump in…
Exercising listening in a public outdoor space.
Sound designers by nature have an inherent curiosity about sound. We explore the way sounds work every time we approach a project. With each new opportunity to design a sound, we ask ourselves questions such as: What object or event produced the sound(s)? Where is the sound source located in relation to the listener? And, just as importantly, how does (or how will) the sound impact an audience’s emotional state when heard?
It goes without saying that the sheer act of producing our own sonic work, and of critically listening to and dissecting the works of others (as Berrak Nil Boya explored and extrapolated on in her recent post), will inherently make us stronger and better critical listeners. Along with these practices, though, it is invaluable to also step away from evaluating completed, produced works and critically listen to some alternate sound sources, in some potentially new ways. Just like exercising a muscle, the more angles from which you can target your critical listening “muscle”, the stronger and more well-rounded it becomes.
The question then must be, other than by evaluating an already existing game or film’s audio as it was intended, how, and what, can we listen to in order to hone our listening abilities?
This post looks to add to this conversation by offering a few exercises I’ve picked up and augmented over the years and still use to this day. Once again, just like any exercise routine, training your critical listening is an ongoing responsibility for any sound designer: though vitally important early in your career, continued practice is essential to maintain a high level of critical listening fitness.
Indiewire has published a guest post by Dolby Institute’s director Glenn Kiser in which he talks to filmmakers about the importance of sound design from the beginning of production.
Making a movie is a never-ending series of compromises, and nothing is as good as the original concept you had in mind. But if you’re really lucky, there’s a moment of alchemy that can happen in the editing room when you put the right piece of music or the right sound effect into the cut. Suddenly something magical happens, and the thing comes to life. You forget about the perfect location you couldn’t secure and the cold your lead actor had on the day you shot the emotional scene. It stops being a maddening litany of disappointments and becomes a movie.
Sound designer and recordist Charles Mayne has written a passionate and inspiring guest post for the A Sound Effect blog. He gives his thoughts on the key notions and ideas that affect sound design, and which have the greatest capacity to produce – in his words – great sound design.
Check out the article here and contribute to the debate.
My first exposure to noise reduction processing was with Waves X-Noise, working clip-by-clip: finding a snippet of noise in the clear, setting the noise profile, then processing the clip before moving to the next one. This offline method, while effective, would end up taking a lot of time, especially on long-form projects. Similarly, if a processed clip needed its noise reduction altered, you would have to restore the unprocessed version, find the noise print again, re-adjust the parameters, and then re-process it.

When time is short (and when isn’t it?), real-time processes begin to look like a much better option. Unfortunately, plugins like X-Noise or iZotope’s RX Denoiser can’t be used effectively in real time due to the enormous processing overhead required and the unmanageable latency added to the signal. With plugins like the new RX 3 Dialog Denoiser and Waves’ WNS and W43, real-time noise processing without expensive hardware is feasible, but it requires a change in workflow to use effectively. As I found once I started using the RX 3 Dialog Denoiser, putting one instance per dialog track was an inefficient use of CPU resources, and simply putting an instance on the main dialog bus proved problematic, especially when dealing with adjacent clips that had drastically different noise profiles.