January’s featured sound designer, Richard Devine, has answered all the questions submitted by readers for his special. He has been busy lately and didn’t have much time to answer them sooner, but here they are. Check them out:
Designing Sound Reader: First off, you are an inspiration to all who are pushing the boundaries of sound! I’m always eager to read about and listen to your new exploits, especially coming from a fellow Southerner. ;) I enjoyed your talk at the Propellerhead LA producers event in 2005/06. Any chance you will teach or give seminars in the future? My main question for you is: if you could give me advice about starting a career in video game sound design, what would it be? I’m fairly well connected, but I wonder if you have any thoughts or insights off the top of your head? Many thanks, man!!
Richard Devine: Thank you very much for the kind comments. My advice would be to research a little bit about what kind of sound design you want to do, and whether you want to work in house with a company or work freelance. My experience with sound design for video games has all been freelance licensed music tracks and interface design. I would network as much as possible. I would also recommend reading and researching trade magazines like Game Developer, Audio Media, etc., to gain insight into what other people in the industry are working on, and also take note of where they are located. You may have to relocate to another city to find better opportunities. I would also create some sort of demo reel that highlights your skills. My last bit of advice is don’t be afraid to try something new and stand apart from other sound designers. I think one of the most important aspects is finding your own voice, a signature sound that you can call your own.
DSR: Here are my 20 cents: “How do you approach new compositions and arrangements when you come up with a new track? What are your usual creative processes? Are they the same for your own music and contracted works? Sounds first and then composition, or the reverse?” And another question: “Off the top of your head, what percentage of the sounds in your tracks is recorded and processed versus synthesized?” Let’s see when your weird electroacoustic ambient experiment comes to light! We want more of your music ;)
RD: It usually depends on the project. For my own musical creations, it really goes all over the place. Sometimes I like to sketch out my ideas on a sheet of paper, drawing the basic structure of my tracks (intros, breaks, transitions, etc.). I usually have a general idea of where I want the composition to go, or what mood it will be. The next step is to assemble or create the sounds for the track. I sometimes go out and capture sounds with my portable digital recorder, or create them internally within the computer. I would say that a good 50% of my sounds are acoustically recorded and the other 50% are created digitally. I first decide what types of sounds I want to use in the track: percussion, ambiences, drones, stingers, granular effects, etc. I try to be conscious of the way each part will work together as a whole. I always start with making the sounds first, then sequence them into my arrangement later. I find the more interesting the sounds I have to work with, the more inspired I will be to do something with them. I usually start out by importing my samples into Apple’s Logic Audio and then prepare them for editing or processing. Once I am finished, I create sample kits with Native Instruments’ Battery or Kontakt samplers. From there I play around with triggering and sequencing the samples and build these elements into a song. With contracted works, I usually work to a brief in which the client requests specific sound assets. With client work, each project is totally unique and requires different sounds.
DSR: Hi there! This is from Spain, and the question is about the new musical paradigm created by new technologies. It seems that the music industry has to change or disappear, and it looks like many musicians are taking advantage of this by publishing their music online under free licenses, changing the way they interact with their listeners. What do you think about this issue: music, Creative Commons, and new social technologies? Thanks, malaventura.
RD: Hello Malaventura, the music industry has gone through many changes in the last 10 years. We are living in different times, and I think as an artist it’s important to try and keep up with the ever-changing trends. I see things moving more towards digital downloads/distribution and various music blogs. I believe we are now at a point where we no longer need the record label or publishing company to get our music out to the masses. Now artists can pick and choose exactly how they want to be artistically presented. They can choose the exact time of their release and have total creative control over every aspect of it. No more waiting on the record label to listen to demos and approve tracks. No more waiting around for weeks or even months for the label to send your final release to mastering and duplication, not to mention all the money in production costs and packaging. Doing all of these things yourself will save time and money. I think it’s a very exciting time for musicians to be heard who didn’t have a chance to get heard before. It can all be done online from the convenience of your home, and you can easily promote your own projects and releases. Social media sites like Twitter, Facebook, and MySpace give the artist incredible flexibility in staying connected with fans, and they also provide valuable feedback on the demographics of your market.
DSR: For making your digital sounds and trying to obtain the warm feeling of analog gear, have you ever used a higher-end tube amp with 6550 tubes, like a stereo McIntosh, a stereo Dumblator for tube tone enrichment, high-end speakers capable of low frequencies, and a Neumann or Blue mic to record with?
RD: I have never used a McIntosh or 6550 tubes for tone enrichment. I try to keep my mastering signal chain pretty simple. I use an Apogee Rosetta, a TC Electronic Finalizer 96K, and an RME Fireface 800. Most of my mixing for sound effects is done inside the computer with the use of plug-ins (Waves, SoundToys, and Universal Audio). I usually go for a very clear, sharp sonic picture with each sound. Most of my sounds are recorded with the Sound Devices 702 recorder. I use two of them out on location and link them together for 4 channels of 24-bit/96K sound. I like switching around my microphone combinations to get as many perspectives as possible. My setup will change depending on what project I am working on, although a few of my favorites are the Neumann RSM-191 A/S and Sanken CSS-5 shotguns. Some other favorites are the DPA 4060 lavaliers and the Sennheiser MKH 40, 30, and 60 series.
DSR: I have grappled with making great compositions and then not having the funding or knowledge to make them radio ready. I find that the art of mastering is equally as complicated and time consuming as making music. Do you do your own mastering, or do you work with others to get this done? Whichever you do, did you run into the same problem at some point, and if so, how did you handle it?
RD: Mastering any type of music can be a very difficult task. I tried in the past to master a few of my earlier releases and found it extremely difficult to get my music to sound consistent on various sized sound systems. The engineering/mastering of tracks is a very fine art; it requires an individual to have lots of experience and access to the right gear. There are so many parameters to think about: the room, the monitoring (speakers), compression, etc. all affect the final outcome. I don’t consider myself to be an expert here, and I have made lots of mixing mistakes in the past, although over the years I have learned what not to do. One invaluable resource I always point people to is mastering engineer Rashad Becker from Dubplates & Mastering. Rashad mastered my last two albums for Schematic Records. Also, here is a link to a wonderful article by Robert Henke (Monolake) titled “Mastering, Sound and the Race for Volume”; it provides great details about the mastering process. http://www.monolake.de/interviews/mastering.html
DSR: Hello Richard, I really enjoy your music, and your sound design is equally great!! I’d like you to share with us some tricks on programming Elektron gear; mine sounds like a fart ;) Or at least, can you share some sysex dumps with us? Thank you.
RD: Hello, yes, I actually do plan on giving away a few of my MachineDrum/Monomachine sysex files to Elektron later this year for a project they are working on. Stay tuned, some fun stuff is in the works :)
DSR: Richard, I’m a fan of the presets you’ve created for the industry. My question is: do you use presets in your music?
RD: I have sometimes used my own presets for certain projects, and I am never afraid to say I have used presets before. I think if something sounds right, whether it’s a preset or not, it should be what you use for your track/project. I also take into consideration the amount of time someone has to make their own sounds. I generally like to start out by creating my own presets and starting from scratch. I feel that I learn a piece of hardware or software better if I isolate it from everything else in my studio and focus on just making sounds with it. Force yourself to learn what all the parameters do and find all the sweet spots; it’s a great way to learn the fundamentals of additive/subtractive synthesis and sound design.
DSR: What do you think about the development of electronic music in general? Do you think we will see a move towards multi-channel music? What implications do you think the availability of software like Max4Live will have on contemporary music? What is, in your opinion, the least utilized form of sound processing, with the most unexplored potential? Thanks!
RD: I feel that we are moving towards a media-enriched era for electronic music. Technology will allow us to create much more intriguing musical/visual experiences. Who knows what will happen 20 years from now. Video, audio, and controller interaction… I see all of these technologies converging together. I think we will see surround video and audio become a standard. It will only be a matter of time before technology surpasses our reality. I love all of the implications of software environments like Max4Live and Reaktor. I think they will open a new wormhole of musical ideas and sounds that have never been heard before.
In regards to areas of unexplored sound processing, one thing, although it’s nothing new, is experimentation with convolution processing in software applications like iZotope RX and impulse-response reverb utilities. I initially started experimenting with this idea many years ago with Tom Erbe’s SoundHack. I love the idea of convolving one sound onto another. I have been editing IRs like samples, applying effects including reversing, time-stretching, frequency shifting, etc., and then importing them into Space Designer or the Waves IR-1 to see what happens. Sometimes the results are interesting, depending on what the source material is. Instead of using typical spatial acoustic environments like halls or specific rooms, I have been experimenting with convolving various different sounds: panning white-noise effects, rhythmic effects, circuit-bent toys, sand, debris, and granular-type sounds. I also love iZotope’s RX, in which you can spectrally repair audio sections. I have been playing around with inserting silent gaps in an audio file to create strange artifacts as RX tries to repair and join the two sections together.
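The basic idea of convolving one sound onto another, with the “IR” edited like a sample (here, simply reversed), can be sketched in a few lines of Python. This is only an illustrative toy: the click-train source, the decaying-noise “impulse response”, and all the numbers are made up for the example, not taken from any actual session.

```python
import numpy as np

sr = 8000  # small sample rate to keep the toy fast
src = np.zeros(sr)
src[::sr // 8] = 1.0  # hypothetical source: a train of 8 clicks

# Hypothetical "impulse response": a decaying noise burst...
n = sr // 4
ir = np.random.randn(n) * np.exp(-6 * np.arange(n) / n)
# ...edited like a sample, in this case reversed before use
ir = ir[::-1]

# Convolve the source with the edited IR (the same operation a
# convolution reverb performs internally with a room response)
wet = np.convolve(src, ir)
wet /= np.max(np.abs(wet))  # normalize to avoid clipping
```

Swapping the synthetic noise burst for any recorded sound (footsteps, sand, a circuit-bent toy) turns the same operation into the kind of creative smearing tool described above.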
DSR: Richard, if you were to build the ultimate small, quite portable modular, what would it be? Which modules would you use? If that’s not possible, what are your favorite modules?
RD: This would be a difficult question to answer, as there are so many exciting new modules being created. If I only had one case, I would at least try to get two oscillators, one multimode filter, an envelope, and maybe two different LFOs as a starting point. Most of the Doepfer stuff is great; I use a lot of the Doepfer filters and LFOs. Some of my favorites are the Tiptop Audio Z-DSP, the Harvestman Hertz Donut and Piston Honda, and the Make Noise MATHS and QMMG. I also love the Livewire AFG oscillator and the Doepfer A-188-1 BBD: http://www.analoguehaven.com/doepfer/a188-1/
DSR: Hi Richard, thanks so much for your great releases and contributions to the sound design and music world. Your music in particular never has a dull moment; I find I can continually listen and always find new and interesting parts that I previously overlooked. It is very inspirational. Thanks again! I have a few questions for you:
I have read that you have written FFT applications in SuperCollider. What kind of applications were these? What are some of your favorite ideas/experiments to do when working in the frequency domain?
RD: Hello, thank you for the kind comments. It’s been many years since I last used SuperCollider. From what I recall, the last version I used was 2.2.16 on my old OS 9 computer. I really liked the UGen support for FFT-based processing. Some favorites that I remember were the phase vocoding, Convolution3 (a time-based convolver), and PV_BinShift (shift and stretch bin positions). Most of the processes I would do were for frequency smearing and shifting of sounds. These are all available for free in the SuperCollider library. My favorite application for frequency/time-domain processing is the Composers Desktop Project (http://www.composersdesktop.com/). This software package is, in my opinion, one of the most powerful sound processing environments I have ever come across. There is a wide variety of components for sound transformation in the frequency domain, including some of my favorites: Spectrum, which stretches/compresses the frequency components of a sound “analysis file”, and Stretch, which time-stretches/compresses the duration of an analysis file without changing the pitch. I also loved using the Blur function, which blurs in the time dimension and creates a smoothing of the sound, very similar to using the blur tool in Photoshop but applied to the frequencies of a sound. There are some other tools I really like: Michael Norris’s SoundMagic Spectral plug-ins for Mac OS X. They are available as Audio Units, and best of all they are free: http://www.michaelnorris.info/soundmagicspectral/index.html
One final app worth noting, also free, is SPEAR (http://www.klingbeil.com/spear/), an application for audio analysis, editing, and synthesis. I have gotten lots of interesting results with it.
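The “Blur” idea mentioned above, smoothing a sound’s spectrum in the time dimension, can be sketched with a bare-bones STFT in Python. Everything here (the frame size, the moving-average kernel, the chirp test signal) is a hypothetical illustration, not code from any of the tools named:

```python
import numpy as np

def stft(x, n_fft=1024, hop=256):
    # Frame the signal and take a windowed FFT of each frame
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win
              for i in range(0, len(x) - n_fft, hop)]
    return np.array([np.fft.rfft(f) for f in frames])

def blur_spectrum(spec, width=9):
    # "Blur" in the time dimension: replace each bin's magnitude
    # trajectory with a moving average over neighboring frames,
    # keeping the original phase untouched
    mag = np.abs(spec)
    phase = np.angle(spec)
    kernel = np.ones(width) / width
    blurred = np.array([np.convolve(mag[:, k], kernel, mode='same')
                        for k in range(mag.shape[1])]).T
    return blurred * np.exp(1j * phase)

# Toy input: a rising chirp, so the spectrum changes over time
sr = 8000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * (200 + 400 * t) * t)

spec = stft(x)
smeared = blur_spectrum(spec)  # time-smoothed spectral frames
```

Resynthesizing the smeared frames (overlap-add of inverse FFTs) would give the smoothed, washed-out character the Blur function is described as producing.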
DSR: When programming in an environment like CSound/SuperCollider/MaxMSP, do you tend to start with a basic idea and just experiment? Or do you have a very clear idea of what you want to achieve from the beginning?
RD: Most of my creations now are built within Max/MSP/Max for Live and are simple sound-processing devices and sequencers. Most of the time I take a simple patch and modify it, or start out with something very simple like a ring modulator and then add more components to the signal chain. I usually have a clear idea of what I want to do and then build it. Now, with Max for Live, I can test the patches and ideas I have and build them right into my DAW environment.
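As a toy illustration of the kind of simple starting point described above: a ring modulator is just sample-by-sample multiplication of an input and a carrier. A minimal NumPy sketch (standing in for a Max patch, with made-up frequencies) might be:

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr  # exactly one second of samples

# Hypothetical input: a 440 Hz sine (a stand-in for any audio)
signal = np.sin(2 * np.pi * 440 * t)

# Carrier oscillator at 300 Hz
carrier = np.sin(2 * np.pi * 300 * t)

# Ring modulation is sample-wise multiplication; the output
# contains the sum and difference frequencies (740 Hz and 140 Hz)
# instead of the originals
ringmod = signal * carrier
```

From here, “adding more components to the signal chain” would mean feeding `ringmod` through further stages (filters, delays, waveshapers) in the same fashion.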
DSR: How on earth do you manage your sample collection and VSTs? Do you tend to forget about large parts of it and just continually create new sounds?
RD: It’s easy to forget about all the plug-ins and sample libraries I have on my system. I try to keep everything organized and work from saved templates that load in specific plug-ins saved on my channel strips. I find it really handy to group my VSTs into Native Instruments Kore or Logic channel-strip presets; I can recall a group of plug-ins rather than hunting around for the right instrument or effect.
DSR: Do you ever struggle with patience during the creative process? If so, how often?
RD: I find that working on certain projects demands lots of time and patience, and I struggle with this problem all the time. Sometimes you will run into creative blocks when nothing is working, or run into technical issues (computer/hardware/software problems). In these situations, the best thing for me is to just walk away and take a short break.
DSR: I find the way I interact with software/hardware tends to determine a lot of characteristics of the output. This makes me always want to interact with new interfaces, or rethink the way I currently interact. Have you noticed this? What are some of your favorite hardware or software interfaces for creating sound?
RD: Yes, absolutely, the way you interact with software/hardware definitely determines the characteristics of the output. In terms of hardware, I really have been digging the JazzMutant Lemur multi-touch controller. The Lemur’s multi-touch technology makes it possible to use all ten fingertips to accurately control multiple user-interface objects simultaneously, with the option to customize each object and adjust its dimensions, shape, color, appearance, status, and behavior. It’s become one of my favorite interfaces for live performance, sound creation, and music production. In terms of software, I like Apple’s Logic Audio and a host of applications including GRM Tools, MetaSynth, Bidule, and Native Instruments’ Reaktor and Absynth. Lately I have been having a lot of fun with Ableton’s Max for Live.
DSR: Could you recommend any particular books or resources on digital signal processing?
RD: Sure, one of the first books I purchased was “The Computer Music Tutorial” by Curtis Roads.
It’s a great comprehensive text and reference that covers aspects of computer music, including digital audio techniques, signal processing, and musical devices; this was my bible in the beginning. Since you have an interest in Csound and sound creation, another book I would recommend is “The Csound Book: Perspectives in Software Synthesis, Sound Design, Signal Processing, and Programming” by Richard Boulanger.
Finally, I really love the small book “Audible Design” by one of my favorite electroacoustic composers, Trevor Wishart. It is a very detailed description of the craft of sound transformation using software instruments, with non-mathematical explanations and recorded examples of all the processes provided on a CD-ROM.
DSR: As a fan of your work, I have several questions about it. Do you think that a sound designer has to be a musician first?
RD: No, I don’t think you need to be a musician first to be a sound designer. I know a lot of sound designers who didn’t have any musical background or training. Although, at times I feel that having some musical background has helped me with certain projects.
DSR: What is your advice for putting together a sound design demo (style, format, etc.)?
RD: I would try putting together a demo reel showcasing your best work, starting with your strongest piece first; you want to grab their attention right away. I would also take care to keep it short, around 2 to 3 minutes in length. If you are just getting into sound design work and don’t have much content, you can redo the sound design for a favorite film, or even use a short clip of a silent film. I found that using 30-second commercial spots and trailers worked best, as they are usually action-packed with lots of different visuals.
DSR: What are the basics to know before creating commercial music and sound libraries?
RD: When working for various software companies designing sounds in the past, I always tried to create a wide variety of sounds that would hopefully be helpful in various situations. I try to research what sounds are useful and what has been done before.
DSR: Which are your “desert island” hardware and software tools you couldn’t live without? (Except heart and brain ;-)
RD: A few pieces of hardware come to mind. If I had to be on a desert island, it would maybe be my Clavia Nord Modular G2, microphones, and laptop. As for software, it would be Apple’s Logic Audio; I can pretty much do anything with just that :)
DSR: I’m a friend of Kero’s from Detroit, and we’ve met a couple of times. The question I’d like to ask is: when it comes to the intricate rhythms in some of your music, how much of them is deliberately sequenced via audio or MIDI, and how much (what percentage, perhaps) comes from mashups and amalgamations of crazy sound/glitch experiments? I’d imagine that it’s somewhere in the middle, but I’d like to hear your direct response. Cheers!
RD: Hello, yes, it’s a good mixture of both deliberately sequenced MIDI data and some sound mashups from experimentation. I usually sequence most of my tracks with Battery and Kontakt. Sometimes I use external hardware to generate rhythms and then sample them into small loops that I later import into my song. I will then loop a section and experiment, usually with plug-ins and audio editing, to make different variations. Then I go back, listen to which processed loop variations sound best, and start cutting and arranging them into a song.