Exercising listening in a public outdoor space.
Sound designers have an inherent curiosity about sound. We explore the way sounds work every time we approach a project. With each new opportunity to design a sound, we ask ourselves questions such as: What object or event produced the sound(s)? Where is the sound source located in relation to the listener? And, just as importantly, how does (or how will) the sound impact an audience’s emotional state when heard?
It goes without saying that producing our own sonic work, and critically listening to and dissecting the works of others (as Berrak Nil Boya explored and extrapolated on in her recent post), will inherently make us stronger, better critical listeners. Along with these practices, though, it is invaluable to step away from evaluating completed, produced works and critically listen to some alternate sound sources, and in some potentially new ways. Just like exercising a muscle, the more angles from which you can target your critical listening “muscle”, the stronger and more well-rounded it becomes.
The question then must be, other than by evaluating an already existing game or film’s audio as it was intended, how, and what, can we listen to in order to hone our listening abilities?
This post looks to add to this conversation by offering a few exercises I’ve picked up and augmented over the years and still use to this day. Once again, just like any exercise routine, training your critical listening is an ongoing responsibility for any sound designer: it is vitally important early in your career, but continued practice is essential to maintain a high level of critical listening fitness.
Guest Contribution by Berrak Nil Boya
As a composer, musicologist, and sound designer who has been transitioning into the world of game audio for the last year or so, not only do I have a new level of respect for everyone who works as a game audio professional, but I have also become aware of the changes I go through almost daily to adapt my established skill set and mentality to my new chosen profession. These changes affect different aspects of my auditory world to varying degrees, but listening, and specifically critical listening, ended up being a new kind of challenge for me. As a musician used to listening critically to music and its various properties, and as a musicologist who researched film music for years, I found that the inherent interactivity and flow of the gaming experience required a new type of listening capability from me: one that depended on me not just to pay attention to the different aspects of the soundscape, but also to rise to the challenges presented by the game in order to succeed as a gamer. It meant orienting my attention to the other aspects of the game; so much so that, for a while, I forgot to listen and instead just heard what the soundscape consisted of. So how would it be possible for me to play a game AND critically listen to its audio at the same time?
Back around the time I was first starting out, I remember opening up a demo of Cubase VST (on my trusty PowerMac 6400) and taking a look through the various menus. Everything seemed pretty standard, but something in particular caught my eye, a menu item labeled “Ears Only”. Curious, I clicked on it, only to have my monitor go completely blank. After a few seconds of panic thinking I had broken everything, I realized that Steinberg had programmed a mode that completely disabled the monitor and forced you to just listen. At first, this option seemed like a strange addition. Why, when I’m creating sound, would I not be listening to what I’m doing? Listening while working with audio seemed like a no-brainer. However, after gaining a little more experience, this “just listen” mode began to make a lot more sense.
Let’s start out with what to listen for in a recording location. Naturally, we’re always going to be looking for a space that isn’t going to introduce too many environmental and human generated artifacts into the recording, but the physical layout and acoustic properties of a location can contribute as much character to your recordings as microphone selection…sometimes even more. On top of that, recording vehicles and weaponry (what you’ve specialized in) isn’t something you can do just anywhere. So, what do you listen for when scouting potential recording sites?
The biggest problem I face when searching for a recording location is traffic, especially airports and expressways. I’ve scheduled multiple jobs where I had to find ideal locations away from these environments. Fortunately, I live and work in a quieter area, so I don’t have to travel too far. However, that rare Ferrari I need to record is located in the middle of a downtown, so it’s crucial to make generous car-owner friends who are willing to drive an hour or so to a quieter location. Most microphones I’ve tried are quite sensitive and readily capture unwanted background sounds. This is why I often use my Sennheiser MKH 418-S M/S shotgun mic. For isolation with a mono mic, I use either my Neumann KMR 82 i or the Rode NTG8. On bigger-budget jobs I will rent the Neumann RSM 191 (probably one of the best field recording mics ever made).
Guest Contribution by Rodney Gates
Welcome, and thanks for checking out this (TL;DR) article on the creation of the virtual instrument sample library, GuitarMonics, designed for Native Instruments’ Kontakt software. It was a long road from concept to completion, and I thought it might be a good idea to discuss some of the processes I used and discoveries I made along the way, for those who may be interested in creating their own sample libraries for commercial or personal use.
Having been a Sound Designer and Audio Director for video games for over a decade now, and always a huge fan of virtual instruments that load up in the computer and sound stunningly real, I felt the desire to branch out into this field and begin establishing a foothold of my own with my new company, SoundCues.
With this article I really wanted to find out about the nuts and bolts of vehicle engine sound design and implementation. So I contacted a few people and got some great responses and a fascinating insight into the process. My thanks to Stephen Baysted, Audio Director and Composer at Slightly Mad Studios; Greg Hill, Sound Designer at Soundwave Concepts; Adam Boyd, Sound Designer, and John Twigg, Software Engineer, at Crankcase Audio; and Nick Wiswell, Audio Creative Director at Turn 10 Studios.
The gents with impressive facial hair over at the Beards, Cats and Indie Game Audio Podcast have glommed onto this month’s theme of “Listening”. You can check out the full episode here.
Thanks go out to Matthew Marteinsson (@mattesque) and Gordon McGladdery (@AShellInThePit) for contributing to this month’s discussion!
You can be forgiven for assuming Jack picked this picture, but it was all me…I knew he would approve though ;)
We have two words that are most commonly used to discuss how we interact with sound: hearing, and listening.
Hearing is a passive act. Pressure waves move our eardrums, the motion is converted to an electrical signal, and our brain tells us that there is a sonic phenomenon in the space around us…perhaps it even provides us with identifying information. It’s what comes after that is fascinating, when we stop to LISTEN to the source. The act of directing attention allows us to focus in on the sound, to the (albeit sometimes limited) exclusion of others. Sometimes the steering of that attention is a subconscious mechanism, but the act of listening is always a conscious one.
That’s our focus this month; “Listening.”
We here at Designing Sound always appreciate the community’s enthusiasm and contributions to the discussion, and we know the community also appreciates it anytime a member contributes. If you’d like to contribute to this month’s topic, drop us a line; either through the contact form, or to ‘shaun [at] this website’. If you prefer to plan ahead a little, next month’s topic will be “Space/Spatial.”
Guest Contribution by David Nichols
An engine is, in essence, an air pump. Air comes in, gets mixed with fuel, goes bang, and leaves again. When talking about ways to make more power, the most obvious is to make a bigger bang. However, gasoline burns best at a very specific ratio of air to fuel, roughly 14:1. So, if you want to make a bigger bang, you need roughly 14 times as much additional air as additional fuel.
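That arithmetic can be sketched in a few lines. This is just an illustration of the ratio described above, using the commonly cited stoichiometric value of about 14.7:1 air to fuel by mass; the function name and exact constant are my own choices, not from the article:

```python
# Approximate stoichiometric air-to-fuel ratio for gasoline, by mass.
AFR = 14.7

def air_needed(extra_fuel_g: float) -> float:
    """Mass of additional air (grams) needed to burn the extra fuel cleanly."""
    return extra_fuel_g * AFR

# Burning 10 g more fuel per second requires inhaling roughly 147 g more air.
print(air_needed(10.0))
```

This is why simply adding fuel does nothing on its own: the limiting ingredient is almost always air.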
When trying to get more air, one solution is to use a bigger engine. More, larger cylinders mean the pump can inhale a bigger breath, which means more fuel and more power. However, this so-called “natural aspiration,” or NA for short, is limited by air pressure. Just like a straw, the intake stroke of an engine works by creating low pressure, which atmospheric pressure then fills in. So, another way to get more air into an engine is to pressurize it, or use “forced induction.”
There are a few different methods of forced induction, but today I want to talk about one in particular: turbocharging. A turbocharger is a turbine that is connected to the exhaust gas leaving the engine on one side, which then drives an impeller on the other side to create air pressure. The more and faster exhaust gas comes out of the exhaust, the more and faster the intake side compresses air. When the pressure generated by the impeller exceeds atmospheric pressure, the system is making “boost,” and the amount of boost can be measured in PSI.
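The relationship between absolute intake pressure and “boost” can be sketched simply. This is a minimal illustration assuming sea-level atmospheric pressure of about 14.7 PSI; the function and example numbers are mine, not from the article:

```python
ATMOSPHERIC_PSI = 14.7  # approximate sea-level atmospheric pressure

def boost_psi(absolute_intake_psi: float) -> float:
    """Boost (gauge pressure) is intake pressure above atmospheric;
    zero or negative means the engine is not on boost."""
    return absolute_intake_psi - ATMOSPHERIC_PSI

# A naturally aspirated engine can at best approach 0 PSI of boost;
# a turbo compressing the intake to 22.7 PSI absolute is making 8 PSI of boost.
print(boost_psi(22.7))
```

This is also why a turbo gauge reads negative (vacuum) at idle: the engine is still sucking through a straw until the exhaust spins the turbine fast enough.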
Years ago, when I first started dabbling in the deep and dark world of Max/MSP, I attempted to create the sound of a car engine. This month’s theme (which is ‘vehicles’, if you didn’t know) reminded me about it. I opened up the patch after ages and was a bit appalled by the state of it. There are hidden skeletons in every old patch!
Instead of digging through a dated project, I recreated a patch/idea I had used about a year ago when designing sounds for a remote controlled toy airplane. I tried to adapt the simplicity of that implementation to a ‘regular’ car engine.
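The original is a Max/MSP patch, but the core idea can be sketched in a few lines of Python. This is my own hedged approximation, not the author’s patch: it assumes a four-stroke engine, where the firing rate is RPM/60 × cylinders/2, and stacks a few decaying harmonics on that fundamental to get a crude drone:

```python
import math

SAMPLE_RATE = 44100

def firing_freq_hz(rpm: float, cylinders: int = 4) -> float:
    """Fundamental firing frequency of a four-stroke engine:
    each cylinder fires once every two crankshaft revolutions."""
    return (rpm / 60.0) * (cylinders / 2.0)

def engine_tone(rpm: float, seconds: float = 1.0, harmonics: int = 6) -> list:
    """Crude engine drone: sum of harmonics of the firing frequency,
    each attenuated by its harmonic number (a sawtooth-like spectrum)."""
    f0 = firing_freq_hz(rpm)
    n = int(SAMPLE_RATE * seconds)
    samples = []
    for i in range(n):
        t = i / SAMPLE_RATE
        s = sum(math.sin(2 * math.pi * f0 * (h + 1) * t) / (h + 1)
                for h in range(harmonics))
        samples.append(s)
    return samples

# At 3000 RPM a four-cylinder fires 100 times per second.
print(firing_freq_hz(3000))
```

A real implementation would add noise, pitch wobble, and per-cylinder variation; a bare harmonic stack like this is exactly the kind of model that sounds “digital” and falls apart at high RPMs, as noted below.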
Here’s a sample of what it sounds like (all synthesised):
This patch was put together fairly quickly and could do with more refinement to improve the character and reduce the ‘digital-ness’ of the sound. The model quite obviously breaks down at higher frequencies/RPMs.
Here’s the patch: