Guest Contribution by Alex May
If you were born in the ’70s or ’80s and played video games, you’ll no doubt have fond memories of the early days of game audio, when consoles were incapable of playing back more than basic pulse waves or noise. All sounds had to be forged from these primitives, and game SFX were rarely even slightly reminiscent of anything actually real. Now, however, realistic sound is only as far away as your portable recorder or favourite sound library. Realism in sound has become accessible to the point of often being considered a given: a basic assumption of the art.
Enter the idea of “100% synthesized SFX”. This is a self-imposed workflow limitation that declares that all sounds for a project will be synthesized, and not recorded. Foley, vehicles, weapons, combat, ambience, UI, and in certain cases even voices; all produced with synthesizers.
Wait, all synthesized? What could we possibly gain from doing things in such an inefficient and impractical manner? Surely it makes better sense to use tools and methods that are appropriate for the results we’re after, right?
Well, yes, that is true. However, since ultimately we’re aiming to produce sound that complements the visual style of the game, it may not always be the case that recorded real-world sources are the best fit. If the visual style has a strong character about it, then so should the sound. One method for achieving this character is to place limitations on the production process, and that is what this article discusses: limiting sound production to synthesis. By doing this we can achieve an overarching “stylized realism” that, when paired with equally stylized visuals, can contribute to a sense of immersion in the game world.
Let’s now take a look at some work practices for a 100% Synthesis approach.
Impulse responses are great for recreating spaces, whether it is a resonant glass bottle or a large cave. Here’s a handy trick for sculpting your own impulse responses, and therefore your own reverbs, from something that we spend a lot of time getting rid of — noise!
If you listen to an impulse response by itself, you’ll find that it has noise-like qualities, except that the frequency response changes over time. This isn’t surprising, since the sine sweeps and pistol shots used to capture impulse responses are themselves essentially bursts of broadband noise.
For the examples below, I’ve used Logic’s Space Designer, but this technique is possible with any convolution reverb. The white noise samples were processed in Logic, bounced out as a WAV file and then dropped into Space Designer’s interface. [Space Designer’s dry level was set to 0dB and wet level to -6dB with filter and volume envelopes bypassed]
Here’s an example of a white noise sample that was about 1.5 seconds long with an exponential fade out. The samples below include the dry noise sample (watch your speaker/headphone level) followed by the convolved output (apologies for the rather sad drum loop).
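The recipe above (a short white noise burst with an exponential fade-out used as an impulse response, dry at unity and wet a few dB down) can also be sketched outside of Logic. Here is a minimal Python sketch using NumPy and SciPy; the function names, the decay constant, and the exact wet gain are illustrative choices, not part of the original Space Designer session.

```python
import numpy as np
from scipy.signal import fftconvolve

SR = 44100  # sample rate in Hz

def noise_impulse_response(duration=1.5, sr=SR, decay=6.0):
    """White noise shaped by an exponential fade-out, usable as a reverb IR."""
    n = int(duration * sr)
    noise = np.random.default_rng(0).uniform(-1.0, 1.0, n)
    envelope = np.exp(-decay * np.linspace(0.0, 1.0, n))  # exponential fade
    return noise * envelope

def convolve_reverb(dry, ir, wet_gain=0.5):
    """Convolve a dry signal with the IR and mix wet under the dry signal."""
    wet = fftconvolve(dry, ir)              # full convolution tail
    out = np.zeros(len(wet))
    out[: len(dry)] += dry                  # dry at unity (cf. 0 dB dry level)
    out += wet_gain * wet                   # roughly -6 dB wet
    return out / np.max(np.abs(out))        # normalize to avoid clipping

# Quick demo: a single click "played" through the noise reverb.
ir = noise_impulse_response()
click = np.zeros(SR)
click[0] = 1.0
wet = convolve_reverb(click, ir)
```

In a real session you would load audio from disk and write the result back out (e.g. with `soundfile`), but the core of the trick is just the enveloped noise and the convolution.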
Image by Stewart Butterfield, used under a Creative Commons license.
When we say “space”, people generally think of two things: outer space, or a bounded area that something fits into. It’s a safe bet that most people in the sound community immediately think of the latter. So often we focus on the characteristics of a space…how far a sound carries, reflections and reverberation time, etc. Certainly that helps us define a space, but…for the most part…only on a technical level. What really defines a space is what occupies it. There’s no denying that production designers and location scouts in film, or level designers and artists in games, have a strong role in creating a space, but we in the sonic branch of our respective mediums have the unique ability to refine…or even redefine…those spaces they create. Sometimes, we’re even given the opportunity to create spaces where they cannot. What I want us to consider, in light of that, is how we approach the creation of that space.
Image by Bust It Away Photography, used under a Creative Commons license.
The world we inhabit is ever shifting. People and animals are constantly on the move. Water laps against wood or crashes against a sandy beach. Considered from a somewhat solipsistic perspective, even the buildings, trees and mountains around us shift in position. With all of those positional changes comes a new sonic interaction. The squirrel’s chitter is no longer to our left, the wave passes above us when we are under water, or the sound of nearby traffic now reflects off of a different building of steel and glass…confusing its location. Space is not fixed, nor are the elements within it.
This month, we look at that mercurial idea of “Space.”
This site is a space by and for the community, and is made special by all of the contributions that come in from that community. If you would like to add something to the conversation around this month’s theme…or when we turn our attention to Synthesis next month…please contact us through the contact form or by e-mailing shaun (shift+2) [this site].
Exercising listening in a public outdoor space.
Sound designers by nature have an inherent curiosity towards sound. We explore the way sounds work every time we approach a project. With each new opportunity to design a sound, we ask ourselves questions such as: What object/event produced the sound(s)? Where is the sound source located in relation to the listener, and just as importantly, how does (or how will) the sound impact an audience’s emotional state when heard?
It goes without saying that the sheer act of producing our own sonic work, and of critically listening to and dissecting the works of others (as Berrak Nil Boya explored and extrapolated on in her recent post), will inherently make us stronger and better critical listeners. Along with these practices, though, it is invaluable to also step away from evaluating completed, produced works and critically listen to some alternate sound sources, and in some potentially new ways; just like exercising a muscle, the more angles from which you can target your critical listening “muscle”, the stronger and more well-rounded it becomes.
The question then must be, other than by evaluating an already existing game or film’s audio as it was intended, how, and what, can we listen to in order to hone our listening abilities?
This post looks to add to this conversation by offering a few exercises I’ve picked up and augmented over the years and still use to this day. Once again, just like any exercise routine, training your critical listening is an ongoing responsibility for any sound designer (though vitally important early in your career, continued practice is essential to maintain a high level of critical listening fitness).
Guest Contribution by Berrak Nil Boya
As a composer, musicologist and sound designer who has been transitioning to the world of game audio for the last year or so, not only have I gained a new level of respect for everyone who works as a game audio professional, but I have also become aware of the changes I go through almost daily as I adapt my established skill set and mentality to fit my new chosen profession. These changes affect different aspects of my auditory world to varying degrees, but listening, and specifically critical listening, ended up being a new kind of challenge for me. As a musician used to listening critically to music and its various properties, and as a musicologist who researched film music for years, I found that the inherent interactivity and flow of the gaming experience required a new type of listening capability from me: one that depended on me not just to pay attention to the different aspects of the soundscape, but also to rise to the challenges presented by the game in order to succeed as a gamer. It meant orienting my attention to the other aspects of the game; so much so that I forgot to listen for a while and instead just heard what the soundscape consisted of. So how would it be possible for me to play a game AND critically listen to its audio aspects at the same time?
Back around the time I was first starting out, I remember opening up a demo of Cubase VST (on my trusty PowerMac 6400) and taking a look through the various menus. Everything seemed pretty standard, but something in particular caught my eye: a menu item labeled “Ears Only”. Curious, I clicked on it, only to have my monitor go completely blank. After a few seconds of panic, thinking I had broken everything, I realized that Steinberg had programmed a mode that completely disabled the monitor and forced you to just listen. At first, this option seemed like a strange addition. Why, when I’m creating sound, would I not be listening to what I’m doing? Listening while working with audio seemed like a no-brainer. However, after gaining a little more experience, this “just listen” mode began to make a lot more sense.
Let’s start out with what to listen for in a recording location. Naturally, we’re always going to be looking for a space that isn’t going to introduce too many environmental and human generated artifacts into the recording, but the physical layout and acoustic properties of a location can contribute as much character to your recordings as microphone selection…sometimes even more. On top of that, recording vehicles and weaponry (what you’ve specialized in) isn’t something you can do just anywhere. So, what do you listen for when scouting potential recording sites?
The biggest problem I face when searching for a recording location is traffic, especially near airports and expressways. I’ve scheduled multiple jobs where I had to find ideal locations away from these environments. Fortunately, I live and work in a quieter area, so I don’t have to travel too far. However, that rare Ferrari I need to record is located in the middle of a downtown, so it’s crucial to make generous car-owner friends who are willing to drive an hour or so to a quieter location. Most microphones I’ve tried are quite sensitive to unwanted background sounds, which is why I often use my Sennheiser MKH-418s M/S shotgun mic. For isolation with a mono mic I use either my Neumann 82i or the Rode NTG8. On bigger-budget jobs I will rent the Neumann RSM-191s mic (probably one of the best field recording mics ever made).
Guest Contribution by Rodney Gates
Welcome, and thanks for checking out this (TL;DR) article on the creation of the virtual instrument sample library, GuitarMonics, designed for Native Instruments’ Kontakt software. It was a long road from concept to completion, and I thought it might be a good idea to discuss some of the processes and discoveries I learned along the way, for those who may be interested in creating their own sample libraries for commercial or personal use.
Having been a Sound Designer and Audio Director for video games for over a decade now, and having always been a huge fan of virtual instruments that load up in the computer and sound stunningly real, I felt the desire to branch out into this field and begin establishing a foothold of my own with my new company, SoundCues.
With this article I really wanted to find out about the nuts and bolts of vehicle engine sound design and implementation. So I contacted a few people and got some great responses and a fascinating insight into the process. My thanks to Stephen Baysted, Audio Director and Composer at Slightly Mad Studios; Greg Hill, Sound Designer at Soundwave Concepts; Adam Boyd, Sound Designer, and John Twigg, Software Engineer, at Crankcase Audio; and Nick Wiswell, Audio Creative Director at Turn 10 Studios.