Guest Contribution by Mirella Diez Moran
My name is Mirella, I’m a video game sound designer and I’m Spanish. I guess that in any other country neither of these things would be particularly relevant, but given that very few people work in video game sound over here (not many more than twenty), I guess I’m something of a unicorn.
When I decided I wanted to work as a video game sound designer, I had already worked on a few audiovisual projects. The problem with the Spanish film industry is that most companies are pretty crowded, so I knew it would be almost impossible to make a living in it. It was then, while playing a game called “Sword & Sworcery”, that I realized I could try entering the video game industry. I’ve also been a gamer since I was little, so I thought it was a brilliant idea.
Tim Prebble’s modular
The first time I saw a modular synth, I was taken aback by the massive nest of patch cables, seemingly flying off in all directions and connecting various devices with countless knobs and flashing lights, somehow creating all kinds of strange sounds. Having come up in a mostly digital world, I found such a mass of wiring somewhat foreign. Sure, I had put together studios before, but those wiring setups were far more linear, at least as far as I was concerned. While I had spent a lot of time with Propellerhead’s Reason, virtually patching together all kinds of sound modules, I couldn’t even begin to compare it to the sight of a rack of analog modular hardware. However, I finally got to sit behind a modular at the NAMM show in Anaheim, California last year, and after just a few moments of fiddling, I was hooked.
I was born in England in 1988. Some of my earliest memories involve old BBC and Mac computers. I grew up listening to CDs and MiniDiscs and playing “Duck Hunt” on my sister’s NES. The dial-up modem sounds are imprinted on my memory. I recall my father ordering books from Amazon.com back when books were all Amazon sold. In my teen years I assembled my own computer to save money and grew to appreciate its inner workings. What I’m trying to say is that I’m an early product of the digital age; it’s all I’ve known.
Guest Contribution by Scott Kramer
With Haunting Melissa, we sought to bring feature-film-quality sound to an iOS app. It was an incredible experience, so when Neal Edelstein asked me to join the Hooked Digital Media team for Haunting Melissa: Dark Hearts, I jumped at the chance. Like many of us, I design sound for feature films and television as my “day job”. Hooked is breaking new ground by delivering filmed content via an app, and we thought the sound should be not just high-quality but pioneering.
With that in mind, I entered an R&D phase to develop new standards and technologies for this medium. I’m a firm believer in mixing a project in the format and monitoring environment where it will most commonly be heard. We decided to mix these stories on headphones, in the hope that the audience would prefer that experience over the Apple device’s built-in speaker, and we’ve found that they often do. Next, I began to research ways to deliver a truly immersive experience using an ordinary pair of headphones.
Guest Contribution by Chanel Summers
As a woman who has built her own career on a platform of game audio, game design and game production, I am passionate about programs that teach and empower women to follow a similar path. Because there are so few women in the field of video game audio, fewer still are even aware of the opportunities. I have been on a mission to change that: introducing this field as a career option to young women, showing that women can lead in it and be highly successful, and perhaps even changing the complexion of the video game industry. This matters because, for an industry or a creative medium to achieve its full potential, it must draw strength from diversity: a diversity of backgrounds, cultures, perspectives, and experiences. Each person approaching opportunity from a different starting point keeps things fresh, vibrant, exciting and new.
That is why I found myself, two years ago, at Forest Ridge School of the Sacred Heart, an all-girls school in Bellevue, Washington, proposing a summer workshop called “Artistic Expression in Game Audio Design”. The workshop would give young women an artistic and technical foundation in audio for interactive media and expose them to the career possibilities in video game audio. It would be based on “Audio Expression”, the class I created and teach at USC’s Interactive Media & Games Division in the School of Cinematic Arts, compressing a semester-long course into an intensive, month-long workshop for Forest Ridge. Because we chose not to “adapt” the material for a younger audience, these girls would get the same material I teach to undergrads, grads, and PhD students. In fact, it would be even more intensive: four hours of class every day rather than once a week. By choosing not to “dumb down” the curriculum for students just because they are younger or new to the field, we showed the young women respect, and they responded with vigor.
Knowing your way around audio middleware is quickly becoming a required skill for getting a job in the game audio industry. If you are a sound designer and/or composer looking to break into the world of game audio, learning to work with the various audio middleware solutions will not only give you a head start and set you apart from the “competition”, but also give you a greater understanding of the technical side of things, and consequently a greater appreciation of the inner workings of game audio. Since Audiokinetic and Firelight Technologies announced their free license options last year (granted, with some limitations), making Wwise and FMOD Studio available at no cost to indies and small game development companies, these programs are being used more than ever. There is no reason not to employ them to create a more interactive and coherent soundscape for the game you are working on, while also making life easier for yourself and the game developers.
But on the vast sea of knowledge and misinformation that is the internet, how do you know where to start learning about these programs? This guide aims to help by providing a general outline of the resources and learning options available right now, so you can find out more about audio middleware as quickly and efficiently as possible.
Clearly the fates have decreed that I should not only be involved in the writing of a new audio degree as education month comes around, but that I should also be well into my own studies, working towards a Master’s degree in Sound Design. In getting to this point, my own audio education has meandered along most of the routes one might take in pursuit of a career in audio. I’ve volunteered at studios and received on-the-job (and in-the-pub) training. I’ve studied at private colleges and run my own studio. Each of these diversions had an intrinsic value, and it’s unlikely I would be in the position I am now without having taken them. However, as both a lecturer and a student, I am acutely aware that there are mixed views as to the value of a formal audio education, not just from potential students but also from employers and practitioners (e.g. this interview from a few weeks ago). So I thought it might be useful to talk a little about the nature of writing an audio degree, from the middle so to speak. (Just to note, I am based in the UK, so this relates to the processes undertaken here; I can’t speak for anywhere else.)
Photo by Hunter Desportes
The year 2014 has been one of many great articles, interviews, and discussions here at Designing Sound, and we want to thank all of our readers for their attention, suggestions, contributions, and overwhelming support. There have been so many great films, shows, games and events this year that we thought we would share some of our favorites, so you can go back and check out anything you missed!
This post is full of links and YouTube videos, so please be patient while it loads. I assure you it is worth it!
When designing audio we often think of time across a wide variety of units: samples, milliseconds, frames, minutes, hours and more. This article was inspired by a conversation I had with Andy Farnell about a year ago at a pub in Edinburgh, right before a sound design symposium, where we discussed time and the role it plays in designing audio.
Like most other audio designers out there, I started twiddling the knobs and sliders well before I had an understanding of the underlying DSP. It was an eye-opening experience to realise that almost every single DSP effect is related to time. So let’s look at a few common DSP tools used in everyday sound design and analyse how time and the precedence effect play a role, starting from hundreds of milliseconds all the way down to a single sample.
The precedence effect is a psychoacoustic effect that sheds light on how we localise and perceive sounds. It has helped us understand how binaural audio works, how we localise sounds in space, and how reverberation and early reflections behave. From Wikipedia:
The precedence effect or law of the first wavefront is a binaural psychoacoustic effect. When a sound is followed by another sound separated by a sufficiently short time delay (below the listener’s echo threshold), listeners perceive a single fused auditory image; its spatial location is dominated by the location of the first-arriving sound (the first wave front). The lagging sound also affects the perceived location. However, its effect is suppressed by the first-arriving sound.
You might be familiar with this effect if you’ve done any sort of music production or mixing. Quite often a sound is hard panned to one of the two stereo speakers and a delayed copy (10-30ms) of the sound is hard panned to the other speaker. Our ears and brain don’t perceive two distinct sounds, but rather an ambient/wide-stereo sound. It is a cool technique for creating a pseudo-stereo effect from a mono audio source.
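To make the trick concrete, here is a minimal NumPy/SciPy sketch of it. The file name “source.wav” is just a placeholder for a mono 16-bit file, and the 15 ms delay is one arbitrary choice from the 10-30ms range mentioned above:

```python
# Pseudo-stereo from a mono source: dry signal hard left, delayed copy hard right.
# Assumes a mono 16-bit WAV called "source.wav" (a placeholder name).
import numpy as np
from scipy.io import wavfile

rate, mono = wavfile.read("source.wav")
mono = mono.astype(np.float32)

delay_ms = 15                                 # anywhere in the 10-30 ms range works
pad = np.zeros(int(rate * delay_ms / 1000), dtype=np.float32)

left = np.concatenate([mono, pad])            # dry signal, hard panned left
right = np.concatenate([pad, mono])           # delayed copy, hard panned right

stereo = np.stack([left, right], axis=1)
wavfile.write("pseudo_stereo.wav", rate, stereo.astype(np.int16))
```

Played back on speakers or headphones, the two channels fuse into a single, wider image rather than registering as an echo, exactly as the precedence effect predicts.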
The first 30 seconds of the video below show an example of the precedence effect in action. The delayed signal first smears the original signal with phasing artefacts, after which it seems to split from the original and become a distinct sound of its own.
Echoes And Reverb
Echoes are distinct delays. Reverberation is made up of early reflections, delayed sounds that arrive at the listener first (right after the direct sound), followed by a tail consisting of many such delays diffused into a dense cluster. Artificial reverbs are quite often approximated using networks of delays that feed back into each other (convolution reverbs behave a little differently).
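As a rough illustration of the feedback-delay idea, here is a minimal sketch of a single feedback comb filter, the basic building block of classic Schroeder-style reverb networks. The 30 ms delay and 0.7 feedback gain are illustrative values, not a recipe for a good-sounding reverb:

```python
import numpy as np

def feedback_comb(x, delay, g):
    """y[n] = x[n] + g * y[n - delay]: a delay line feeding back into itself."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = x[n]
        if n >= delay:
            y[n] += g * y[n - delay]
    return y

rate = 44100
impulse = np.zeros(rate)                      # one second of silence...
impulse[0] = 1.0                              # ...with a single-sample click at the start
# The click comes back every 30 ms, each repeat 0.7x quieter: a decaying echo train.
tail = feedback_comb(impulse, delay=int(0.030 * rate), g=0.7)
```

A usable reverb runs several of these combs in parallel, at mutually prime delay lengths so their echoes don’t coincide, and diffuses the result through allpass filters; feeding an impulse through just one comb makes the underlying echo structure easy to hear.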
Want to see something that’ll mess with your head?
Now, you may not have noticed anything all that strange watching the video, but mute the sound and watch it again. After that, close your eyes and listen to just the audio. Notice anything strange now? You’ve just witnessed one of the more interesting perceptual illusions, the McGurk effect.