In this first installment of an exclusive two-part interview, Damian Kastbauer talks to Jeramiah Ross, the award-winning audio designer and composer of the PS3 game Shatter. Also known as the producer and live performer Module, Ross discusses audio implementation for games, and how his experience as a live act influences his game audio design process. Be sure to check out his latest album, Imagineering.
Can you tell our readers a little bit about yourself, and your path into game audio?
I am a classically trained piano player and have been making music since I was four years old. I spent my teens experimenting with four-track recorders, synths, and Amiga 500 computers.
I created Module in 2003 as a little side project after being in a few well-known New Zealand bands at the time. After a few years of working on it I released my first album, Remarkable Engines, in 2006, which got a lot of recognition around New Zealand, which was quite fun! I found myself having to tour to support that album, which I’d made for a record label called LOOP at the time, to pay back the thousands of dollars it took to make.
The more I was exposed to the live environment, and what people actually wanted from a gig, the more my music started changing from the downbeat-y sort of music that I started off with into more of the kind of thing Shatter sounds like: really fun, over-the-top electro/rock’n’roll music.
It was just a direct response to being somewhere between a DJ and a one-man band. The New Zealand music industry is really small, and I just had to adapt. Looking back on it, it was like a pick-a-path adventure story. Everything I did was so I could survive as a musician. I feel good now, as I am 34 and still doing it full time!
The Shatter soundtrack was the culmination of years and years of playing live, and the kind of music I’ve always wanted to create; futuristic rocky synth music that has been buzzing around my mind for quite a long time. I took everything I learned from the live environment and channeled it into the Shatter soundtrack. It seemed to have worked really well.
Module really changed and became less about being a live indie band thing and more about a business! (laughs) It was quite a crazy change, as all of a sudden I’m doing video game soundtracks, movie soundtracks, and sound design; all these project-based kinds of work, which is interesting, and totally different from the ‘set up and play a concert’ type vibe I had come from.
But that’s cool, that’s the way it worked out, and that’s the great thing about being a solo artist. There’s been a massive change in technology between when I started and now, because you can create almost anything on a computer, and it doesn’t really matter anymore. You don’t have to be this famous entity anymore; you can just do audio or music because you like making it, and it can become your life and your job. That’s what has happened to me, and I feel very lucky.
Do you find that in your work for games that you end up sacrificing creative vision?
Yes, it’s kind of a compromise; it’s more of a relationship you get into with a creative team, or the people driving the product, be it designers or producers. If it’s something that’s going to make the project better and more people are going to like it, then I will do whatever I need to in order to make that happen. At the end of the day, it’s just one part of what I do. You need to split yourself out: there’s work and then there’s you. Work is work, so you just need to do your best for what the project needs, unless you’re working on it on your own. Then you can do whatever the hell you like.
It’s the creative vision versus the schedule. The schedule is all-powerful. The schedule must be obeyed.
Have you ever found yourself having to write a reggae track, or something outside of your comfort zone in order to fit a gameplay segment?
(Laughs) I have done lots of different styles of music, but I guess as Module I’m more known for my electronic-y type stuff. But then again, I do a lot of other things. Like, when I was working on Rugby Challenge, I was writing a lot of New Zealand-style rock tunes. I don’t know if you’re familiar with anything from New Zealand, like Dave Dobbyn and The Exponents: crowd-singing, anthemic rock music. I actually enjoy making that kind of music! (laughs) I started to realise that I was slipping more into a producer kind of role, where I’ve got more scope, as opposed to limiting myself to a specific genre because it follows what I’ve established in the past. I enjoy country music, and I love rock music. I like the beats of hip-hop. My music tastes are very clear, and that comes across in my compositional choices.
But even so, I get frustrated with the amount of time it takes to produce something using a computer, because I’ve come from a musical background; I’ve been playing piano since I was a kid. I love the immediacy of being in a band. I feel that people get too picky with music [production]; it’s the sloppiness and the more feeling-based stuff that people respond to better, not the over-produced stuff, you know what I mean?
Speaking from my own view: who really gives a shit what EQ you’re using? The average person doesn’t. There’s a trap in a lot of music production; it’s an industry that’s not too far from the fashion industry. Use this plugin, mix like this, sound like that. It’s all rubbish. You just have to follow what feels right, and if it doesn’t work, keep trying.
(Laughs) But, I’m always trying to find a counterbalance between the two, and I think my life has been about that lately. I really try to find balance. Even in my own music, I’ve been writing a lot of piano compositions. As I come from a classical background, I find myself gravitating back to that style of music, away from that flashiness of game soundtracks and Module, and everything else.
I really need that sometimes, and it will probably be the kind of music I make more of as I get older! I still love the opportunities that video game music and sound design work have handed me. I’m extremely lucky in New Zealand to be one of the very few people doing what I’m doing, and have had such massive exposure to a global audience through many Apple devices and consoles.
How do you handle the dynamics of gameplay when things are quite focused on the music and sound side of things?
It’s very much a relationship between the composer and the audience. You have to think about how this is going to affect people, what they are going to feel in these key moments, and how you can take them on journeys. It’s very much like orchestrating a reaction from people in advance.
That’s the side that I find quite fascinating, because it’s almost like when you do a gig: you hit certain key moments within your set for people to react to. It’s a lot easier to achieve that at a gig, because people are at the concert, in the mood, in that moment. I mean, the whole “drop” thing found in dubstep is all about that. I guess I am trying to engineer, in advance, lots of mini audio drops relative to the game in some way, so the player gets jazzed about it.
I’ve started thinking about video games as very similar: people are locked into that universe, paying attention to the sequence of events and interacting with it. What can I do as a composer to enhance that? That’s generally where I will come into it, and try to identify key moments within the experience, whether it’s from a story perspective, or a feeling, a visual, or an action; something that will give me a cue as to what the music should emotionally feel like, and what the player should react to. It’s all about context: what are the key things going on in this space and time? Love it!
How much of the implementation can be done from within FMOD Designer?
You can do everything in Designer as far as setting up the project, the sound files, the music, the events, and the structure.
But it’s very much a 50/50 kind of relationship; there is a lot of stuff that needs to be done in code, but I use a lot of FMOD parameters, velocity-based settings, and real-time reverbs.
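To give a rough idea of what that game-side half of the relationship can look like, here is a minimal sketch using the FMOD Ex Designer runtime API of that era. The project file, event path, and parameter name are placeholders for illustration, not details from the interview.

```cpp
#include "fmod_event.hpp"  // FMOD Ex "Designer" runtime API

// Minimal sketch: load a Designer project, grab an event, and drive one
// of its parameters from code. "game.fev", the event path, and "rpm"
// are hypothetical names.
int main()
{
    FMOD::EventSystem *es = nullptr;
    FMOD::EventSystem_Create(&es);
    es->init(64, FMOD_INIT_NORMAL, nullptr, FMOD_EVENT_INIT_NORMAL);
    es->load("game.fev", nullptr, nullptr);

    FMOD::Event *engine = nullptr;
    es->getEvent("game/vehicles/engine", FMOD_EVENT_DEFAULT, &engine);

    FMOD::EventParameter *rpm = nullptr;
    engine->getParameter("rpm", &rpm);
    rpm->setVelocity(0.5f); // "velocity-based settings": glide toward the
                            // target value at a fixed rate instead of jumping
    rpm->setValue(0.8f);    // target value fed from gameplay each frame
    engine->start();

    es->update();           // a real game loop calls this once per frame

    es->release();
    return 0;
}
```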
In the last game I worked on, I put a 75MB audio budget straight onto an iPhone device. It has eleven tracks, and it’s the most audio I’ve put inside an iPhone game (so far). This was Monsters Ate My Condo for Adult Swim, which is being really well received. The music is just batshit insane, some kind of Godzilla-based pop music, and there is a lot going on with the audio. That has been the result of geeking out and getting to know my tools, trying to develop this really cool thing that people will listen to and enjoy, and hitting all those key moments. A lot of that was done in Designer with basic code hookups.
How much DSP parameterization are you doing in order to achieve certain effects or react dynamically to the game?
We try to do that all the time, especially with low-pass filters and volume; it’s good to make sure things are sounding awesome. Shatter was really the first time we took advantage of the parameter-based DSP effects. We’ll use just those parameters, and you can hear it at specific moments: like when you die at the end of a level, it will phase out, or control the cutoff frequency. Not as much as other things we’ve used, like the crowd system I developed for the Rugby Challenge game; that was a bit insane! We used FMOD in a really cool way: basically, the crowd system was several different layers of several different reactions, and it works the same way as revving up a car engine, only based on emotion.
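As a rough illustration of that kind of death phase-out, here is one way a cutoff sweep could be driven from code using the FMOD Ex low-level System that sits underneath the event system. The timing and cutoff range are illustrative, not taken from Shatter.

```cpp
#include "fmod_event.hpp"

// Illustrative only: sweep a low-pass filter over the whole mix as a
// death/level-end transition. t runs from 0 to 1 over the transition.
void updateDeathFilter(FMOD::EventSystem *es, FMOD::DSP **lowpass, float t)
{
    if (*lowpass == nullptr) {
        FMOD::System *sys = nullptr;
        es->getSystemObject(&sys);          // low-level system under the events
        sys->createDSPByType(FMOD_DSP_TYPE_LOWPASS, lowpass);
        sys->addDSP(*lowpass, nullptr);     // insert on the master signal path
    }
    // linear sweep from wide open down to ~500 Hz as t approaches 1
    float cutoff = 22000.0f * (1.0f - t) + 500.0f * t;
    (*lowpass)->setParameter(FMOD_DSP_LOWPASS_CUTOFF, cutoff);
}
```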
When I first looked at FMOD it weirded me out. It took me a long time to understand the parameter-based stuff, and that there wasn’t a timeline, because I was so used to working in DAWs like Ableton Live. I’ve been making electronic music since the Amiga 500 days, so I’ve been exposed to technology most of my creative life; I’ve been really lucky in that sense, in that I’ve always had the kind of brain that understands that stuff. It was really cool, because the more I understood how FMOD worked and how it talked to the API, the more I realised the tools I had available as a creator, and I found that really exciting.
One of the most interesting things about Rugby Challenge was the limited memory I had to play with, 11MB I think. At the start of the project I didn’t want to deal with the commentary, because it was over 14,000 files. So it was always space vs. time vs. creative vision.
Can you speak more on the dynamic system for crowds?
As you can see above in FMOD, the first layer is a low loop, then a medium loop, then a high loop, which represent different frequency ranges. When you roll over those, they play from a list of callouts, horns, and that kind of stuff. These are controlled by a parameter called ‘Emotion’, which is driven by a context bucket, so if someone starts running for a try, it will amp up the emotion. On top of that I’ve got some reaction cues set up. The cheer cues play ten different cheers at one time, positioned randomly in surround sound, which makes it sound like a whole big audience. The entire reactive crowd audio is only 4MB. It sounds OK, but as I only had a budget of 11MB, I had to compress the hell out of it to make it work. I can really hear the compression, and it’s nowhere near the quality I would have liked, but considering I just placed four Zoom H2 mics around Westpac Stadium, I was happy with the result.
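To make the “context bucket” idea concrete, here is a sketch of how game code might feed that Emotion parameter and fire a reaction cue. The event path, decay rates, and excitement values are hypothetical, not taken from Rugby Challenge.

```cpp
#include "fmod_event.hpp"
#include <algorithm>

// Hypothetical "context bucket" -> Emotion hookup: gameplay pushes
// excitement in, it decays over time, and the smoothed value drives the
// crowd event's Emotion parameter, which crossfades the low/medium/high
// loops set up in Designer.
struct CrowdDriver {
    FMOD::EventParameter *emotion = nullptr; // from crowdEvent->getParameter("Emotion", ...)
    float target = 0.0f, current = 0.0f;

    void onExcitingMoment(float amount)      // e.g. a break toward the try line
    {
        target = std::min(1.0f, target + amount);
    }

    void fireCheer(FMOD::EventSystem *es)    // one-shot reaction cue on a try
    {
        FMOD::Event *cheer = nullptr;
        es->getEvent("rugby/crowd/cheer", FMOD_EVENT_DEFAULT, &cheer);
        if (cheer) cheer->start();           // fire-and-forget
    }

    void update(float dt)
    {
        target  = std::max(0.0f, target - 0.05f * dt);             // cool off
        current += (target - current) * std::min(1.0f, 4.0f * dt); // smooth
        if (emotion) emotion->setValue(current);
    }
};
```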
The great thing about FMOD, and most of the middleware engines for audio, is that you can design everything and hand it over to a studio. It’s a really good platform for me. I haven’t had the chance to explore the others as much as I would like.
Especially being able to draw parameters in the Event Editor.
I know! It’s the multiple parameters per event that I love. It’s like, “Hey, I can have unlimited parameters here, I can do what I like!” You can get right into the detail, and when the emotion is on those parameters, because they are all different reactions, it’s closer to being intensity-based, reactive crowd stuff. I didn’t even know if that would work; basically, the emotion meter runs over and just triggers that sound. I was like, “That actually works, that’s great!” The other solution was more complex: there had to be a secret event, and that wouldn’t fire for a whole host of reasons. So I was like, “Ah, let’s try this.” And we’ve managed to get some really cool things going, like some really cool reverb definitions. The thing with a lot of iPhone and iOS titles is that you are not really dealing with a 3D world as much, which is something I’m kind of missing a little bit.
I’ve been doing a lot of iPhone/iOS development, and have been trying to push FMOD to the extreme in regards to what we can deliver. I do kind of miss making things for consoles, or even PC, because you just have a bigger scope to work in, and it’s much more creatively satisfying working on larger-scale projects. Sadly, they are few and far between! But getting things sounding awesome on the iPhone is still very satisfying.
In a way, it makes you work creatively within your constraints.
Yeah, that’s something I really enjoy, having limits to things. It’s the new old school all over again. The pipeline we’ve got now is really quite cool, and exposing other people to it is really neat; it has ended up being quite a collaborative process. You have to work with what you have to get things out the door.
How have you been handling the mix phase, are you connected to the game?
We used the FMOD net connection to mix straight off the iPads, which has worked really well. We can get a really good mix like that. Prior to that, we used the real-time connection straight into the PS3 so that we could do surround sound mixes. But now I’ve figured out how to launch Xcode and the iPhone simulator and connect to that instead on my local machine during development, which is pretty crazy. I can build the game and be up and mixing and creating audio on a virtual device emulating a real bit of hardware. It works really well.
When you get that whole system up and running, it’s a phenomenal feature. When working with hardware, I just plug my iPhone or iPad output directly into my Pro Tools 003, which is all metered up; with some test tones I can send out a 0dB signal and monitor everything from my computer. I’ve got some spectrum analyser software as well, so I can monitor the frequency content and make sure it’s going to sound great for people.
But I don’t really bother anymore with tailoring audio to work through the speakers. When those devices came out, there was a lot of emphasis on getting iPhone audio to sound good through the bullshit device speaker, but I’ve given up on that now, and I think everyone else should too.
There is really no point modifying the audio to work like that; it’s like, if you want to hear good audio, put some headphones on! The quality you can get out of an iPhone, with the right compression codecs and a good pair of headphones, is pretty close to commercial audio quality. I have some impulse reverbs that emulate the sound of the iPhone and iPad speakers, which I can put across a pre-recorded output in Pro Tools to get an idea of what things will sound like and check out the frequency spectrum. There are a lot of immersive games now that need good sound for the experience.
But why bother turning all the treble up, or modifying things so that they sound better through those speakers? As long as it’s not annoying, obnoxious sound that will piss people off or overload the speakers, it will get the basic messages across to the player but sound awesome once you put the headphones on. I think that should be key.
Do you work algorithmically at all, or using any other interactive or dynamic techniques?
I’ve had quite a few projects that were just generated by lots of tuned loops randomly playing. It sounded almost like Brian Eno! Just by messing with parameters, layers, and controlling different settings. You can do a lot by spawning lots of sounds within a sound definition in FMOD.
It was great, because there was this beautiful generative random music that just twinkled away in the background. I’ve looked at FMOD’s music system and really researched it; you can do basic cueing and template-based themes between sections.
With iOS you can’t get away with a 32-channel stream inside the memory limits, or anything like that. Even the PS3 would struggle with it. So for now it’s very much a ‘pre-baked’ kind of approach, or maybe a one- or two-layer kind of approach. Maybe things will be better on the iPad 3. It’s a few years away still.
But I have been using a lot of random sounds to make generative backgrounds such as forests, storms, and city ambience: say, ten different car and horn sounds, an ambient loop, and various traffic noises. It’s interesting; the patterns that appear in that kind of audio are all pretty much the same, just with different noise sources.
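Designer can do this kind of spawning inside a sound definition itself (a random playlist plus spawn-time settings); for illustration, a rough game-side equivalent might look like the sketch below, with made-up event paths and timing ranges.

```cpp
#include "fmod_event.hpp"
#include <cstdlib>

// Illustrative generative city ambience: a constant bed loop (started once,
// elsewhere) plus randomized one-shots spawned at random intervals.
static const char *kOneShots[] = {
    "city/traffic/car_pass_01",
    "city/traffic/car_pass_02",
    "city/traffic/horn_short",
    "city/traffic/horn_long",
};

static float frand(float lo, float hi)
{
    return lo + (hi - lo) * (std::rand() / (float)RAND_MAX);
}

void updateAmbience(FMOD::EventSystem *es, float dt)
{
    static float nextSpawn = frand(2.0f, 8.0f);  // seconds until next one-shot
    nextSpawn -= dt;
    if (nextSpawn <= 0.0f) {
        FMOD::Event *ev = nullptr;
        es->getEvent(kOneShots[std::rand() % 4], FMOD_EVENT_DEFAULT, &ev);
        if (ev) ev->start();                     // fire-and-forget one-shot
        nextSpawn = frand(2.0f, 8.0f);           // schedule the next spawn
    }
}
```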
I’ve always found that these technical challenges really drive the creativity in games.
It’s pretty interesting. I remember when I first got exposed to FMOD, one of the biggest problems I had was in the API: the system I/O defaulted to 16 channels. For the ‘Blood Drive‘ game I was working on at the time, which I had eight months to complete, we had eight cars, zombies were screaming, and the whole game was just chaos. We had 217 events firing at once at one point. There were explosions; each car had its own explosion, each car had multiple layers of engine sounds, running over multiple zombies. It was madness! And I was trying to mix this thing!
So we developed our own code-based sidechain compression. If anything exploded, it would turn down the music, and things like that. That was a lot of fun to implement, and it’s part of our pipeline now as sound effects code, so whenever we need that level of control, we can use the DSP to drive a wide variety of dynamic effects. It’s all process that I’ve taken from using other software, trying to figure out how I can replicate it using DSPs and FMOD.
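As a sketch of the idea (not the actual Blood Drive code), a simple game-side version of that ducking can be built on FMOD Ex event categories: drop the music category’s volume when a loud one-shot fires, then release it back over time. The category name and attack/release constants here are illustrative.

```cpp
#include "fmod_event.hpp"
#include <algorithm>

// Sketch of code-driven "sidechain" ducking: loud SFX pull the music
// category down instantly; it recovers toward unity gain over time.
struct MusicDucker {
    FMOD::EventCategory *music = nullptr;
    float gain = 1.0f;

    void init(FMOD::EventSystem *es) { es->getCategory("music", &music); }

    void onLoudSfx() { gain = 0.3f; }             // instant "attack" on impact

    void update(float dt)
    {
        gain = std::min(1.0f, gain + 0.5f * dt);  // ~1.4 s release back to unity
        if (music) music->setVolume(gain);
    }
};
```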
Those are really great toolsets to have, particularly in games that have commentary, or vocals and music, with chaos in the background! You still need to relay that information in a way that is digestible by the player. In one game we used these effects to add a clichéd Matrix-style bullet-time effect just for fun! (laughs) I do find myself doing quite clichéd stuff sometimes, but having a lot of fun with it, you know, really dramatic sounds just for the hell of it!
Right, as long as it supports the gameplay you can go really dramatic. Are there any examples of game audio where you think it’s been done really well?
The game that does that best in the whole entire ultraverse is Limbo. Its understanding of space is phenomenal. I listened to it the other day in the studio, and the intricate design of the relationship between events and the sound is really amazing.
It’s the kind of approach where a single sound has so much design and attention that when you hear 100 of those, it’s still phenomenal, and I don’t think you often get to do that much in a project. I also think it’s because of the way the game plays, the way it’s literally spotlighted; the audio and gameplay and the way it’s presented are spotlighted bit by bit, instead of having it all presented at once. You really get to take those moments in, so the detail really needed to be there.
Part Two (Coming Soon)