It may be premature for me to turn the focus of the series towards the future, as we find ourselves deep in the throes of current-generation console development, but I think by now those of us submerged in creating ever-expanding soundscapes for games at times suffer under the burden of our limitations. Of course, it isn’t all bad: given a set of constraints, creatively overcoming them can be as satisfying as coloring outside the lines.
I can’t help but feel a little sci-fi on occasion when I see some of the interesting work being done academically or within the DIY community. The explosion of information and accessibility of resources seems to give anyone with a mind, and the time, a bottomless well of potential that, when focused, can provide the maker with something to substantiate their creative vision. Whether it’s the current craze for Kinect hacking, a modular code-bending instrument, or the simple pleasures of circuit bending, there are communities of people working together to unlock the inherent ability of our modern lifestyle devices. That’s not to say that every hack comes with a purpose; for some, the joy is in the deconstruction, the destruction, or the creation of something new.
One technique that keeps showing up in game audio is the pairing of an available game engine with an alternative audio engine not generally associated with game audio. Whether it’s the work of Leonard J. Paul using OSC (Open Sound Control) as a bridge between Pure Data and the HL2 Source engine, and more recently Unity, Arjen Schut and his experiments with HL2 and Max/MSP, or this month’s featured Audio Implementation Greats spotlight, Graham Gatheral, I can’t help but see the resourcefulness of a few brave hearts boldly moving forward to fill a gap in the current functionality of today’s game audio engines.
I had a chance to talk with Graham and discuss his work using the Unreal Development Kit and OSC in conjunction with SuperCollider, an environment and programming language for real-time audio synthesis.
DK: Hey, nice to meet you!
GG: Yeah, you too!
DK: You’re in Vancouver, is that what I see?
GG: That’s right yeah, I moved here last summer with my partner, who got a job over here, so I’m waiting for my visa to come through, and I’ve got a lot of time on my hands which I’ve been devoting to game audio.
DK: Fantastic, and doing some really cool things working on mods. Did you do university and graduate study?
GG: That’s right, I did a Masters degree in sound design for the screen at Bournemouth University in the UK. That was about 6 years ago, and after that I worked in post-production sound for film and TV. I’ve been doing bits of freelance work since then, been making music doing live improv electronic gigs, working as a webmaster, and teetering on the fringes of game audio for a bit of that time. So yeah, that’s what I’ve been up to.
DK: Yeah, though when you say teetering on the fringes of game audio – the stuff you’re doing is very edgy. The work integrating SuperCollider with UDK is cool work; I dig it a ton because it brings together two technologies that aren’t usually tied together.
GG: Yeah, I’ve been using SuperCollider for live performance since 2004, so I’m reasonably OK coding in SuperCollider. When I started in game audio seriously there was one [UDK] project in particular where I thought “surely I could do this more easily and more effectively in SuperCollider – wouldn’t it be great to hook up UDK and SuperCollider somehow?” So that’s kind of what I’m doing.
DK: It’s a little bit sci-fi for people who aren’t used to that. You came from linear sound and film, but your education bred you a bit on the SuperCollider side of things. In marrying that with UDK, did you find that there were already things in place that would let you do that?
GG: Absolutely, yeah. When I had the idea to do this I started looking around, obviously did a bit of Googling, and I found a project called UDKOSC, run by Rob Hamilton at Stanford’s CCRMA. Basically he’s been doing a fair bit of work over the years with game environments and interactive music; in this instance he’s using UDK as a tool for music performance, sound installations, networked music performances – this kind of thing. So I got in touch with him, and fortunately he was able to send me a few scripts and give me some advice to get me going. It seemed fairly close to what I was trying to pursue.
DK: Right, and you mentioned having a project in mind that seemed well suited to the SuperCollider workflow. What was it about SuperCollider, or the idea of procedural audio and synthesis, that you felt you could accomplish better by marrying those two toolsets than you could inside UDK or with standard game audio middleware?
GG: Well my experience is really only with UDK at the moment; I don’t have any real-world experience with middleware like FMOD or Wwise, although I’ve obviously had a good look at them. I enjoy working with UDK – I think Kismet is great, you can get a lot done, it’s quite a creative tool, and I like the modular approach. It’s obviously limited, though, so with SuperCollider or any sound synthesis language I think the main gain is going to be better immersion: almost infinite varieties of sound for things like weapons, wind, or collision impacts, and also a closer matching of sound to the visual. It seems like you can get a better hookup between the mechanics of what’s going on in the game engine and what’s happening in sound – a closer marriage there.
DK: Right, so with all of the parameters going on within a game that can be fed over into the synthesis tool or programming language, you have all of that data to affect things in real time, and something like SuperCollider seems really well tooled for taking in those parameters and that data, tying it to aspects of the sound, and then manipulating and working with it.
GG: Yeah exactly, I think it’s super flexible; you can do so much in SuperCollider. You could have, say, thirty lines of code for a metal impact and have a bunch of parameters in there which you could adjust to provide sound for anything from, say, a small solid metal object right up to a big hollow metal object. So in thirty lines of code you’ve wiped out hundreds of assets that you normally have to deal with. So there’s clearly a space saving advantage there and also a time saving advantage for the sound designer I guess.
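To make that concrete, here is a hedged sketch of what such a parametric impact might look like in SuperCollider – illustrative only, not Graham’s actual code, and the mode ratios and parameter names are assumptions:

    // Hypothetical parametric metal impact (not Graham's code).
    // freq scales the resonant modes, decay the ring time, and
    // hollow pushes the partials apart for a bigger, emptier object.
    (
    SynthDef(\metalImpact, { |out = 0, freq = 400, decay = 1.0, hollow = 0.5, amp = 0.3|
        var exciter, sig;
        // a short noise burst stands in for the strike
        exciter = WhiteNoise.ar * EnvGen.ar(Env.perc(0.001, 0.05));
        // a small bank of inharmonic modes, spacing widened by 'hollow'
        sig = Klank.ar(
            `[ [1, 2.1 + hollow, 3.4 + (hollow * 2), 5.9 + (hollow * 3)],
               [1, 0.6, 0.4, 0.25],
               [1, 0.8, 0.6, 0.4] * decay ],
            exciter,
            freqscale: freq
        ) * amp;
        DetectSilence.ar(sig, doneAction: 2); // free the synth once it rings out
        Out.ar(out, sig ! 2);
    }).add;
    )

    // the same few lines cover a small solid object and a big hollow one
    Synth(\metalImpact, [\freq, 1200, \decay, 0.3, \hollow, 0.1]);
    Synth(\metalImpact, [\freq, 180,  \decay, 2.5, \hollow, 0.9]);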
DK: Definitely, and the pipeline between those two has a much clearer delineation, in that they can talk to each other using the same language of parameters and values, and it’s already set up to do that without having to be heavily modified. SuperCollider comes out of the box expecting to receive all of this parametric data to manipulate sound.
GG: Yeah that’s it, SuperCollider has obviously been around for a while and it can take in OSC, so anything that can feed out OSC can control SuperCollider. This is essentially what Rob Hamilton’s project is, a way of getting OSC out of UDK.
DK: On that note, can you explain to us the pipeline of how that communication happens between game and audio engine, what kind of pieces did you have to put in place and what is the channel of communication between the two?
GG: So effectively UDKOSC is using the DLL bind feature in UDK – it’s calling a function in the .DLL, which is compiled from a bunch of OSC C++ classes Ross Bencina wrote that can send OSC. What you’re really doing is creating the OSC message in UnrealScript (or getting the bits you want to send via OSC to your client into UnrealScript) and calling the function in the .DLL to get it out as OSC. It’s UnrealScript coding.
DK: Sure, so you’re specifying that in Unreal – you have to explicitly build the message in UnrealScript to be communicated to the .DLL and then converted to OSC, which SuperCollider can listen for and pick up.
GG: That’s pretty much it, yeah. You set up an OSC responder node in SuperCollider, make sure it’s listening on the right port, and you’re away. I’ve made a Kismet node for doing this so I can feed in any values – floats, integers or what have you – via Kismet using this link to the .DLL.
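For the curious, the receiving end really is only a few lines. A minimal sketch in SuperCollider, using the modern OSCdef (sclang listens on port 57120 by default; the address /udk/impact and the message layout here are assumptions for illustration, reusing the metal impact sketch from above):

    // Hypothetical listener for values coming out of UDK via the OSC .DLL.
    OSCdef(\udkImpact, { |msg|
        var velocity = msg[1].asFloat; // first argument after the OSC address
        Synth(\metalImpact, [\freq, velocity.linexp(0.01, 1, 150, 1500)]);
    }, '/udk/impact');

    // quick local test, standing in for the Kismet node -> .DLL -> OSC path
    NetAddr("127.0.0.1", NetAddr.langPort).sendMsg('/udk/impact', 0.8);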
DK: Right, and as OSC…is OSC open source, or a standard of some sort?
GG: Yeah, Open Sound Control – it’s an open protocol.
DK: …and it’s a protocol usable by a ton of different applications – not just SuperCollider but also Pure Data, AudioMulch, all kinds of applications that use OSC to communicate information interoperably.
GG: Absolutely yeah, there’s tons of clients out there that can take in OSC that could be used for this application as well I guess.
DK: That’s beautiful to me, because now you can get information out of the game and bring it over to an application where you can use it to manipulate the audio. You’ve got some great examples up on your website at http://www.gatheral.co.uk – procedural weapon fire, synthesized creaking doors, the metal impacts you mentioned… a lot of really interesting stuff using SuperCollider, so you’re well on your way.
GG: Yeah I think so, it’s dependent on how good my synthesis skills get really. I’m still learning all of this stuff and getting better at SuperCollider, but I can see a time when I can get a lot more sounds in for a level in this way. And sounds that don’t synthesize very well can also be played back through SuperCollider, using the buffer. So even linear samples that would normally be played through the audio engine could be controlled in interesting ways. You could affect the rate of playback in a non-linear way, you could chop the sample up and playback only parts of it, or maybe combine tiny parts of the sample to make something that’s always new and interesting. I wouldn’t say that I’m aiming to get every sound in the game to play through synthesis in SuperCollider, but there’s certainly ways to get samples to play back in interesting ways too I think.
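As a sketch of that buffer idea – the file path and all ranges below are placeholders – chopping a sample into ever-new fragments might look like this:

    // Hypothetical non-linear playback of a recorded sample.
    b = Buffer.read(s, "/path/to/door_creak.wav"); // placeholder path

    (
    SynthDef(\chop, { |out = 0, buf, rate = 1, pos = 0, dur = 0.1, amp = 0.5|
        var sig = PlayBuf.ar(1, buf, rate * BufRateScale.kr(buf),
            startPos: pos * BufFrames.kr(buf));
        sig = sig * EnvGen.ar(Env.perc(0.01, dur), doneAction: 2);
        Out.ar(out, (sig * amp) ! 2);
    }).add;
    )

    // recombine tiny fragments of the sample so the result is always new
    (
    Routine({
        loop {
            Synth(\chop, [\buf, b, \pos, 1.0.rand, \rate, rrand(0.5, 2.0)]);
            rrand(0.05, 0.2).wait;
        }
    }).play;
    )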
DK: Absolutely, you have a lot more tools in the toolbox than a modern game audio engine for manipulating sounds. I guess I wonder, when you are outputting this OSC data, you’re not turning off the audio engine for Unreal, for UDK…you still have that audio engine available to do things with is that correct?
GG: That’s right yeah, so you could use a combination of both.
DK: We’ve seen a few attempts at tying different game engines to external audio implementations, whether it’s Leonard Paul with Pure Data tied to the Source engine or Arjen Schut doing something similar. There have been a few different applications of this: getting information out of a game and using it in an external application to do the audio. I think it’s trailblazing stuff, because as people involved in interactive audio we want the ability to use all of this real-time data and harness what’s happening in the game – kind of like you said in the beginning, to react dynamically to what’s going on.
GG: Yeah exactly, you know, it’s just a closer match between the audio and the visual for me. A fairly good example is the creaky door test I did: with very small movements of the door it will only make a few creaks here and there, and larger movements increase the rate of the creak playback. So rather than playing a sample, or a bunch of different samples, every time the door moves, it’s a much closer marriage between the audio and the visual, and I think that aids the immersion… for me anyway.
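Graham’s own code isn’t reproduced here, but the mapping he describes – door speed in, creak density out – might be sketched in SuperCollider like this, with the OSC address and all ranges as assumptions:

    // Hypothetical creak voice: impulse density follows door speed.
    (
    SynthDef(\creak, { |out = 0, speed = 0, amp = 0.4|
        var trig, sig;
        trig = Dust.ar(speed.lag(0.1).linlin(0, 1, 0, 40)); // faster door, more creaks
        sig = Ringz.ar(trig, TRand.ar(300, 900, trig), 0.05); // each creak a short resonant ping
        Out.ar(out, (sig * amp).tanh ! 2);
    }).add;
    )

    x = Synth(\creak);

    // assumed address; UDK would send the door's angular speed each tick
    OSCdef(\doorSpeed, { |msg| x.set(\speed, msg[1].asFloat) }, '/udk/door');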
DK: Absolutely – we’re not talking about canned one-shot samples standing in for a dynamic event like this door, for which there’s a great example on your website. Being able to react dynamically to it with sound definitely takes you closer to being there with it and reacting with it.
GG: Yeah, it just seems more real. I mean, like I said it’s dependent on how good you are at synthesizing sound; it’s possibly not as real sounding as a sample of a creaking door, but it’s a trade-off between that and how close the match is between what’s happening on the screen and what you’re hearing. I think that’s a worthy trade-off.
DK: Definitely, and I think our synthesis models are increasing in complexity and in how good they sound. It’s not too far a stretch to take wind, for example, and synthesize aspects of it successfully or believably. I think that as we move forward with this technology, if we focus our attention on trying to make things sound better, they will get better.
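Wind is a friendly case because a serviceable bed really is just filtered noise. A minimal sketch, with every range an assumption:

    // Hypothetical wind: band-passed noise with drifting center and gusts.
    (
    SynthDef(\wind, { |out = 0, strength = 0.5, amp = 0.3|
        var gust, sig;
        gust = LFNoise2.kr(0.3).range(0.4, 1.0) * strength; // slow amplitude gusting
        sig = BPF.ar(PinkNoise.ar,
            LFNoise2.kr(0.2).exprange(200, 800 + (strength * 2000)), // drifting center frequency
            0.3);
        Out.ar(out, (sig * gust * amp) ! 2);
    }).add;
    )

    w = Synth(\wind);
    w.set(\strength, 0.9); // e.g. driven by a weather value sent from the game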
GG: Yeah definitely! I was listening to the Game Audio Podcast #4 (Procedural Audio) and someone was talking about how this approach to procedural audio for games would free up a lot more time for Sound Designers to become sound synthesists in a way, and you know, why not? Having more time could certainly lead to much better synthesis and better sound using procedural audio.
DK: Absolutely, the specialization of a sound synthesist. And then, presets in synthesis are nothing new – imagine opening up your game audio toolbox and finding presets for “Creaky Wood Door” and “Creaky Metal Door”.
GG: Yeah, and half a dozen parameters that you can tweak.
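In SuperCollider terms such a preset could be nothing more than a named bundle of arguments handed to one door synth – a toy sketch, with every name hypothetical and reusing the creak sketch above:

    // Hypothetical presets: the same synth, different parameter bundles.
    (
    ~doorPresets = (
        creakyWoodDoor:  [\speed, 0, \amp, 0.4],  // in practice: resonance, pitch, damping...
        creakyMetalDoor: [\speed, 0, \amp, 0.5]
    );
    ~spawnDoor = { |name| Synth(\creak, ~doorPresets[name]) };
    )

    d = ~spawnDoor.(\creakyWoodDoor);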
DK: Exactly – feed in this data, get out this sound. I think that would be a good place to start, and if that’s not selling it for your game – if your game is all about creaky wood doors – maybe you need to go with sample content. But if there are three creaky doors in your game and you can get by with the “Creaky Doors” preset, then bingo, it wouldn’t be a bad thing. We certainly have enough to focus on in other areas of sound for games. It’s cool talking to you about it though because, again, I feel like there’s a big future for this synthesis side of things, with the ability to just turn on a game and have some things out of the box that just work. Whether you want to call them presets or things that come embedded, there’s a lot to worry about when it comes to game sound, and we could use a few freebies.
GG: Yeah, I think it would free up a lot of time to get much more detail into the game as well. I’m fairly new to all this so I don’t know what it’s like, but I wonder if there are a lot of collisions or other aspects of a game that don’t get the time devoted to them that they need for decent sound; freeing up the sound designers might allow more time to pursue those intricacies.
DK: One would hope so, for sure. I mean, there are always things that go unscored. At the same time, not every door is about the creaky hinge; you have to choose what’s important and put your time and effort into making sure that sings and sells. Whether that’s the Foley in a free-running platformer like Mirror’s Edge or the weapon sounds in the FPS of the year, you have to focus on what’s important for the gameplay. I think that comes above everything else, and then if you can add detail to the other aspects of the world that make sound, you give them the attention they deserve – or at least hope to.
GG: Yeah
DK: Yeah, dig. I like the direction you’re taking with the project and I can’t wait to see more. Every video you release is an awesome thing to behold – I can see the experimentation, I can see the direction you’re going, and it’s really cool stuff.
GG: That’s really cool, thanks very much. I should say that Rob Hamilton is going to host the code for UDKOSC on GitHub fairly soon, so if anyone wants to get their hands on it, that’s where it will be. [UPDATE: UDKOSC is HERE!]
DK: That’s gonna be great. What’s that going to include?
GG: It’s going to include some Unreal scripts, the .DLL, and basically everything you need to get going.
DK: Awesome, I’ll be sure to point people to that, and look forward to the experimentation that comes out of it.
GG: Cool.
That’s it for this time – thanks for peering into the crystal ball of future-forward interactive audio with Audio Implementation Greats.
Be sure to check out Graham’s excellent blog as well.
Here’s a round-up of related links:
Graham Gatheral: UDK+Supercollider: real-time synthesis for sound effects
Rob Hamilton: UDKOSC on GitHub – https://github.com/robertkhamilton/udkosc