In the run-up to this month’s reverb theme, former contributor Damian Kastbauer suggested we re-run this article he put together discussing the game Crackdown for Xbox. The article may be two years old, but the content remains undeniably relevant. Never one to ignore a good suggestion, here we are…
One area that has been gaining ground since the early days of EAX on the PC platform, and more recently through its omnipresence in audio middleware toolsets, is reverb. By enhancing the sounds playing back in the game with reverberant information from the surrounding space, you can communicate to the player a truer approximation of “being there” and further immerse them in the game world. While we often take reverb for granted in everyday life, it helps us position ourselves in a space (the cavernous echo of an airport, the openness of a forest); it is continually giving us feedback on our surroundings, and is thus a critical part of the way we experience the world.
It may be premature to turn the focus of the series toward the future while we are deep in the throes of current-generation console development, but I think by now those of us submerged in creating ever-expanding soundscapes for games at times suffer under the burden of our limitations. Of course, it isn’t all bad: working within a set of constraints and creatively overcoming them can be as satisfying as coloring outside the lines.
I can’t help but feel a little sci-fi on occasion when I see some of the interesting work being done academically or within the DIY community. The explosion of information and accessibility of resources seems to give anyone with a mind, and the time, a bottomless well of potential that, when focused, can provide the maker with something to substantiate their creative vision. Whether it’s the current craze for Kinect hacking, a modular code-bending instrument, or the simple pleasures of circuit bending, there are communities of people working together to unlock the inherent abilities of our modern lifestyle devices. That’s not to say that every hack comes with a purpose; for some, the joy is in the deconstruction, the destruction, or the creation of something new.
One technique that keeps showing up in game audio is the pairing of an available game engine with an alternative audio engine not generally associated with game audio. Whether it’s the work of Leonard J. Paul using OSC (Open Sound Control) as a bridge to the HL2 Source engine and, more recently, Unity; Arjen Schut and his experiments with HL2 and Max/MSP; or this month’s featured Audio Implementation Greats spotlight, Graham Gatheral, I can’t help but see the resourcefulness of a few brave hearts boldly moving forward to fill a gap in the current functionality of today’s game audio engines.
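The plumbing behind that kind of engine-to-engine bridge is simpler than it sounds: an OSC message is just a UDP packet carrying a null-padded address string, a type-tag string, and big-endian arguments. The sketch below assumes nothing about any particular project; the address, port, and parameter names are hypothetical, and a real integration would use an OSC library rather than hand-rolled packets.

```python
import socket
import struct

def osc_message(address: str, value: float) -> bytes:
    """Build a minimal OSC packet carrying one float32 argument."""
    def pad(data: bytes) -> bytes:
        # OSC strings are null-terminated and padded out to a 4-byte boundary.
        return data + b"\x00" * (4 - len(data) % 4)
    return pad(address.encode()) + pad(b",f") + struct.pack(">f", value)

def send_param(address: str, value: float,
               host: str = "127.0.0.1", port: int = 9000) -> None:
    # Fire-and-forget UDP send, e.g. from game code to Max/MSP or Pd.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(osc_message(address, value), (host, port))
    sock.close()
```

Sending a game parameter like player speed each frame (`send_param("/player/speed", 4.2)`) is enough to drive a patch listening on the other end.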
[Written by Damian Kastbauer for Designing Sound]
If you talk to anyone in game audio today about successful tempo-synced synergy between music and sound effects, it won’t take long for your discussion to end up at REZ and the work of Tetsuya Mizuguchi: the quintessential poster boy for synesthesia in video games and a stunning example of overt, spontaneous interactive music creation.
But imagine for a moment stripping away the throbbing electronica pulse and replacing it with an organic, instrument-based soundtrack created by one of the foremost prodigies of curiously inspired noise-making bass thumpers, with sound effects locked to a groove-oriented metronome, and you’ve got the makings of a monster.
“Whereas many games today occupy free-formed soundtracks that respond entirely to the player, Mushroom Men is recorded to a beat. ‘You have sparks sparking in time to the music, and there are moments when the background music backs out and you hear the cricket cricking on beat,’ says Jimi Barker, another sound designer with Gl33k.

“Piersall continues, ‘You want to make it seem like the world plays to a beat.’”
-Austin Chronicle article: Making Mushrooms Dance
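The “world plays to a beat” idea quoted above boils down to quantizing sound-effect triggers to a tempo grid instead of firing them immediately. A minimal sketch of that scheduling math (the function name, BPM, and numbers are my own illustration, not anything from the Mushroom Men codebase):

```python
import math

def next_beat_time(now: float, bpm: float, subdivision: int = 1) -> float:
    """Return the next grid point (in seconds) at or after `now`.

    subdivision=1 quantizes to quarter notes, 2 to eighths, and so on.
    """
    grid = 60.0 / bpm / subdivision  # seconds per grid step
    return math.ceil(now / grid) * grid
```

A spark effect requested mid-beat simply waits for the next grid point: at 120 BPM a quarter-note grid is 0.5 s, so a request at t = 1.26 s fires at t = 1.5 s.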
When embarking on a sequel to one of the premier tactical shooters of the current generation, the audio team at Red Storm Entertainment, a division of Ubisoft Entertainment, knew they needed to continue shaping the player experience by further conveying the impression of a reactive and tangible world in Ghost Recon Advanced Warfighter 2 Multiplayer. With a constant focus on creating dynamic, real-time, multi-channel systems that communicate information using sound, they helped create a world that visually reacts to sound and further anchors the player in it. Their hard work won them the 2008 GANG “Innovation in Audio” award for the GRAW2 Multiplayer implementation of an audio system that turns sound waves into a physical force driving graphics and physics simulations at runtime.
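The award-winning notion of turning a sound wave into a physical force can be sketched in miniature: measure the short-term amplitude of an audio buffer each frame and scale it into a force applied along a direction. Everything here, the RMS measurement, the gain constant, the names, is my own illustrative assumption and not the GRAW2 implementation.

```python
import math

def rms(samples) -> float:
    """Root-mean-square amplitude of one audio buffer (floats in [-1, 1])."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def wind_force(samples, direction=(1.0, 0.0, 0.0), gain=50.0):
    """Map a wind-audio buffer's loudness to a force vector.

    `gain` is an arbitrary tuning constant; a real system would smooth
    the envelope over several frames to avoid jittering the physics.
    """
    strength = rms(samples) * gain
    return tuple(d * strength for d in direction)
```

Feeding each frame’s result into a physics engine as an external force is what makes foliage, cloth, or particles appear to respond to the audio itself.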
The Audio Implementation Greats series continues this month with an in-depth look at the technology and sound design methodologies that went into realizing their creative vision. We’re lucky to have original Audio Lead/Senior Engineer Jeff Wesevich and Audio Lead/Senior Sound Designer Justin Drust laying out the detailed history of what promises to be the most extensive overview yet of a widely discussed feat of implementation greatness: the Ghost Recon Advanced Warfighter 2 Multiplayer wind system.
Hang on to your hats, and catch the breeze!
This month’s Audio Implementation Greats series returns with the overarching goal of broadening the understanding of procedural audio and how it can be used to further interactivity in game audio. While this article is not specifically directed at any one game or technology, procedural techniques have been gaining traction lately, with high-profile uses such as the Spore procedural music system. After reading this article it should be obvious that real-time synthesis and procedural audio in games are things I have a great interest in, and this article should be taken more as a call to arms than as a critique of our current state of affairs. In the current generation of consoles we are deeply indebted to the trailblazers who have gone before us, and I feel that in acknowledgment of the history of game audio we must do what we can to build on past accomplishments and use everything at our disposal to increase the level of interactivity within our medium.
I can’t wait to hear what’s in store for us in the future!
Today’s article is also being released in conjunction with the Game Audio Podcast Episode 4: “Procedural Game Audio,” which brings to the table Andy Farnell, Francois Thibault, and David Thall, who all work in different areas of the gaming and audio industries. What starts out as a definition of procedural audio eventually ends up as speculation of sci-fi proportions. We discuss, among other things, the role of interactive audio and how it can be used to support immersion in games, how to employ already-existing systems to more accurately reflect underlying simulations, and suggestions for moving forward with procedural audio in the future. It is an episode that has been a long time in the making, and Anton and I both hope it will ignite a spark of inspiration for those of you interested in what procedural audio has to offer.
With that in mind, I encourage you to explore all of the different materials presented: this article, GAP#4, and the collection of procedural-related links at the LostChocolateLab blog.
I look forward to the continuing discussion!