In Part One we took a look at some of the fundamentals involved in orchestrating the sounds of destruction. We continue with another physics system design presented at last year's Austin Game Developers Conference, and then take a brief look at where these techniques may be headed.
UNLEASH THE KRAKEN
In Star Wars: The Force Unleashed we were working with two physics middleware packages: Havok Physics and Pixelux's Digital Molecular Matter (DMM). In addition to the simulation data that each provided, we also needed to manage the relationship between the two. While Havok has become a popular choice for runtime physics simulations, the use of DMM spoke to the core of materials and gave each object physical properties enabling – in addition to collisions – physically modeled dynamic fracturing and bending. In some ways tackling the sound for both systems was a monumental undertaking, but there was enough overlap to make the process more pleasure than pain.
Before jumping into the fray, I want to take a moment to echo a couple of things touched on in the companion to this article; specifically, that collaboration and iteration are the cornerstones of quality production when it comes to systems design. Collaboration, because the stakeholders involved usually include people across all disciplines: from programmers to sound designers, modelers to texture artists, build engineers to game designers. Iteration, because the initial vision is an approximation at best, and until things get moving it's difficult to know what shape things will eventually take.
While simultaneously reining in and letting loose the flow of creativity ebbing and flowing across the development team, there is nothing more important than the support of your colleagues. Leveraging the specialties of different people helps bring new ideas to situations in need of a solution. Your greatest asset as a team member is to recognize and respect the uniqueness of your co-workers and stay open to the constantly shifting requirements of the game. Good listening and better communication will improve the productivity of meetings and reinforce the fundamental desire of everyone – to craft the best player experience possible.
In part one of this two-part series on physics sounds in games, we'll look at some of the fundamental considerations when designing a system to play back different types of physics sounds. With the help of Kate Nelson from Volition, we'll dig deeper into the way Red Faction: Guerrilla handled the needs of its GeoMod 2.0 destruction system and peek behind the curtain of their development process.
SYMPHONY OF DESTRUCTION
Physics, the simple pleasure of “matter and its motion through spacetime”.
In games we've reached the point where the granularity of our physics simulations is inching closer and closer to a virtual model of reality. As we move away from the key-framed animations of objects breaking, and the content swaps of yesteryear, toward full-scale visual destruction throughout our virtual worlds, we continue to increase the dynamic ability of objects to break, bend, and collide in line with our experience of the physical world around us.
"It is just inherently fun to break things, and the bigger the thing is the more fun it is to break. It can be a stress relief or just give a feeling of power and control. We worked extremely hard to create a virtual sandbox for the player to create and destroy as they see fit; we just hope it gives them the same pure joy they had as a small child kicking over a tower of blocks." – Eric Arnold, Senior Developer at Volition (CBS News)
In a continued attempt to shed light on some of the best examples of technical sound design in the current generation, I'd like to call attention to several titles that have pushed the envelope when it comes to the art of ambience: the all-encompassing experience of "being there" in a game, where the sense of place is encapsulated in the sound of the environment. Stepping beyond the background din of a given location, we're moving toward the player's ability to affect the sound of a space through their interaction with it. This can be as simple as turning off a machine that had been emitting a constant loop of activity, or as complex as scaling the dynamics of a crowd depending on the current artificial intelligence activity in an area.
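That last idea, scaling a crowd with AI activity, can be reduced to a simple crossfade between ambience layers. The sketch below is a hypothetical illustration, not any shipped system: it assumes two crowd loops (a calm bed and a busy bed) whose gains are driven by the number of active AI agents in the area; the function name and the agent cap are my own.

```python
def crowd_layer_gains(active_agents, max_agents=20):
    """Crossfade two crowd ambience loops by AI activity.

    Returns (calm_gain, busy_gain): with no agents the calm loop plays
    at full volume; as agents approach max_agents the busy loop takes
    over. Activity is clamped to the 0..1 range before mixing.
    """
    t = min(max(active_agents / max_agents, 0.0), 1.0)
    return (1.0 - t, t)
```

In practice the blend would likely be smoothed over time to avoid audible jumps when agents spawn or despawn, but an equal-power or linear crossfade like this is the core of the technique.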
Despite leaving behind the memory restrictions of previous-generation consoles, hearing a single looping ambience throughout a level or area within a game continues to be common – making any distinct recurring elements of the background clearly identifiable when repeated. While these backgrounds, well designed and teeming with character, still have the potential to keep the player immersed in the game world, anyone who chooses this approach runs the risk of exposing the limitations of the technique to the player. Several best practices have evolved and taken root to combat repetition and further lend a sense of randomness to the sound of the game world.
In a 2004 article entitled "Tips for Game Sound Designers," Nick Peck made the case for ambient elements that vary in time, duration, and position in order to "generate 5.1 content without full bandwidth sources." This included the idea of a subtly shifting background ambience with randomly placed elements as a solution to the static looping soundscape, and it presented a way out of the confines of the locked loop. While this was likely not the first time such a solution was defined, the practice of ambient creation using these methodologies continues today, in step with advancements in available resources and the increasing creativity of audio toolsets.
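The "randomly placed elements" approach boils down to a scheduler: pick a one-shot, wait a random interval, place it at a random position, repeat. Here is a minimal sketch under my own assumptions (function name, gap range, and a simple left/right pan value standing in for full 3D placement are all illustrative):

```python
import random

def schedule_one_shots(duration_s, min_gap_s=4.0, max_gap_s=12.0, rng=None):
    """Generate (time, pan) pairs for ambient one-shots layered over a bed.

    Events are spaced by a random gap between min_gap_s and max_gap_s,
    and each gets a random stereo pan from -1.0 (left) to 1.0 (right).
    Passing a seeded random.Random makes the schedule reproducible.
    """
    rng = rng or random.Random()
    events, t = [], 0.0
    while True:
        t += rng.uniform(min_gap_s, max_gap_s)
        if t >= duration_s:
            break
        events.append((t, rng.uniform(-1.0, 1.0)))
    return events
```

A runtime system would do this continuously rather than precomputing a list, and would also randomize which sample plays and at what volume and pitch, but the spacing-plus-position logic is the same.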
THE LAND OF THE LIVING
The world of Oblivion can be bustling with movement and life or devoid of presence, depending on the circumstances. The feeling of “aliveness” is in no small part shaped by the rich dynamic ambient textures that have been carefully orchestrated by the Bethesda Softworks sound team. Audio Designer Marc Lambert provided some background on their ambient system in a developer diary shortly before launch:
“The team has put together a truly stunning landscape, complete with day/night cycles and dynamic weather. Covering so much ground — literally, in this case — with full audio detail would require a systematic approach, and this is where I really got a lot of help from our programmers and the Elder Scrolls Construction Set [in order to] specify a set of sounds for a defined geographic region of the game, give them time restrictions as well as weather parameters.” – Marc Lambert
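Lambert's description suggests a data-driven table: each ambience entry names a geographic region plus the time window and weather conditions under which it may play. The sketch below is my own reconstruction of that idea, not the actual Construction Set data model; all field names and sample data are illustrative assumptions.

```python
def pick_region_ambience(entries, region, hour, weather):
    """Return the names of ambience entries valid for a region,
    an hour of day (0-23), and a weather condition.

    Each entry is a dict with: name, region, start_hour, end_hour
    (half-open hour window), and a set of allowed weather states.
    """
    return [
        e["name"]
        for e in entries
        if e["region"] == region
        and e["start_hour"] <= hour < e["end_hour"]
        and weather in e["weather"]
    ]
```

A real system would also handle hour windows that wrap past midnight and crossfade between selections as conditions change; this shows only the core filtering step.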
At the Game Developers Conference Audio Boot Camp in 2006, Scott Selfon, Senior Audio Specialist at Microsoft, peeled back the layers of the onion for game audio newbies, showing attendees a Project Gotham Racing 3 debug build that allowed audio designers to visualize the parameters of sound propagation emanating from various points on the vehicle. Imagine fluorescent green wire-framed cones jutting out of mufflers, windows, and engine compartments, each representing a sound being played – all while playing the game – like a cross between your favorite game and the transition sequence from Tron. For the uninitiated this was nothing less than a revelation: behind the curtain of retail games lurks the debug underbelly that every developer comes to rely on in order to polish and dissect various systems. Needless to say, that moment left quite an impression on my impressionable mind.
Here I sit four years later, having been lucky enough to participate in the under-the-hood debugging of several titles, mouth still agape at the ability to visualize sounds and sound properties as a way to understand what is "going on" sound-wise at a given moment, and I continue to be fascinated by these environments created by hand for the sake of debugging. While this aspect of game audio may remain a closely held secret of developers leveraging internal pipelines and processes, a few screens have escaped which show off various functionality.
One area that has been gaining ground since the early days of EAX on the PC platform – and more recently through its omnipresence in audio middleware toolsets – is Reverb. By enhancing the sounds playing back in the game with reverberant information from the surrounding space, you can communicate to the player a truer approximation of "being there" and further immerse them in the game world. While we often take Reverb for granted in everyday life as something that helps us position ourselves in a space (the cavernous echo of an airport, the openness of a forest), it is continually giving us feedback about our surroundings, and is thus a critical part of the way we experience the world.
It has become standard practice to enable Reverb within a game level and apply a single preset algorithm to a subset of the sound mix. Many developers have taken this a step further and created Reverb regions that call up different Reverb presets based on the area in which the player is currently located. This allows the Reverb to change at predetermined locations using predefined Reverb settings. Furthermore, these presets have been extended to areas outside the player's region, so that a sound coming from a different region can use the region and settings of the sound's origin to get its reverberant information. Each of these scenarios is valid in an industry where you must carefully balance all of your resources, and where features must play to the strengths of your game design.
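The origin-based variant described above amounts to a lookup keyed on the emitting sound's region rather than the listener's. A minimal sketch, assuming a simple map of region name to preset name (both hypothetical; real middleware would route the sound to an auxiliary effect bus rather than return a string):

```python
def reverb_preset_for(sound_region, listener_region, presets):
    """Choose the Reverb preset for a playing sound.

    Prefer the preset of the region where the sound originates, so a
    shout from inside a cave still sounds cavernous when heard from
    outside; fall back to the listener's region if the origin region
    has no preset assigned.
    """
    return presets.get(sound_region, presets.get(listener_region))
```

The interesting design choice is the fallback: without one, sounds from unmapped regions would play dry and stick out of the mix.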