One area that has been gaining ground since the early days of EAX on the PC platform, and more recently through its omnipresence in audio middleware toolsets, is Reverb. By enhancing the sounds playing back in the game with reverberant information from the surrounding space, you can communicate to the player a truer approximation of “being there” and further immerse them in the game world. While we often take Reverb for granted in everyday life, it helps us position ourselves in a space (the cavernous echo of an airport, the openness of a forest); it continually gives us feedback on our surroundings and is thus a critical part of the way we experience the world.
It has become standard practice to enable Reverb within a single game level and apply a single preset algorithm to a subset of the sound mix. Many developers have taken this a step further and created Reverb regions that call different Reverb presets based on the area the player currently occupies. This allows the Reverb to change between predetermined locations using predefined settings. Furthermore, these presets have been extended to areas outside of the player’s region, so that a sound originating elsewhere can take its reverberant information from the region and settings at its point of origin. Each of these scenarios is valid in an industry where you must carefully balance all of your resources, and where features must play to the strengths of your game design.
While preset Reverb and Reverb regions have become a standard, and are a welcome addition to a Sound Designer’s toolbox, these techniques ignore the inherent physical characteristics of a space and are unable to dynamically react to reflections from its surfaces relative to the player. To bring Reverb closer to realtime expectations, level geometry could be referenced based on the originating position of a sound within a space, and that data could then be used to apply appropriate reflections.
One way of accomplishing this on the current generation of consoles is through the use of Ray Traced Convolution Reverb, a technique which snuck in under our noses in the Xbox 360 launch title Crackdown, from Realtime Worlds. “When we heard the results of our complex Reverb/Reflections/Convolution or ‘Audio-Shader’ system in Crackdown, we knew that we could make our gunfights sound like that, only in real-time! Because we are simulating true reflections on every 3D voice in the game, with the right content we could immerse the player in a way never before heard.” – Raymond Usher (Team Xbox)
So, what is realtime Reverb using ray tracing and convolution on a per-voice implementation?
Simply put, it is the idea that every sound that happens within the game world has spatially correct reverberation reflections applied to it. Let’s dig in a bit more…
Here’s a quick definition of Ray Tracing as it applies to physics calculation:
“In physics, ray tracing is a method for calculating the path of waves or particles through a system with regions of varying propagation velocity, absorption characteristics, and reflecting surfaces. Under these circumstances, wavefronts may bend, change direction, or reflect off surfaces, complicating analysis. Ray tracing solves the problem by repeatedly advancing idealized narrow beams called rays through the medium by discrete amounts. Simple problems can be analyzed by propagating a few rays using simple mathematics. More detailed analysis can be performed by using a computer to propagate many rays.” –Wikipedia
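To make the quoted definition concrete for game audio: cast rays outward from a sound source, advance them through the space in discrete steps, reflect them off the geometry, and record the path length of any ray that reaches the listener as a delay and an attenuation. Here is a minimal 2D sketch in Python of that idea, assuming a hypothetical empty rectangular room; the room dimensions, step size, capture radius, and simple 1/r attenuation are all illustrative assumptions, not details of Crackdown’s actual system.

```python
import math

SPEED_OF_SOUND = 343.0       # metres per second
ROOM_W, ROOM_H = 10.0, 6.0   # hypothetical rectangular room, metres

def trace_ray(source, listener, angle, step=0.05, max_dist=60.0, capture=0.3):
    """Advance one ray in discrete steps, mirror-reflecting off the walls.
    Returns the total path length if the ray passes within `capture` metres
    of the listener, else None."""
    x, y = source
    dx, dy = math.cos(angle), math.sin(angle)
    travelled = 0.0
    while travelled < max_dist:
        x += dx * step
        y += dy * step
        travelled += step
        # Reflect off the four walls of the room
        if x < 0.0 or x > ROOM_W:
            dx = -dx
            x = max(0.0, min(x, ROOM_W))
        if y < 0.0 or y > ROOM_H:
            dy = -dy
            y = max(0.0, min(y, ROOM_H))
        if math.hypot(x - listener[0], y - listener[1]) < capture:
            return travelled
    return None

def reflection_taps(source, listener, n_rays=360, sample_rate=48000):
    """Collect (delay_in_samples, gain) pairs for rays that reach the listener."""
    taps = []
    for i in range(n_rays):
        angle = 2.0 * math.pi * i / n_rays
        dist = trace_ray(source, listener, angle)
        if dist is not None:
            delay = int(dist / SPEED_OF_SOUND * sample_rate)
            gain = 1.0 / max(dist, 1.0)   # crude 1/r distance attenuation
            taps.append((delay, gain))
    return taps

taps = reflection_taps(source=(2.0, 3.0), listener=(8.0, 3.0))
```

The resulting (delay, gain) pairs describe an early-reflection pattern for that specific source and listener position; a production system would feed something like this into the reverb stage rather than using a fixed preset.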
On the other side of the coin you have the concept of convolution:
“In audio signal processing, convolution Reverb is a process for digitally simulating the reverberation of a physical or virtual space. It is based on the mathematical convolution operation, and uses a pre-recorded audio sample of the impulse response of the space being modeled. To apply the reverberation effect, the impulse-response recording is first stored in a digital signal-processing system. This is then convolved with the incoming audio signal to be processed.” –Wikipedia
So what you end up with is a pre-recorded impulse response of a space type (room, cave, outdoors) being modified, via convolution, by the ray-traced calculations of the surrounding physical space. The distance of an originating sound from the level geometry, as determined by the ray-traced calculation, gives the reflections their parameters, which in turn affect the impulse response sample of the space type.
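The convolution half of the equation can be illustrated in a few lines of Python with NumPy. Note the impulse response below is synthesised as exponentially decaying noise purely as a stand-in; in practice you would load a recorded IR of the space being modelled, and the RT60 and wet/dry values here are arbitrary assumptions.

```python
import numpy as np

SAMPLE_RATE = 48000

def make_impulse_response(rt60=0.5, sample_rate=SAMPLE_RATE, seed=0):
    """Synthesise a stand-in impulse response: exponentially decaying noise.
    A real pipeline would use a recorded IR of the modelled space."""
    rng = np.random.default_rng(seed)
    n = int(rt60 * sample_rate)
    t = np.arange(n) / sample_rate
    decay = 10.0 ** (-3.0 * t / rt60)   # reaches -60 dB at rt60 seconds
    return rng.standard_normal(n) * decay

def convolution_reverb(dry, ir, wet=0.3):
    """Convolve the dry signal with the impulse response and mix wet/dry."""
    wet_sig = np.convolve(dry, ir)
    out = np.zeros(len(wet_sig))
    out[:len(dry)] += (1.0 - wet) * dry
    out += wet * wet_sig / np.max(np.abs(wet_sig))
    return out

dry = np.zeros(SAMPLE_RATE // 10)
dry[0] = 1.0                           # a single click as the test signal
out = convolution_reverb(dry, make_impulse_response())
```

Feeding a click through the convolver simply plays back the (normalised) impulse response, which is a handy sanity check; a ray-traced system would additionally shape or delay that IR per voice based on the geometry around the sound’s origin.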
You can hear the results of this effort in every gunshot, explosion, physics object, and vehicle as you travel through the concrete jungle of Pacific City. As the player walks around, a passing car with its radio blaring can be heard positionally through the open window, in addition to its reflection off a nearby wall; meanwhile, footsteps and gunshots are continuously reverberated realistically according to the changing environmental characteristics. This lets each sound communicate a greater sense of location and dynamics at the moment it is triggered.
While the result may seem less impressive than the complexity of the implementation, the additive effect it has on the multitude of sounds happening throughout a game can add significant realism to the environment. It’s also worth noting that Crackdown 2 will be hitting shelves soon from Ruffian Games, along with Realtime Worlds’ new MMO All Points Bulletin. No word yet on whether either of these will continue to push realtime Reverb, but all ears will be on them for advancing the potential of this technique.
With a future for convolution Reverb implied in a recent press release for Audiokinetic’s Wwise toolset, and the brief outline of its use in Radical’s recent open world game Prototype, let’s hope the idea of realtime Reverb in some way, shape, or form continues to play an integral part in the next steps toward runtime spatialization.
See video example of Realtime Reverb Debug @ 4:15
Crackdown – Sound Study Footsteps