It takes a strong game to weave its constituent arts together so seamlessly. Limbo so totally embodied itself that it found its way atop most “Best of…” lists the year of its release on XBLA. With the game properly ported and recently released on PC and PSN, DesigningSound.org took some time to catch up with Martin Stig Andersen.
When I saw Martin speak this spring at GDC, I was struck by how well formed his concept of sound for Limbo was, and by how completely his formative years seemed to support his contributions to the soundscape. If you have played through even a section of the game, you will know that this was no small feat: it’s not every sound designer who could so inextricably link the flickering black-and-white images to abstract impressions of sound.
This is a story that follows a complete trajectory, from his university days learning and experimenting with electroacoustic music, acousmatic music, and soundscapes, through the development and application of the interactive audio gestures that help bring the action on screen to life.
Read on for further insight…
DK: How did your education prepare you for interactive media?
MSA: My compositional studies at conservatory and university were very much biased towards the artistic side. At City University in London, where I studied electroacoustic composition, the general agenda was to discuss “whys” rather than “hows”; for example, why a specific sound or sound structure evokes certain associations rather than how it was created. So, as far as technology is concerned I’m pretty much self-taught. On the aesthetic side, at university we dealt with all kinds of electroacoustic music, including interactive music combining live performance (voice or instrument) and electronics. However, unlike games, in which interaction happens directly between the player and the game, in interactive concert music the interaction is something happening between the performer(s) on stage and an interactive playback system. In such a situation the listener may not grasp the interactivity of a composition at all, which of course posed a lot of questions, like whether or not it’s important for the audience to actually experience such interactivity. Personally I’ve never composed a piece that I wanted to be perceived as interactive per se. Even with Limbo, which is natively interactive qua being a video game, I’d say I did my best to avoid drawing the players’ awareness towards the idea of interactivity. This is because the game isn’t really interactive beyond a basic moment-to-moment level where the player maneuvers the protagonist and interacts with the physics of the environment, while on the larger scale the game remains a fixed, linear experience.
In regards to interactive media, I guess the most important skill I took with me from university is what you could call temporal awareness. By studying the perception of form and structure in music and audiovisuals I acquired an understanding of the various temporalities inhabiting not only sound but also visuals, and learned how to match and contrast such temporalities creatively in order to make sound contribute to the overall flow, or even structure, of an audiovisual experience. Working with Limbo I identified various temporalities inhabiting different types of gameplay, and was able to respond to those while at the same time building larger-scale sound structures encompassing several gameplay moments, each featuring different temporalities. I even consider there to be a tangible global sound structure contributing to the wholeness of Limbo, although few people have actually commented on that. Studying acousmatic music and soundscape composition in general has also served as an important inspiration in my audiovisual and interactive work, although the concepts associated with these genres are not directly applicable to audiovisual media.
DK: When did you first become aware of Acousmatic principles and Soundscapes and their creation?
MSA: I can’t remember exactly when, but sometime during my studies at conservatory in Denmark. Yet in Denmark, at least back then, there were a lot of misconceptions regarding those terms, and it wasn’t until joining City University in London in 2001 that I came to understand the essence of the ideas. Getting acquainted with the aesthetic foundation of acousmatic composition was undoubtedly the biggest revelation in my musical career, and I somewhat felt like I’d been ported 25 years forward in time. It’s hard to think about the time before that, and I wish I’d discovered the ideas earlier on, or that I’d have had the imagination to come up with them myself! I remember when playing the piano in my childhood I had this abstract inner vision of pulling the keys on the keyboard apart and entering the sound, like I wanted to be inside sound itself. Today I haven’t used a keyboard for over ten years, and I’ve learned to form sound as if it were a piece of clay. Prior to joining City University I’d already learned many of the tools that are often associated with electroacoustic composition, such as Max/MSP and AudioSculpt, and I also composed stuff that was acousmatic in nature; I just didn’t have the bigger perspective at that time.
DK: What were some of the inspirations you took from these ideas and applied?
MSA: What I found interesting in relation to audiovisual media was that soundscape and acousmatic music together embrace the entire continuum between representational and abstract sound, in this way dismissing the traditional dividing line between sound design and music. By deploying such approaches in audiovisual work you can make seamless transitions between realism and abstraction, and make sound travel smoothly between the diegetic and non-diegetic space of a represented world. For me it has a much bigger psychological impact when you turn a naturalistic soundscape into abstraction by making your sound effects play as “music” rather than adding some traditional background music. Moreover, making your “music” emerge from the environment is likely to make the audience more forgiving towards it, since they’ll accept it as stemming, however abstractly, from the environment. This feature attains special relevance in video games, where the player may get stuck from time to time and the audio elements need to be flexible in terms of duration. It’s important to note that although acousmatic composition does have certain potential in relation to audiovisual work, it doesn’t really make sense to use the term “acousmatic” in this context; not least because in the context of film the term has merely come to denote diegetic sounds that are off screen.
DK: Had you worked with interactive audio prior to games?
MSA: I did a few projects utilizing interactive audio, including electroacoustic theater performances, where I created sound systems that reacted to noises made by the actors on stage, and pieces mixing instruments and interactive audio. Yet I haven’t created much that the listener could interact with directly. I think this comes down to the point that what I like to do with interactive media isn’t interactive per se but still more of a structured experience, and so allowing listeners to interact directly with a composition would probably give them the impression that the music is supposed to be interactive, and that they should be able to influence the course of the music itself. Video games, at least those that are rule-based, are great in that the player doesn’t expect them to be interactive per se but contrarily accepts there to be an authored, linear path through them. Game design can communicate clearly to the player what to expect in terms of interactivity, and stay true to that. I found that more difficult outside the field of games, where the lack of rules and conventions often causes the audience to be preoccupied with figuring out and interpreting the interactive system rather than engaging themselves in the experience.
DK: Was Limbo your first commercial game?
MSA: Yes, Limbo was my first game, although I consider it more as an artistic venture.
DK: While the outcome has a well-defined audio aesthetic, how clear was the direction you were given for creating the soundscape for the game?
MSA: I’ve been lucky to work with a game director who’s as sensitive to sound as to any other aspect of a game. Before I joined the production, which was rather late, the game director Arnt Jensen had already been thinking a lot about sound. For example, he wanted to give prominence to the boy’s Foley sounds, to emphasize silence and subtlety in the ambiences, and to avoid music that would manipulate the emotions of the player. On a more general level he wanted the sound to suggest a distanced, enveloped, and secret world. Those ideas corresponded very much to my own from watching the original concept trailer, and eventually Jensen entrusted me with the task of developing the entire sound-world for the game. Based on mutual trust, I think we managed to form a criterion of success where both of us were fully content with the sound.
DK: How important do you feel the sound processing involved in creating a sound is to its final outcome?
MSA: Sound processing was essential in defining the sound of Limbo. Inspired by the bleak and grainy b/w imagery, I ventured into using obsolete analogue equipment, and by running all sounds through old wire recorders and tape recorders they came to echo a distant past. Even sounds that were originally heavily processed using contemporary digital techniques such as time stretch and phase vocoding acquired this quality, as the analogue transformation helped to eliminate the digital byproducts of such processes. Using analogue equipment also enhanced the dynamic contrast between the sounds, since louder sounds would naturally get more distorted than softer ones. The ear is really sensitive to such nuances, and interprets the more distorted sounds as being louder than the less distorted ones. Accordingly, I never normalize sounds but always keep them at the level that I imagine is the maximum they will play back at in the game. This allows me to enhance the dynamic contrast by running the sounds through various analogue equipment with fixed settings, so that the result varies in accordance with the sounds’ amplitudes, and with tools such as WaveLab I can even batch process loads of sounds through such equipment. I find the approach very suitable for games, where you don’t have the fixed timeline of linear media but rather have to adapt to the pace of each individual player, and correspondingly I would say one has to be very cautious with dynamics. Here the analogue sound processing helped specific sounds to be loud without actually being loud, in this way minimizing the risk of listening fatigue should the player get stuck in the louder areas of the game. For example, due to excessive analogue distortion, the foundry in Limbo sounds almost like a continuous explosion although the levels are actually quite soft. What I also discovered was that running all the sound through the old analogue equipment really helped the sounds sit together in the final mix. It worked like glue.
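To make the fixed-settings idea concrete, here is a minimal Python sketch of level-dependent saturation: a waveshaper whose curve never adapts to the input, applied to un-normalized sounds so that louder material distorts more. The tanh curve and drive value are illustrative stand-ins, not a model of Andersen’s actual wire and tape recorders.

```python
import numpy as np

def fixed_saturation(x: np.ndarray, drive: float = 4.0) -> np.ndarray:
    """Soft-clip a signal at fixed settings (no normalization).

    Because the curve never adapts to the input, a quiet signal passes
    almost cleanly while a hot signal is audibly squashed, so the
    distortion itself becomes a loudness cue.
    """
    return np.tanh(drive * x) / np.tanh(drive)

# Two "sounds" kept at their intended in-game playback levels:
t = np.linspace(0, 1, 48000, endpoint=False)
quiet = 0.1 * np.sin(2 * np.pi * 110 * t)
loud = 0.9 * np.sin(2 * np.pi * 110 * t)

quiet_out = fixed_saturation(quiet)  # nearly linear: little added harmonic content
loud_out = fixed_saturation(loud)    # heavily saturated: reads as "louder"
```

Because the settings are fixed rather than adapted per file, the same chain can be batch-applied to a whole library, and each sound’s stored level determines how much character it picks up.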
DK: How did the implementation pipeline affect the way you designed sounds?
MSA: From the beginning I wanted to use the game’s sound system as a compositional tool. I didn’t like the idea of working in the linear environment of a sequencer and then having to squeeze the result into the nonlinear environment of the game. In other words, I wanted to work from within the game and achieve synergy between the creation and implementation of sound. For example, I would try out the sound in the game between each iteration of sound processing, and even get ideas for sounds from the implementation work. This approach essentially enabled me to work empirically, as I usually do when composing electroacoustic music, and was very much in line with Playdead’s overall ambition of making decisions based on percept rather than concept. Besides working in Wwise I also did some of the implementation in the game engine, which primarily involved setting up the triggers and RTPC controllers that conduct the ambience and overall mix. Again, this really helped me to try things out quickly and make adjustments; in other words, to work on core implementation and sound design simultaneously. Working on the implementation side also enabled me to make appropriate sound recordings, as I would have an idea of how the sounds would eventually be integrated into the game. For more advanced implementation requiring in-depth knowledge of the game engine, a team member took care of it on the fly, which was crucial to the final result.
DK: Coming from a background in sound with access and exposure to tools such as Pure Data, Max/MSP, SuperCollider, etc., how enabled did you feel, in contrast, using the Wwise toolset with the game providing the interactive component?
MSA: For Limbo, Wwise pretty much had what I needed. Yet in contrast to a tool such as Max/MSP, Wwise is almost exclusively reliant on externally generated data, meaning that it doesn’t allow for much internal generation and modification of data. So if you want to generate specific data to control the sound, or to modify data sent from the game, you’ll need external tools. For example, in Limbo I needed to temporally smooth out some RTPC values that I received from the game, and had to have a programmer create a tool for this.
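As an illustration of the kind of external smoothing he describes, here is a minimal sketch of a one-pole low-pass filter applied to raw per-frame values before they would drive an RTPC. The class name, time constant, and frame rate are assumptions made for the example; Playdead’s actual tool isn’t public.

```python
import math

class RtpcSmoother:
    """One-pole low-pass (exponential moving average) for control data."""

    def __init__(self, time_constant_s: float, frame_dt: float = 1.0 / 60.0):
        # Fraction of the remaining distance covered each frame.
        self.alpha = 1.0 - math.exp(-frame_dt / time_constant_s)
        self.value = 0.0

    def update(self, raw: float) -> float:
        # Move part of the way toward the raw value each frame, turning
        # jittery per-frame game data into a continuous control curve.
        self.value += self.alpha * (raw - self.value)
        return self.value

smoother = RtpcSmoother(time_constant_s=0.5)
for raw in (0.0, 1.0, 1.0, 0.2, 0.9):   # jumpy values from the game
    smoothed = smoother.update(raw)      # what would actually drive the RTPC
```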
DK: What software, processes, and recording techniques did you use to help define the sound of Limbo?
MSA: Besides the antique recorders and filters, I also used some state-of-the-art software tools, many of which are based on the Fast Fourier Transform (FFT). The FFT essentially translates a sound’s waveform into the frequency domain, giving you direct access to modify the sound’s spectrum before making the inverse transformation back to the time domain. Having access to the spectrum of a sound and its movement in time allows for all kinds of sound surgery, such as filtering out partials, freezing sound, or multiplying spectra. IRCAM’s AudioSculpt is a great program for doing those kinds of things, but I also use an increasing number of mainstream plug-ins that run FFT under the hood. Another related technique that I used a lot in Limbo is convolution. Not convolution reverb, but more like filters where you convolve the spectrum of one sound with that of another, a feature available in Peak Pro for example. Both techniques are quite demanding in terms of tweaking and handling the outcome, for example, to get rid of the associated side effects. I really dislike it when you can hear traces of specific sound processing, or recognize the techniques or actual software involved. Often the spectral outcome is unevenly balanced, with excessive fluctuation in different parts of the spectrum, and I use quite advanced processors such as dynamic EQs, spectral interpolators, and restoration tools to tame the results and to extract the parts that I like. These processes were great for creating the diffuse components of the ambiences in Limbo.
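For readers curious about the spectral multiplication Andersen mentions, the sketch below shows the basic bin-wise operation using SciPy’s STFT: one sound’s magnitude spectrum shapes another’s, frame by frame. The frame size and the choice to keep the second sound’s phase are arbitrary assumptions here; commercial tools wrap far more artifact control around this core.

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_multiply(a: np.ndarray, b: np.ndarray,
                      fs: int = 48000, nperseg: int = 2048) -> np.ndarray:
    """Bin-wise spectral multiplication of two mono signals."""
    n = min(len(a), len(b))
    _, _, A = stft(a[:n], fs, nperseg=nperseg)
    _, _, B = stft(b[:n], fs, nperseg=nperseg)
    # Multiply magnitudes bin by bin and keep b's phase: only frequencies
    # present in *both* sounds survive, acting like a moving filter.
    Y = np.abs(A) * np.abs(B) * np.exp(1j * np.angle(B))
    _, y = istft(Y, fs, nperseg=nperseg)
    return y

# e.g. shape a noisy texture with the moving spectrum of a voice recording:
# diffuse_bed = spectral_multiply(voice, noise)
```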
DK: Can you talk a bit about the mix, specifically in relation to the state-based system you mentioned in your talk?
MSA: In order to make the sound in Limbo come alive, I opted for an active mixing approach where I would continuously make decisions about the mix and change it in accordance with dramatic interpretations of the game. We took inspiration from film, where sound helps to focus the attention of the audience by emphasizing important actions while ignoring those of lesser importance. Accordingly, in Limbo, prominence is given to approaching obstacles and environments even before they’re revealed visually, and as you pass them they may be silenced entirely, although they may still be in the frame, thereby revealing new obstacles or environments to come. Besides contributing to the foreboding atmosphere of Limbo, such mixing minimizes the risk of sounds becoming annoying to the player, simply because the sounds only play as long as they’re important to the actual gameplay. In some cases I even use state-based mixing to make swift shifts between entire soundscapes, for instance from a soft dreamscape to brutal realism, influencing also the levels of the protagonist’s Foley sounds. The Foley sounds of the boy, which are more or less limited to footstep sounds, were also subjected to some quite sophisticated passive mixing strategies. For example, the footstep sounds start attenuating gradually after the boy has been moving continuously for a short period of time, and regain amplitude when he’s standing still. This helps to establish the boy as being relatively loud in the mix without actually having to be loud all the time, and serves to give the impression that the surroundings are very soft. Another example is related to ground materials. When the boy has been running on a specific kind of ground material for some time, the footstep sounds start attenuating until he steps onto a new material, whereby the amplitude gets a small boost. Using several such strategies, the amplitude of the boy’s footsteps varies by about 15 dB in total, without taking distance-based attenuation and active mixing into account, and the result is a continuous variation of footstep levels which I guess most players won’t even notice, but which nevertheless brings life to the boy.
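The passive footstep strategies can be pictured as a small per-frame gain controller. The sketch below is only a guess at the behaviour Andersen outlines, with ducking after a stretch of continuous running, recovery when standing still, and a boost on material changes, all within roughly the 15 dB window he cites; the specific rates and thresholds are invented for illustration.

```python
class FootstepMixer:
    """Per-frame gain controller for the passive footstep strategies."""

    DUCK_DELAY_S = 2.0        # continuous movement before ducking starts
    DUCK_RATE_DB = 3.0        # dB per second of gradual attenuation
    RECOVER_RATE_DB = 10.0    # dB per second when standing still
    MATERIAL_BOOST_DB = 4.0   # small boost when stepping onto new ground
    FLOOR_DB = -15.0          # roughly the total range Andersen mentions

    def __init__(self):
        self.gain_db = 0.0
        self.material = None
        self.run_time = 0.0

    def update(self, dt: float, moving: bool, material: str) -> float:
        if material != self.material:              # new ground material
            self.material = material
            self.gain_db = min(self.gain_db + self.MATERIAL_BOOST_DB, 0.0)
        if moving:
            self.run_time += dt
            if self.run_time > self.DUCK_DELAY_S:  # duck only after a while
                self.gain_db = max(self.gain_db - self.DUCK_RATE_DB * dt,
                                   self.FLOOR_DB)
        else:                                      # standing still: recover
            self.run_time = 0.0
            self.gain_db = min(self.gain_db + self.RECOVER_RATE_DB * dt, 0.0)
        return self.gain_db

mixer = FootstepMixer()
gain_db = mixer.update(dt=1.0 / 60.0, moving=True, material="grass")
```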
A special thanks to Martin for taking the time to share his experiences with the community!