
Posted on Nov 26, 2009

Rob Bridgett Special: Prototype [Exclusive Interview]

Here is the final interview with Rob Bridgett, about Prototype, talking about the sound of the cinematics, the mixing process, and more!

Designing Sound: First of all, tell us about your contribution to Prototype. What did you do for the sound of the game?

Rob Bridgett: In late 2007, the audio director for the project, Scott Morgan, asked if I could get involved and help out with the game mid-production. Cory Hawthorne was working as Technical Sound Designer and Implementer on the project which meant I had the opportunity to cover two areas on the game, one was as cinematics sound designer and implementer and the other was as game mixer. In terms of the first role, I was responsible for the sound effects, Foley, dialogue editing and mix of all the cut scenes in the game. The music was edited and supervised by the sound director for the project, Scott Morgan, and once all the components were assembled I would provide a mix automation pass before the finished file went into the game.

The second role, that of mixer, came into play only during the post-production sound beta phase of the project’s development, in which Scott and I spent four weeks mixing the entire game in Radical’s 7.1 mix room. I always welcome the opportunity to help out on projects like this, as it offers a break from being an audio director and allows a lot more time to concentrate fully on one or two areas in particular.

DS: Can you tell us something about the process for the cinematics sound production?

RB: Sure. I’ll talk you through a typical set-up and process that I use on cinematics. The actual work on the cut-scenes starts fairly early in production. Once a script has been approved for production, placeholder dialogue is recorded here; for this we typically just use members of the team to read out the dialogue. We record this, edit it and give those files to the animation team so that they can begin their storyboarding process. They use these placeholder files to come up with very rough timings and shot lists, which really gets the whole process kick-started. Usually during this time, the actors for the cinematics are cast and recorded, which means that after a couple of months you have the real dialogue takes to work with and the animation team can start being more accurate with their timings.

Up until that stage, Scott Morgan, the game’s audio director, had pretty much run the process; I was at that time finishing up the 50 Cent game. I rolled onto the project in January 2008, at which point Scott had all the dialogue recorded and the cinematics team had some very rough AVI files of the various cinematic scenes, so this was a good time to start building up the sound elements and structural foundations of the cinematics.

The first thing I do is create a separate Nuendo session for each scene. I typically do this from a cinematics template that I have created in Nuendo, which is basically an empty project with pre-assigned tracks and folder tracks:

  • Dialogue Folder Track containing six mono tracks all assigned to CENTRE only
  • SFX Folder Track containing five mono tracks all assigned to CENTRE only, plus five stereo LR tracks
  • Foley Folder Track containing ten mono tracks all assigned to CENTRE only
  • Ambience Folder Track containing four stereo tracks all panned LR and slightly LS RS
  • Music Folder Track containing four LR stereo tracks and two 5.1 music tracks
  • LFE Folder Track containing four mono tracks all assigned to LFE only

These templates give the whole project a quick structure that is easy to navigate and expand upon. I recommend this for anyone getting into a new cinematics audio project, as getting organized at the earliest stages like this saves tons of time later on.
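Purely as an illustration (this is not Radical's actual tooling, and the track names and widths are taken straight from the list above), the template layout can be captured as a small data structure, which makes it easy to sanity-check a freshly created session:

```python
# Hypothetical sketch of the cinematics session template described above.
# Each folder track maps to a list of (width, routing) pairs; the routing
# labels are illustrative shorthand, not real Nuendo identifiers.
TEMPLATE = {
    "Dialogue": [("mono", "C")] * 6,
    "SFX":      [("mono", "C")] * 5 + [("stereo", "LR")] * 5,
    "Foley":    [("mono", "C")] * 10,
    "Ambience": [("stereo", "LR+LsRs")] * 4,
    "Music":    [("stereo", "LR")] * 4 + [("5.1", "full")] * 2,
    "LFE":      [("mono", "LFE")] * 4,
}

def track_count(template):
    """Total number of tracks an empty session built from the template holds."""
    return sum(len(tracks) for tracks in template.values())
```

Counting the list above, an empty session created this way starts with six folder tracks and forty audio tracks before any scene-specific additions.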

As we had the dialogue ready and recorded, one of the first tasks for me was to go through all the scenes and ‘worldize’ the voices – rather than re-recording, I used Altiverb VST for each different room or physical space depicted in the scenes. The room reverb was panned mainly to the fronts (LCR) but also to the rears, to give the sense of the listener being inside the room, surrounded by the reflections off the walls. This is quite a subtle effect, yet it adds a great deal of realism to dialogue that was recorded close-miked in an ADR room. Further to this, some low end was also rolled off the dialogue to simulate more of a distant-mic / location-sound feel. Having done this treatment on each of the cinematic scenes, it was time to move on to the second phase of building up the sound: adding roomtone and background ambience.
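The low-end rolloff mentioned here was of course done with studio EQ, not code, but as a rough sketch of the idea, a first-order high-pass filter is the simplest way to attenuate content below a cutoff frequency (the cutoff and sample rate below are arbitrary example values):

```python
import math

# Rough one-pole high-pass sketch of the "roll some low end off the
# dialogue" step; purely illustrative of the technique, not the actual tool.
def highpass(samples, cutoff_hz, sample_rate=48000):
    """First-order high-pass filter: attenuates content below cutoff_hz."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out = []
    prev_x = prev_y = 0.0
    for x in samples:
        y = alpha * (prev_y + x - prev_x)  # passes changes, bleeds off DC
        out.append(y)
        prev_x, prev_y = x, y
    return out
```

Fed a constant (pure low-frequency, i.e. DC) signal, the output decays toward zero, which is exactly the "thin out the lows" behaviour being described.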

For each scene and for each cut, I added roomtone that I had recorded in the various spaces at Radical. There is a lot of air conditioning at Radical, and it makes for some useful roomtone source recordings; this meant I had a ready-to-use library of roomtone beds which I could quickly edit into the scenes. For each camera cut, the roomtone volumes were changed to ensure they corresponded to the listener position and the point of view of the characters.

Scott Morgan had also been on location in New York to gather exterior ambience for the game, and it is these recordings that I was able to quickly edit together for any of the exterior scenes. In fact, in the end I mainly relied on the actual background ambience file from the game for these ambience beds, as this meant there was continuity between cut-scenes and game. In some scenes we also let the ambience present in the game continue throughout the cut-scene, in order to maintain complete continuity from game to cut-scene and back to game again; for these instances, it just meant muting the ambience folder track on export.

With all the reverb and backgrounds built up, some effort could be put into sound effects design. An initial pass was done concentrating just on big FX moments like explosions or body impacts, partly because the movies were low in detail at this point and it could not yet be seen what materials or detail would be present in the final versions. All of the cut-scenes in the game used the in-game engine, so the full detail could only be seen at run-time in the game.

For the Foley in the cinematics we contracted Sharpe Sound here in Vancouver to cover the movements for all our cinematic scenes. The Foley was returned to us unedited, so the next phase of my work was to edit all the Foley and premix it so it sat well with the other effects and backgrounds. During all this work, the movies were constantly being iterated upon, receiving a lot of editing work; often large sequences would be re-cut or even deleted entirely. This meant lots of rounds of re-syncing dialogue and effects to the latest cuts of the movies.

By the time we reached Alpha and the work on the cut-scenes was locked down, I had two weeks in which to complete the final effects pass and mix on all the movies, matching the VO, music and effects levels across all of them. There were around 30 movies in total, around 45 minutes of in-game-rendered cut-scenes. Scott and I then reviewed all the cut-scenes with the rest of the cinematics team and made notes of a few final tweaks before sign-off.

DS: Did you record all the sound effects for the cinematics? What were the sources?

RB: We do have our own sound library here at Radical, in which we have archived many of our sources from other games such as Crash and Scarface. This library is invaluable for quickly getting sounds that I know will work. I think the key to good, fast work is knowing your library really well and being able to access exactly what you want quickly. The Foley, as I said, was all recorded fresh for this project, and the majority of the effects (the bodyfalls, punches and transformation sounds) were also recorded here specifically for the project.

Perhaps the best example of some of the cinematic sound design we’ve been talking about is the ‘intro cinematic’ for the game, which can be viewed online here…

DS: How about the mix for the game? Can you tell us something about the process involved for that?

RB: We have a proprietary run-time mixing system, attached to Mackie hardware control surfaces, that enables us to do this: the same system used on Scarface, which I have talked about in depth elsewhere. For the mix we spent a total of four weeks, and this time was broken down into a few different phases.

The first week of the mix was probably the most critical, because it was where we set the overall output levels of the game. The first thing we did was to bring the output of all the channels down by around 6 dB. This is because when we started listening and mixing at a reference listening level of 79 dB, the game was incredibly loud. What tends to happen during development is that sounds are turned up and up so that you can hear them while you are populating the game with them; this approach is fine in development, but at some point you have to reset the whole board and start from scratch again. This is what we did in the first few days of the mix. Getting the dialogue to a decent level and then ‘mixing around’ it is the approach we have been taking. So, once we’ve set our dialogue level, the music level is determined in relation to that, as are the effects and so on. Intelligibility of dialogue is really at the centre of most mixes, to be honest; I still hear so many games today where you cannot actually hear what is being said by certain characters because guns are being fired, etc. This is perhaps one of the many areas where the styles of mixing in cinema are an influence.
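As a minimal sketch of that global trim (the bus names and starting levels below are invented; only the roughly -6 dB figure comes from the interview), this is the arithmetic involved: fader levels are offset in dB, and a dB change maps to a linear amplitude multiplier via 10^(dB/20):

```python
def db_to_linear(db):
    """Convert a decibel gain change to a linear amplitude multiplier."""
    return 10.0 ** (db / 20.0)

def trim_all_buses(levels_db, trim_db=-6.0):
    """Apply a global trim (around -6 dB in the interview) to every bus
    fader before re-balancing the whole mix around the dialogue level."""
    return {bus: level + trim_db for bus, level in levels_db.items()}
```

A -6 dB trim corresponds to roughly halving the signal amplitude, which is why the game immediately stops being "incredibly loud" at reference level and leaves headroom to mix around the dialogue.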

Anyway, once the overall listening level is set, it is a matter of playing through the entire game, identifying key mix moments, mainly dialogue- or mission-related but often tied to some in-game feature or effect, like the thermal vision in Prototype, for which we pitch down the ambience and add a low-pass filter to many of the sounds in the game. Similarly for Infected Vision, where all sounds are given a muted treatment except for the Infected, who remain clear and unprocessed during this mode. We also tweak every individual sound to make sure it is not too quiet or too loud. This is what takes the majority of the time on a game mix (up to two weeks in this case), all the while keeping the overall listening level tolerable for the player at home. Another major thing is to maintain the levels of sound, particularly dialogue and music, throughout cinematics and gameplay so that there isn’t a jarring disconnect between the two modes of exposition.
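The thermal-vision treatment described above is a classic mix-snapshot pattern: a named game state overlays parameter overrides on the base mixer state. The sketch below is not Radical's system; the bus names, gain, pitch and filter values are made up for illustration:

```python
# Illustrative mix-snapshot sketch: entering "thermal vision" ducks and
# pitches down the ambience and low-passes other sounds. All names and
# numbers here are invented, not values from the actual game.
BASE = {
    "ambience": {"gain_db": 0.0, "pitch": 1.0, "lpf_hz": None},
    "sfx":      {"gain_db": 0.0, "pitch": 1.0, "lpf_hz": None},
}

SNAPSHOTS = {
    "thermal_vision": {
        "ambience": {"gain_db": -9.0, "pitch": 0.8, "lpf_hz": 1200},
        "sfx":      {"lpf_hz": 1200},
    },
}

def apply_snapshot(base, snapshots, name):
    """Overlay a named snapshot's overrides on a copy of the base state."""
    state = {bus: dict(params) for bus, params in base.items()}
    for bus, overrides in snapshots.get(name, {}).items():
        state[bus].update(overrides)
    return state
```

Because the snapshot only stores overrides, leaving the mode simply means reverting to the untouched base state, which is what makes this pattern cheap to audition during a mix.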

In the final week of mixing we tested how the game sounded on various mixdown configurations, such as a stereo TV, and on all the various output configurations of the different consoles. The mix is tweaked at this point to ensure that users who listen on a TV set alone are able to hear what they should be hearing; this usually takes the form of a few minor tweaks to music and dialogue levels, but nothing significant enough to adversely affect the surround mix.
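The interview doesn't say how the stereo mixdown was generated, but as context for what "checking the stereo TV fold-down" involves, a common convention mixes the centre and surround channels into left and right at -3 dB (a factor of about 0.707) and drops the LFE:

```python
# Rough Lo/Ro-style stereo fold-down of a 5.1 frame, using the common
# -3 dB (0.707) coefficients; this is a generic convention, not the
# specific downmix any of the consoles mentioned actually performs.
def fold_down_51_to_stereo(frame):
    """frame = (L, R, C, LFE, Ls, Rs); the LFE channel is discarded."""
    L, R, C, LFE, Ls, Rs = frame
    k = 0.7071067811865476  # -3 dB as a linear gain
    return (L + k * C + k * Ls, R + k * C + k * Rs)
```

This is why centre-channel dialogue levels in particular need a second look on a stereo TV: the dialogue arrives attenuated and shared between two speakers instead of anchored in its own channel.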

DS: It’s a video game for PC, PS3 and Xbox 360. Which platform do you prefer to work with for sound?

RB: A tricky question; as a developer of multi-platform games I do have some opinions from a mixer’s point of view. The 360 is certainly the least complicated in terms of outputs: it supports Dolby Digital 5.1 and stereo (via optical and HDMI) as well as an analogue stereo output, so it is the easiest to work with in terms of options and checking the mix. The PS3, however, has discrete 7.1 support as well as a whole host of audio output options including DTS and PCM as well as Dolby Digital, which makes it more complex to check and test, but also provides more options for the user, particularly the higher-end HD audiophile user. As for the PC, this is potentially the most complicated platform to mix and test for, because any soundcard on the market can be connected to a PC, which means we have to test on a wide variety of cards but can’t always be sure of what end users will be hearing. Having mixed the Xbox version of Prototype first, we then cloned all our mix settings and did a mix pass on the PS3; fortunately the mix translated very well and I think we only made one or two very minor adjustments. The biggest difference was between the two compression codecs used: XMA on the 360 and MP3 on the PS3.

DS: I saw an interview with Mark Tuffy of DTS, who said that Prototype was the first Xbox 360 game with 7.1 sound, via Neural Surround. What do you know about the implementation of that process in Prototype?

RB: I actually know very little about the implementation of this in the game from a technical point of view. What I do know is that it runs the DTS Neural Surround code on the Xbox 360 (there is an option in the game’s sound menu to turn this on or off) and that it outputs a 7.1 mix of the game when listening through a receiver with Neural enabled. The receiver basically decodes the extra two back-surround channels from the left and right surround channels of the regular 5.1 output. We mixed the game while monitoring in 7.1 on the Xbox, while always checking how the sound folded down to both 5.1 Dolby Digital and stereo. The game also runs in 7.1 PCM on the PS3.

DS: In terms of interactive mixing, what aspects would you highlight as most important in the mixing and implementation of interactive audio on Prototype?

RB: Some of the action gets pretty intense pretty quickly in this game. Strike teams are sent into heavily infected zones, the amount of sound playing back is huge, and there is essentially a sound for every event and collision occurring, so the game needs to be able to deal with this. It isn’t really part of the mixing system, but it plays into it: there are limits specified in our engine on the number of certain types of sounds that can play back at any one time, such as dialogue or gunshots, and there is a priority system which gives precedence to some sounds over others. On top of this, the mixer system allows us to finesse certain events, such as the shot from the thermobaric tank, whereby we duck down most other sounds to foreground this one huge tank weapon ejection and make it seem much louder than it really is!
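A per-category voice limit with priorities, as described here, can be sketched in a few lines. To be clear, this is a generic illustration of the technique, not Radical's engine; the category names, caps and priority values are all invented:

```python
# Hypothetical voice-culling sketch: per-category playback caps plus a
# priority field that decides which requests survive when a cap is hit.
LIMITS = {"dialogue": 2, "gunshot": 8}  # example caps, not real values

def cull_voices(requests, limits):
    """requests: (priority, category, name) tuples for one audio frame.
    Keep at most limits[category] voices per category, preferring
    higher-priority requests; uncapped categories always play."""
    kept, counts = [], {}
    for priority, category, name in sorted(requests, reverse=True):
        if counts.get(category, 0) < limits.get(category, float("inf")):
            counts[category] = counts.get(category, 0) + 1
            kept.append(name)
    return kept
```

With a dialogue cap of two, a third, lower-priority dialogue request is simply never started, which is how the engine keeps a firefight from drowning out or multiplying the important lines.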

DS: I think one of the best features of Prototype’s sound is the ambiences; there are a lot of them, recorded in many different places. Why did the sound team give so much importance to the ambiences? What is their role in Prototype?

RB: You know, the audio director, Scott, has written a superb and detailed article on the ambiences in Prototype here that can best answer your question. I really recommend it, as there is quite a unique approach to ambience in this game which, I agree, works really well in conveying the feeling of New York.

Prototype Official Website


