Charles Deenen has published a useful guide on his site explaining how to set up Dropbox to share Pro Tools sessions quickly between a group of people via the cloud.
Dropbox allows you to work “virtually” in a group with Pro Tools, sharing sessions almost instantly. However, there are a few rules to follow to keep the workflow smooth. The following is assumed:
- You’re working with Pro Tools 9.x
- You work with a library program like Soundminer / Netmix etc.
- You work with other people in a group, and need to open their sessions quickly from various locations
- Each editor / mixer has the necessary plug-ins to listen to the session (if needed)
Something tells me Erik “Transform-ed” his lunch break into a podcast session for this interview with Home Theater Geeks. Good Stuff.
A friend recently asked me what three plug-ins I would take with me onto the proverbial “desert island.” Assuming I also packed a MacBook, a copy of Logic and an iLok updater, I told him my three plug-ins would be:
- Serato Pitch ‘n Time
- The GRM Tools
I recently picked up the complete GRM Tools version 3 “Evolution” bundle, and here’s my experience, along with everything else I know about the GRM Tools.
This month’s Sound Design Challenge is now live, and it’s a game-audio-based challenge designed for even beginners to handle. It’s a great learning opportunity if you’re looking to get your feet wet. There’s even an extended deadline this time around: a full two weeks. We have plans to migrate the Sound Design Challenge over here to Designing Sound in the future, but until then…
Well, here we are. This month’s Challenge took a significant amount of work to pull together, but I’m pleased to offer this game-audio-oriented challenge. To start off, I’d first like to take some time to thank Damian Kastbauer and Jeff Seamster for helping to design this challenge for the broadest participation possible (good for beginners too), and to Richie Nieto for providing templates for everyone to mess with. We also need to thank Ric Viers and Blastwave FX for providing the sound effects you’ll all be using in this challenge. I’m going to take a moment to shamelessly plug their effects library. We’ve got the Blastdrive at work, and it’s an awesome collection. Head over to the Blastwave FX site and check out some of the many collections they sell, including the all-new Sonopedia 2.0. I’d wax on about the audio goodness that is their effects, but you get to use some of them in this challenge yourself.
Now, the Challenge…You’re going to be using Wwise, the basic templates we provide, and the Blastwave FX sound files to create a background ambience of a beach that progresses from a nice calm day to a raging thunderstorm.
UPDATE: Link was broken, but it is now fixed. Sorry for any confusion. Full details and materials for the challenge are available at www.DynamicInterference.com
Bashandslash.com has published a fantastic sixty-minute podcast with Stefan Strandberg, the DICE audio director behind the Battlefield series. He talks about his design philosophy and looking at the bigger picture when designing and auralizing in context. This is a must-listen even for those with limited interest in game audio.
The team that built the soundscape in Battlefield Bad Company 2 raked in awards for their efforts. The 2010 BAFTA for Use of Audio, GameSpot’s Best Sound Design award for 2010 and The Game Audio Network Guild Award for Sound Design of the Year 2011 are examples of such laurels.
Stefan brings imagination, passion and analytical skill to his craft and you can hear the results in every DICE game he has worked on.
His sound crafting made BFBC2 one of the best sounding FPS games ever and the recently released trailers indicate that BF3 will continue that excellence.
The information Stefan covers enlightens and entertains and I for one will never take in-game sounds for granted again.
Listen to it here.
[Written by Rodney Gates for Designing Sound]
All SKUs are created equal…right?
It is a common question every time a new game is released on the console platforms: “I have both systems. Which version should I buy?”
In a perfect world, any game created for the consoles would run just perfectly on them, without any performance edge leaning towards one machine or the other. Perfect frame rate, glitch free in every way and beautiful experiences for all!
But of course we do not live in that world. Though things have improved with the current generation of consoles overall, it is still often said that the smart choice is to find out which console the game was developed for first and you’ll most likely have your answer.
Darkwatch: Curse of the…middleware?
“Darkwatch” was a great game to work on. Blending the Wild West with vampires seemed like a perfect fit for an FPS back when the PlayStation 2 and original Xbox ruled the living room. Both machines were great in their own right, but were quite different in their design. So naturally, they had very different requirements for getting sound into the game.
Unlike today’s market, in which the Xbox 360 had a significant head start on the PlayStation 3, the inverse was true in the previous console generation. With this being the case, when Sammy Studios began development on “Darkwatch”, its initial focus was on the PS2. For authoring sound in the game, we were using Sony’s proprietary audio tool, SCREAM.
Game Informer has published a video interview with BioWare’s audio lead Rob Blake on the sound and weapon design for Mass Effect 3. He talks about how they have collaborated with other EA studios to share ideas and upgrade the sounds from Mass Effect 2 to create an immersive environment.
In addition to building the aural landscape of an entire galaxy’s locations and species, Bioware has to ensure that the combat in Mass Effect 3 stacks up amongst the best modern shooters. We sat down with Bioware’s audio lead Rob Blake to talk about the lessons that the team has learned since the release of Mass Effect 2 and how important the sound of a gun can be to immersing a player in the Mass Effect universe.
Watch the video here.
[Written by Rodney Gates for Designing Sound]
Nothing can be more peaceful than sitting quietly for several minutes at a time in a unique location recording the living world in front of you. Your eyes and ears are alert and observant of everything that’s happening as the recorder captures it all. There is a calm that comes over you and for the first time in a while, you feel relaxed and contemplative.
Then an airplane shows up. Or a chainsaw. Or a leaf blower, car horn, traffic, tractor, weed-eater, or any other man-made contraption that strives to ruin your recording. You find yourself recording much longer than you would normally need to in order to have enough material to edit out all of these man-made sounds and end up with a seamless representation. Regular, non-audio people don’t realize just how noisy the world around them is until they try to do something like this. Your brain may filter out these distractions without you realizing it, but the microphones don’t lie.
On the flip side, my personal field recording sessions can also leave the “pristine” natural world behind to purposefully seek out a particular noise source, or happen upon one. I always bring the recorders for unique settings like a cruise ship, or a Civil War re-enactment. Maybe it’s foot traffic in the reverberant Mayo Clinic lobby or a busy city street during an unexpected parade. You just never know when recordings of these kinds of things will come in handy, and that is half of the joy of doing it. They may sit dormant for several years until just the right circumstance comes along and you can pull them out of your sleeve.
The following is an exclusive interview with Supervising Sound Editor and Sound Designer Tom Bellfort about his work on “Source Code”.
DS: How did you convey the feeling of confusion through the sounds that Jake Gyllenhaal’s character Captain Colter Stevens hears as he wakes up the first time in Source Code?
TB: The first concept I experimented with was to sonically bridge Colter’s helicopter crash (which you subsequently see in reel 4, although this is hinted at in the very first train scene in reel 1). The very first sound you hear when Colter emerges in pod #1 is an eerie, high-pitched sound meant to convey hearing loss at the helicopter crash. This is followed by a muted and pitched-down helicopter rotor FX. Colter does not know if he is at the crash site or not. The rotor FX slowly becomes a heartbeat, and the eerie high-pitched sound becomes squeaky, creaky medical equipment sounds: MRI, iron lung machines. In addition I added high-pitched subway train brake squeaks, also manipulated with Altiverb and pitch shifting. The attempt was to create a very subtle world which hovers and is delineated from the intersection of the train, the helicopter crash and Colter’s life-and-death situation.
DS: What continuity was there in the sounds that transitioned in and out of source code? Did they evolve as the film progressed?
TB: Each time Colter transitions into the source code (out of the train), the images we see are by and large variations of violent, agitated movement, Michelle Monaghan and the Chicago “Bean”. The duration and sequence of each image was always different from transition to transition, so it was hard to establish a leitmotif, if you will, for this area. It was always changing. There was a common sound in all for Michelle and the “Bean”, but I think because these transitions were so rapid, it is hard to distinguish these.
As to the transitions out of the source code back to the train: these, by and large, stayed the same, and this was the breaking apart of Colter’s being. The idea was to produce a shattering, expulsion-type signature effect, followed by a gliding effect over the lake and finally into the train; Colter jolted back into that reality. A combination of rockets, glass and tonal elements was used for the expulsion. Once I was satisfied with that design, I duplicated each element and then created an aliasing effect by removing a quarter to half a frame for the duration of each effect, then recombining these to create a staccato feeling of Colter breaking apart to be reassembled back into the train. Once gliding over the lake, processed brake squeals and train horns were used to get Colter back into the train. To jolt Colter back into that reality I used train-bys and/or bell-bys to “wake” him back up.
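The chop-and-recombine idea Bellfort describes, removing a quarter to half a film frame at regular intervals to produce a staccato stutter, can be sketched roughly in Python. This is a loose illustration only, assuming a NumPy audio buffer at 48 kHz and 24 fps film frames; the function name and parameters are hypothetical and not his actual tools or workflow:

```python
import numpy as np

def staccato_chop(audio, sample_rate=48000, frame_rate=24, cut_fraction=0.5):
    """Remove a sliver from every film-frame-length segment of `audio`
    and concatenate what remains, creating a stuttering, chopped effect.

    cut_fraction: portion of each film frame (1/frame_rate seconds)
    that is discarded; 0.25 to 0.5 matches the "quarter to half a frame"
    described in the interview.
    """
    frame_len = int(sample_rate / frame_rate)   # samples per film frame
    cut_len = int(frame_len * cut_fraction)     # samples removed per frame
    keep_len = frame_len - cut_len
    pieces = []
    for start in range(0, len(audio), frame_len):
        segment = audio[start:start + frame_len]
        pieces.append(segment[:keep_len])       # keep the head, drop the tail
    return np.concatenate(pieces)

# Example: one second of a 440 Hz tone becomes a stuttered half-second burst
sr = 48000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t).astype(np.float32)
chopped = staccato_chop(tone, sample_rate=sr, cut_fraction=0.5)
```

The hard cuts at each segment boundary introduce discontinuities (clicks); in practice a short crossfade per slice, as any editor would apply, smooths the joins while keeping the stutter.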
Tim Prebble has released Blow Holes on HISS and a ROAR, opening a new catalog of ambience libraries.
Ambiences play a crucial role in every film: literally, emotionally and physically they define the world that the film exists in. Accordingly we endeavor to provide characterful multichannel recordings of dramatically interesting locations.
The ocean has an infinite range of moods, but when the power of an incoming tide becomes constricted it can lead to some awe-inspiring sounds. This library was recorded on a Sound Devices 744 recorder using a Sanken CSS5 stereo mic along with Sennheiser MKH70 and MKH816 mics. Four locations were chosen specifically for their unique sonic properties:
- Punakaiki Blow Holes – West Coast, South Island, New Zealand
- Alofaaga Blow Holes – Taga, Savai’i, Samoa
- Castlepoint, The Gap – East Coast, North Island, New Zealand
- Muriwai The Gap – West Coast, North Island, New Zealand
Blow Holes Library | 24bit 96kHz | 1.52GB download | 2.17GB uncompressed
Here’s a Q&A with Tim, talking about the library and his projects.