Guest contribution by Chris Didlick
We are Box of Toys Audio, a music and sound design company with studios in London and Stockholm. We provide audio services for commercials, branding, trailers and all manner of projects. With the ongoing innovation and expansion of digital media, we are sometimes offered new and uncharted avenues for creativity, which is why, when we were asked to work on the new Madefire Motion Book platform, we embraced the challenge readily.
Madefire is an iOS app that has been optimised for the iPad and iPhone, emulating the traditional graphic novel format with the addition of motion, interactivity and audio. What’s more, Madefire is also releasing, in phases, free development tools that can be used by independent artists to create and publish their own stories on the platform. With Moving Brands CEO Ben Wolstenholme and comic book legends Dave Gibbons and Liam Sharp involved in the creation of the app, we jumped at the chance to create the audio for the first three in-house story releases, namely “Treatment”, “Captain Stone is Missing…” and “Mono”. Not only were we creating the audio for the narratives, we were also constructing an SFX library for use within the development tool. It was therefore important that the audio enhanced each story while being effective for future titles.
[Continuing with the procedural audio series...]
Andy Farnell – a familiar name in computer audio – is a computer scientist, sound designer, author and a pioneer in the field of procedural audio. He is a visiting professor at several European universities and a consultant to game and audio technology companies. His book, ‘Designing Sound’, is a bible for procedural sound and should be on your bookshelf, if it isn’t already!
He very kindly found time in his busy schedule when I visited London, and we talked about what procedural audio is, where it stands now, and what it could become in the future. This article is a transcription of our conversation, which he was again kind enough to edit along with me. It was no easy task because there was so much good content!
Thank you Andy!
DS: Where does Procedural Audio stand now? Would you say it is comparable to where CGI was in the 70s/80s, when computers weren’t powerful enough?
Andy: That is a central mythology – that the computers aren’t powerful enough to do it. This is often brought out as a straw man argument against Procedural Audio by skeptics. One of the things I did with my 2005 demo was to make all of the sounds (they weren’t very high in quality) that you would need for a first-person shooter game – fire, water, wind, rain, some animals, some footsteps, some guns, some vehicles. This was 2005 and I had them all running on a 533 MHz processor, generating a realistic-ish sort of soundscape, to prove that if you had a 1 GHz processor and used half of it for the graphics, it would be quite possible to synthesise all the sounds using the remainder. Six years after doing that, people would still come to me with this straw man argument; they would say, “You know Andy, we love this Procedural Audio stuff but there’s just not enough CPU available”. But we now have two-to-the-five (thirty-two) times more CPU than when I did my 2005 proof-of-concept demo.

So, what’s behind that? Why are they saying that? It’s not true. What happens is the internal politics of resources. The requirements always expand to fit the resources available. The game worlds get bigger and bigger and the graphics get more and more demanding. The audio team will always have the least amount of CPU allocated to them, as an afterthought, because in the current structural model of production sound is “post production”, and nobody wants to commit to giving audio that much CPU bandwidth. I feel that is the real reason behind the argument. You often get these straw man arguments that enter into a culture and just get recycled. People know that there is an argument, and it comes to their tongue very quickly: “Yes, we could do it, but there is not enough CPU”. With the leftover CPU on a modern games console I could provide you great procedural sound. On an eight-core architecture, we would need only one or two CPU cores to deliver procedural sound.
Even more interesting is what happens when we run models on the GPU, since many Procedural Audio models are inherently parallelisable. So, yes, Procedural Audio is somewhere in that era before the Tron movie, or before the Pixar CGI revolution: it’s possible, but not yet seen as viable. Perhaps the shift is too painful for big companies to make.
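To make the idea concrete: procedural audio generates sound from a running algorithm rather than playing back recorded samples. The sketch below is a minimal illustration of one of the soundscape elements Farnell mentions (wind), not his actual demo or Pure Data patches: white noise is shaped by a slowly drifting one-pole low-pass filter, so the "gusts" come from the model itself rather than from a stored file. All function names and parameter values here are illustrative assumptions.

```python
import numpy as np

def wind(duration=2.0, sr=44100, seed=0):
    """Minimal procedural wind: white noise shaped by a slowly
    drifting one-pole low-pass filter (a gust raises the cutoff).
    Illustrative sketch only; not Farnell's implementation."""
    rng = np.random.default_rng(seed)
    n = int(duration * sr)
    noise = rng.uniform(-1.0, 1.0, n)

    # Slow "gust" control signal: heavily smoothed random values.
    raw = rng.uniform(0.0, 1.0, n)
    gust = np.empty(n)
    acc = 0.5
    for i in range(n):
        acc += 0.0005 * (raw[i] - acc)   # very slow smoothing
        gust[i] = acc

    # One-pole low-pass whose coefficient follows the gust envelope.
    out = np.empty(n)
    y = 0.0
    for i in range(n):
        a = 0.01 + 0.15 * gust[i]        # cutoff rises with gust strength
        y += a * (noise[i] - y)
        out[i] = y * (0.3 + 0.7 * gust[i])  # louder when gusting
    return out

samples = wind()
```

Because nothing here is a recording, the same few lines can play forever without repeating, and every parameter (gustiness, cutoff, level) can be driven live by game state, which is precisely the trade procedural audio offers over sample playback.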
Today we’ll be touching on the interactive side with this month’s Featured Sound Designer, David Sonnenschein, regarding his Sonic Strategies: Animal Sounds Memory Game.
This is one of many Sound Games to be created by Sonnenschein that open ears and minds to hearing the world in new ways. Focusing on the neurobiology of audiovisual input and memory, the game draws upon film and music theory, and provides one of the cornerstones for creating story, character and emotion with audio. It uses the memory flip-card model as one example of gameplay.
This game challenges the player to move from visual to audio awareness and memory in four variations that gradually bridge one sensory input (sight) to another (hearing). See how fast you can complete each level, and how many cards you need to turn over each time. How does your performance compare when aided by sight and/or hearing?
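The flip-card model the game builds on is simple enough to sketch in code. The class below is a generic, hypothetical illustration of that mechanic (the class and method names are my own, not from Sonnenschein's game): cards carry hidden labels, which in the audio variants would be animal sounds rather than pictures, and the player reveals two at a time, keeping them face-up on a match.

```python
import random

class MemoryGame:
    """Flip-card matching: cards hold hidden labels (in the audio
    variant, animal sounds); flip two, keep them if they match.
    Illustrative sketch of the mechanic, not the actual app."""

    def __init__(self, pairs, seed=None):
        # Each label appears on exactly two cards, shuffled face-down.
        self.cards = list(pairs) * 2
        random.Random(seed).shuffle(self.cards)
        self.matched = [False] * len(self.cards)
        self.flips = 0  # total cards turned over (the game's score metric)

    def flip(self, i, j):
        """Reveal two distinct face-down cards; True on a match."""
        self.flips += 2
        if i != j and self.cards[i] == self.cards[j]:
            self.matched[i] = self.matched[j] = True
            return True
        return False

    def done(self):
        return all(self.matched)
```

Tracking `flips` mirrors the challenge above (how many cards you need to turn over); swapping the labels from images to sound clips is what moves the memory load from visual to auditory recall.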
Have fun! See if your friends have the same or different experience. This is the first of many Sound Games to come that will open your ears and mind to hearing the world in new ways and learning to create story, character and emotion with audio.
What follows is a discussion between myself (DK) and David Sonnenschein (DS) on the topic of sound interactivity and the work he is doing to further our understanding of how we relate to the world around us through sound.
Here is the final interview with Rob Bridgett, about Prototype, talking about the sound of the cinematics, the mixing process, and more!
Designing Sound: First of all, tell us something about your contribution to Prototype. What did you do for the sound of the game?
Rob Bridgett: In late 2007, the audio director for the project, Scott Morgan, asked if I could get involved and help out with the game mid-production. Cory Hawthorne was working as Technical Sound Designer and Implementer on the project which meant I had the opportunity to cover two areas on the game, one was as cinematics sound designer and implementer and the other was as game mixer. In terms of the first role, I was responsible for the sound effects, Foley, dialogue editing and mix of all the cut scenes in the game. The music was edited and supervised by the sound director for the project, Scott Morgan, and once all the components were assembled I would provide a mix automation pass before the finished file went into the game.
The second role, that of mixer, was one that came into play only during the post-production sound beta phase of the project’s development, in which Scott and I spent four weeks mixing the entire game in Radical’s 7.1 mix room. I always welcome the opportunity to help out on projects like this, as it offers a break from being an audio director and allows a lot more time to concentrate more fully on one or two areas in particular.