The fourth annual FilmSoundHamburg got under way in Hamburg on Sunday evening – an event that will bring together enthusiasts from the worlds of sound design, film composition and game music for five days of workshops, seminars and masterclasses.
Among the highlights will be four separate masterclasses given by Tim Nielsen of Skywalker Sound (Maleficent, The Lord of the Rings, John Carter), and composers Olivier Derivière (Assassin’s Creed IV, Remember Me), Javier Navarrete (Pan’s Labyrinth, Mirrors, Hemingway & Gellhorn) and Lisle Moore (who has composed trailer music for Maleficent and the last three FIFA World Cups). In addition, a number of workshops and seminars will be taking place.
FilmSoundHamburg takes place from June 29th until July 4th in Hamburg, Germany. Some places are still available so check the website for the full programme and price list.
FilmSoundHamburg programme of events
FilmSoundHamburg on Facebook
There’s been more virtual reality (VR) content made in the past year than in the previous twenty combined, thanks to the emergence of the Oculus Rift, Sony’s Project Morpheus and other such devices. There’s lots of innovation happening on the visual front, including new methods of gameplay, narrative structure and visual design. The obvious question: what’s happening on the audio front?
There are discussions about audio for VR across the Internet, but most of them relate to the technology behind binaural/3D positional audio. There is also a lot of academic research on auditory interfaces spanning the past couple of decades; a search on Google Scholar will turn up plenty of good material worth reading. [This post is focussed on first-person, game-like environments, where audio-visual realism and synchronisation are necessary]
Over the past two and a half years I have been involved with Two Big Ears, where we’ve been developing 3Dception, a highly efficient, easy-to-use real-time binaural audio engine that works everywhere (you can head to the website to watch and download demos). During this period I’ve had the opportunity to design sound for about fourteen augmented and virtual reality projects, including games, interfaces for the visually impaired and audio-led tourism apps. My experience so far, especially when working with binaural audio, has shown that some of the ‘tricks’ we take for granted in non-VR applications don’t work as well. This article is a summary of a few things that I’ve learnt, as a designer, when dealing with such technologies.
This article is by no means exhaustive. My hope is that it can be expanded as more sound designers experiment in this area. I’ve also made a copy of this article on a wiki which I hope to update as I continue work in this area (it is on a wiki to facilitate community contribution!). I’m also currently working on a short playable game t
I’ve been working on a game project on and off over the past year and a part of the design is of relevance to this month’s theme — animals. The gameplay revolves around creatures of various kinds — some good, some evil, some tiny, some large. I had to conjure a vocalisation system that achieved the following technical and design criteria:
- Actions by the user would directly affect the state (and sound) of the creature
- The player must be able to perceive some sort of emotive response from the creature
- A modular system which would work for various creature types and characters
- With mobile devices being the primary target, it had to be simple, effective and portable
- Low CPU and memory usage, which translates to maximising the design capabilities of the system with little DSP and few samples
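As a rough sketch of how criteria like these might translate into code (all names, states and thresholds below are hypothetical illustrations, not the actual project’s system): each emotional state maps to a small pool of pre-rendered samples, player actions drive the state directly, and variety comes from cheap pitch randomisation rather than heavy DSP.

```python
import random

# Hypothetical sketch of a modular creature vocalisation system.
# Each state maps to a small pool of pre-rendered samples; runtime
# variation comes from cheap pitch randomisation, not expensive DSP.

STATE_POOLS = {
    "idle":       ["idle_01.wav", "idle_02.wav"],
    "alert":      ["alert_01.wav", "alert_02.wav"],
    "aggressive": ["attack_01.wav", "attack_02.wav", "attack_03.wav"],
}

class CreatureVoice:
    def __init__(self, pools=STATE_POOLS):
        self.pools = pools
        self.state = "idle"
        self.last_sample = None

    def on_player_action(self, threat_level):
        # Player actions directly drive the creature's state (criterion 1).
        if threat_level > 0.7:
            self.state = "aggressive"
        elif threat_level > 0.3:
            self.state = "alert"
        else:
            self.state = "idle"

    def next_vocalisation(self):
        # Avoid immediate repeats so longer encounters stay expressive.
        pool = [s for s in self.pools[self.state] if s != self.last_sample]
        sample = random.choice(pool or self.pools[self.state])
        self.last_sample = sample
        # Small random pitch offset (semitones) adds variety cheaply.
        pitch = random.uniform(-2.0, 2.0)
        return sample, pitch

voice = CreatureVoice()
voice.on_player_action(threat_level=0.9)
sample, pitch = voice.next_vocalisation()
```

Because the sample pools are just data, the same class can serve every creature type by swapping in a different pool table, which keeps the system modular and portable to mobile.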
Like most people, I’ve found creature/animal vocalisations easier to design when using material that consists of either human or animal vocal sounds. It is easier for players (or the audience) to make visual and mental connections if they find something remotely similar to reality. It was important for me to make the resulting design as close as possible to what real animals sound like.
I collected sounds that matched the above criteria and then shortlisted them based on recording quality (to ensure maximum quality after subjecting them to DSP mangling), character (sounds that created an image or an emotion in my mind) and frequency content (important when grouping sounds together).
‘Emotion’ is tough to parametrise or quantify. It is a loose descriptor and can mean different things to different people. Instead of going after specifics, I put down a list of questions to help me make decisions:
- What size does the sound convey? (the relative size of the animal)
- Is it irritating, menacing, timid or defensive? (dogs were a good reference for this)
- Does the sound convey speed and energy? (this is related to the previous question)
- Is there enough content to make the creature expressive and not boring? (player-creature encounters were expected to last a few minutes)
- Is the sound distinctive enough? (it is easy to get lost down the rabbit hole of perfection)
In the run-up to this month’s reverb theme, former contributor Damian Kastbauer suggested we re-run this article he put together discussing the game Crackdown for Xbox. The article may be two years old, but the content remains undeniably relevant. Never one to ignore good suggestions, here we are…
One area that has been gaining ground since the early days of EAX on the PC platform, and more recently through its omnipresence in audio middleware toolsets, is Reverb. With the ability to enhance the sounds playing back in the game with reverberant information from the surrounding space, you can effectively communicate to the player a truer approximation of “being there” and help to further immerse them in the game world. While we often take Reverb for granted in our everyday life as something that helps us position ourselves in a space (the cavernous echo of an airport, the openness of a forest), it is continually giving us feedback on our surroundings, and is thus a critical part of the way we experience the world.
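As a minimal illustration of the underlying idea (the zone names and parameter values here are invented, not taken from Crackdown or any specific middleware): many engines tag regions of the world with reverb presets and interpolate between them as the player moves, so the space always “sounds” like where the player is.

```python
# Hypothetical sketch: interpolating reverb parameters between two
# tagged zones so transitions (e.g. walking out of a hall) sound smooth.

PRESETS = {
    # (decay time in seconds, wet/dry mix 0..1) -- illustrative values
    "airport_hall": (3.5, 0.45),
    "forest":       (0.8, 0.15),
}

def blend_reverb(zone_a, zone_b, t):
    """Linearly interpolate reverb parameters; t=0 is fully zone_a."""
    decay_a, wet_a = PRESETS[zone_a]
    decay_b, wet_b = PRESETS[zone_b]
    decay = decay_a + (decay_b - decay_a) * t
    wet = wet_a + (wet_b - wet_a) * t
    return decay, wet

# Halfway through a doorway between the hall and the outdoors:
decay, wet = blend_reverb("airport_hall", "forest", 0.5)
```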
UK-based audio designer Samuel Justice has posted an interesting blog post discussing the importance of early reflections in recreating authentic-sounding 3D environments. An excerpt of his article is below, and you can read the full article, with audio examples, here.
Game audio is at an exciting turning point these days: not only do game makers realise the full potential of engaging, immersive audio (and the negative effect of a product lacking in this), but we sound designers are now given the responsibility and freedom to create an entire audible world with as much creativity as we can muster (within the given time frame).
The recent generation (and history) of games is a testament to how the industry is home to some of the most creative sound designers around; you only have to listen to their work and you are instantly transported to another world, created entirely by their vision and expertise. Game audio engines are also more powerful than ever: talented audio programmers have been able to model occlusion, diffusion, diffraction and a whole slew of other wonderful processing effects that help players immerse themselves in the worlds we create.
But this article is not about praising sound design, or sound designers. Instead, what I hope to achieve is to highlight the importance of a major feature that is missing from a lot of game audio engines, or is not being used. It is one of the key processing effects (in my opinion) that glues audio into the environment and allows it to blend in naturally, thus not breaking the all-important immersion.
I’m talking about early reflections.
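To make the concept concrete, here is a hedged sketch (mine, not from Justice’s article) of the classic image-source idea behind early reflections: mirror the sound source across each wall of a simple rectangular room, and each mirrored copy yields one reflection tap with its own delay and attenuation relative to the direct path.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, at roughly room temperature

def first_order_reflections(src, listener, room):
    """First-order image sources in an axis-aligned 'shoebox' room.

    Mirrors the source across each of the six walls and returns, per
    reflection, the extra delay (seconds) relative to the direct path
    and a simple 1/distance attenuation. Illustrative only -- a real
    early-reflection model would also apply wall absorption and panning.
    """
    sx, sy, sz = src
    w, d, h = room  # room dimensions; walls sit at 0 and w/d/h
    direct = math.dist(src, listener)
    images = [
        (-sx, sy, sz), (2 * w - sx, sy, sz),   # left / right wall
        (sx, -sy, sz), (sx, 2 * d - sy, sz),   # front / back wall
        (sx, sy, -sz), (sx, sy, 2 * h - sz),   # floor / ceiling
    ]
    taps = []
    for img in images:
        dist = math.dist(img, listener)
        delay = (dist - direct) / SPEED_OF_SOUND
        gain = direct / dist  # crude 1/r falloff relative to direct path
        taps.append((delay, gain))
    return taps

taps = first_order_reflections(src=(2, 3, 1.5), listener=(4, 3, 1.5),
                               room=(6, 5, 3))
```

Feeding these taps into a short multi-tap delay on the dry signal is what “glues” a sound into its room: the pattern of those first few milliseconds tells the ear how close the walls are.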