
Posted by on Nov 5, 2014 | 0 comments

Invasive Insects and Immersive Sonics – Ren Klyce on the Sound of Gone Girl


Director David Fincher and sound designer Ren Klyce have worked together for more than 20 years, and their ongoing partnership is one of the most acclaimed collaborations in the modern film sound community. Klyce has been nominated for five Oscars – one for Fight Club, one for The Curious Case of Benjamin Button, one for The Social Network and two for The Girl with the Dragon Tattoo.

Their latest work is the very successful marital thriller Gone Girl, which has just overtaken The Curious Case of Benjamin Button as Fincher’s highest-grossing film in the US. The movie is filled with so many twists and turns that you can’t really talk about it without revealing something – and this interview with Ren Klyce also contains spoilers, so beware!

Read More

Posted by on Oct 31, 2014 | 4 comments

What’s The Deal With Procedural Game Audio?

Guest contribution by Martin Roth

We’ve all heard the promises of procedural game audio. A veritable Valhalla where sounds are created out of thin air, driven by the game engine, eliminating the need for huge sample libraries and tedious recording. Sounds great! So why aren’t we hearing more of it in games today? We’ve all experienced Rockstar’s work in GTA 5; those bicycles sure do sound great! Some indie games such as Fract or Pugs Luv Beats have dabbled. But it seems that if procedural audio were all that it promised, it would be much more common. What’s the deal?

The hard truth is that while the idea is great in theory, no one knows what they’re doing in practice. The field is lacking in design principles, tools, and technical performance. This is especially true considering the end-to-end workflow. On one end, high-level tools are needed to give designers the flexibility to explore sound and its interactions. On the other, low-level tools are needed to make those creations available where they’re needed, be that on the desktop, mobile, console, embedded systems, web, or anywhere else. The end-to-end workflow is key to the adoption of procedural audio.
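To make the idea concrete, here is a minimal sketch of what “driven by the game engine” can mean in practice. It is written in Python with NumPy purely for illustration (a real implementation would run inside the game’s audio engine), and the function and parameter names are hypothetical: a wind layer whose brightness and level both track a single game-state value.

```python
import numpy as np

SAMPLE_RATE = 44100

def wind(duration_s, wind_speed, seed=0):
    """Toy procedural wind: white noise through a one-pole low-pass
    filter. 'wind_speed' (0..1) is the game-driven parameter: faster
    wind opens the filter (brighter) and raises the level (louder)."""
    rng = np.random.default_rng(seed)
    n = int(duration_s * SAMPLE_RATE)
    noise = rng.uniform(-1.0, 1.0, n)

    # Map wind speed to the low-pass coefficient: higher speed lets
    # more high-frequency energy through.
    alpha = 0.001 + 0.05 * wind_speed

    out = np.empty(n)
    y = 0.0
    for i in range(n):
        y += alpha * (noise[i] - y)  # one-pole low-pass
        out[i] = y

    # Level also scales with wind speed.
    return (0.2 + 0.8 * wind_speed) * out

calm = wind(0.5, wind_speed=0.1)
storm = wind(0.5, wind_speed=0.9)
```

No samples are stored anywhere: every frame the engine could call this with the current `wind_speed` and get a matching, never-repeating sound. (The per-sample Python loop is far too slow for real-time use; it is only here to keep the filter explicit.)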

For the purposes of this article, the terms procedural, generative, and interactive as they relate to sound and composition will be used interchangeably. Their distinction is important, but we’ll leave that for another article.

Scarce Design Resources

The field suffers from a lack of resources for learning how to make procedural audio, including standards for judging its merits. Undoubtedly the best learning resource is Andy Farnell’s book Designing Sound. The presentation focuses on design from first principles, but may leave those without a technical background struggling to understand the reasoning (don’t let that stop you from reading it!). The book is written for clarity, not for absolute performance or maximum sound quality. Resources are otherwise scattered, and most practitioners compensate through personal interest or continued self-education on the topic.

Tools, Well, Almost

Undoubtedly there are many excellent tools available for designing sounds, especially musical ones. A nearly fifty-year history of electronic music has created a wealth of knowledge, best practices, and interfaces for exploring sound. But here the end-to-end argument is critical: unless the designer can run the sounds on the target platform, the tools are only helpful as part of the creative process.

In order to satisfy this requirement, the available tools are generally limited to a number of audio programming languages (or even general-purpose programming languages). These include Pure Data, Max/MSP, SuperCollider, Csound, ChucK, C/C++ – the list goes on. Many of these have robust and knowledgeable communities supporting them. All of these tools allow the user to “do stuff” with sound, but how well they meet the needs of sound designers is debatable. Many would say that the learning curve is far too steep, and the target audience for these tools has typically been those more interested in experimental work.

This leaves us in the difficult situation where the ideal solution is fragmented between tools that satisfy the high-level design requirements and those that satisfy the low-level technical requirements.

Low-Level Really Is Low

Read More

Posted by on Oct 22, 2014 | 0 comments

Synthesis Tips for the Non-Synthesist


Guest Contribution from Steven Smith

Introduction

In some ways it seems quite strange to find myself authoring a post on synthesis whose main topic is “not everyone needs to be a synthesist”. But from a practical angle, it makes a great deal of sense. Many of us have naturally dived into certain areas of synthesis while skating around others. So, if you are not a synthesis geek, this article is for you.

‘Why would it be helpful to explore this area?’ you may be wondering. Even though today’s virtual instruments commonly ship with hundreds or even thousands of presets, many users still find themselves passing over sounds that are not quite right. Yet with some fundamental knowledge and strategies, I feel most non-synthesists could quickly address these sounds’ shortcomings and reshape them closely enough to put them into service.

This is precisely my goal: to cover some fundamental strategies and principles relating to synthesis and synthesizers in order to facilitate what I like to think of as quick fixes. Even though these strategies will not work 100% of the time, you should find them coming to the rescue quite often.
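As one concrete example of such a quick fix – sketched here in Python rather than on any particular synth, with all names hypothetical – the single most useful control on most subtractive synths is the low-pass filter cutoff. Closing it takes a preset that is “too bright” and mellows it without touching anything else:

```python
import numpy as np

SR = 44100  # sample rate in Hz

def saw(freq_hz, dur_s):
    """Naive sawtooth oscillator: the raw, bright starting point
    of a typical subtractive-synth preset."""
    t = np.arange(int(SR * dur_s)) / SR
    return 2.0 * (t * freq_hz % 1.0) - 1.0

def lowpass(signal, cutoff_hz):
    """One-pole low-pass filter: the 'filter cutoff' knob found on
    nearly every synth."""
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / SR)
    out = np.empty_like(signal)
    y = 0.0
    for i, x in enumerate(signal):
        y += alpha * (x - y)
        out[i] = y
    return out

preset = saw(110.0, 0.25)        # a preset that's "too bright"
darker = lowpass(preset, 800.0)  # close the cutoff: mellower tone
```

The point is not the code but the habit: before discarding a nearly-right preset, try reshaping its brightness with one cutoff move, since that is usually the dominant factor in how a sound sits.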

From the outset, my intention is to populate this article with images from multiple synths – a small attempt to expose you to as many different views as possible. Given that each synth developer has its own GUI strategies (in addition to its own sound design strategies), I hope this will further increase the usefulness of the material presented.

There is also a body of knowledge we need in order to find sounds, change them, and then save those changes. Let’s jump in…

Read More

Posted by on Sep 29, 2014 | 0 comments

Sonic Architecture

Expo 1958: the Philips pavilion

The Philips Pavilion, based on the hyperbolic paraboloids Iannis Xenakis originally used in his musical piece Metastaseis

“Sound is a spatial event, a material phenomenon and an auditive experience rolled into one. It can be described using the vectors of distance, direction and location. Within architecture, every built space can modify, position, reflect or reverberate the sounds that occur there. Sound embraces and transcends the spaces in which it occurs, opening up a consummate context for the listener: the acoustic source and its surroundings unite into a unique auditory experience.”

OASE

The spatial metaphor

Over the years, the relationship and analogy between music/sound art/sound design and architecture has been explored from several angles. Just as architecture works with solid materials, visual spaces, geometry, abstract realities and social contexts, it also works with aural realities – the sonic dimension. When it comes to space, sound can be valued within an architectural process, just as architecture is also sonic.

When it comes to music, the validity of the analogy between musical space and architectural space has been debated. Yet the analogy holds readily for sound design, since sound design doesn’t rely on a fixed language as some music does, and it is always open to the contexts in which it evolves or is developed, such as a film. Space in terms of sound design is immensely important, both in the visual/outer spaces projected in a particular audiovisual medium and in the inner, abstract or invisible facets of a piece like a film or a videogame – thus introducing the possibility of creating architecture with aural elements, in the same way the visual aspect creates its own spaces and objects.

Read More

Posted by on Sep 18, 2014 | 2 comments

Real Spaces

Image by Stewart Butterfield. Used under a Creative Commons license. Click picture to view source.


When we say “space”, people generally think of two things: outer space, or a bounded area that something fits into. It’s a safe bet that most people in the sound community immediately think of the latter. So often we focus on the characteristics of a space – how far a sound carries, reflections and reverberation time, etc. Certainly that helps us define a space, but for the most part only on a technical level. What really defines a space is what occupies it. There’s no denying that production designers and location scouts in film, or level designers and artists in games, have a strong role in creating a space, but we in the sonic branch of our respective mediums have the unique ability to refine, or even redefine, the spaces they create. Sometimes we’re even given the opportunity to create spaces where they cannot. What I want us to consider, in light of that, is how we approach the creation of that space.

Read More