
Posted on Jun 4, 2015 | 2 comments

Sulpha: The New PS4 Mastering Suite

Sony has long spearheaded the effort to standardise loudness in games. The recent PS4 SDK update (2.500) includes a mastering suite, the Sulpha analysis tools, to help developers master their titles for a variety of playback systems, from full-range surround sound systems to TVs to mobile devices.

It features a 4-band equaliser, a 3-band dynamics processor, gain and limiter controls, and loudness management and analysis tools. The interesting thing about the toolset is that it utilises operating system resources and is therefore compatible with game audio middleware, third-party engines and all PS4 titles.
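As a rough illustration of what one band of such an equaliser does, here is a standard RBJ-cookbook peaking filter in Python. This is a generic textbook design, not Sulpha’s actual implementation (which isn’t public), and the function names are mine:

```python
import math

def peaking_eq_coeffs(fs, f0, gain_db, q):
    """RBJ cookbook peaking EQ: one band of a parametric equaliser."""
    a = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    cosw = math.cos(w0)
    b = [1 + alpha * a, -2 * cosw, 1 - alpha * a]
    a_coef = [1 + alpha / a, -2 * cosw, 1 - alpha / a]
    # Normalise so that a0 == 1
    return ([bi / a_coef[0] for bi in b],
            [ai / a_coef[0] for ai in a_coef])

def biquad(x, b, a):
    """Direct-form I biquad filter over a list of samples."""
    y = []
    x1 = x2 = y1 = y2 = 0.0
    for xn in x:
        yn = b[0] * xn + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1 = x1, xn
        y2, y1 = y1, yn
        y.append(yn)
    return y

# Boost 6 dB around 1 kHz at a 48 kHz sample rate
b, a = peaking_eq_coeffs(48000, 1000.0, 6.0, 1.0)
out = biquad([1.0] * 2000, b, a)  # DC input: a peaking band leaves DC untouched
```

Because a peaking band only cuts or boosts around its centre frequency, a constant (DC) input settles back to unity gain, which is a handy sanity check on the coefficients.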

I briefly interviewed Garry Taylor, Audio Director at the Creative Services Group, Sony Worldwide Studios, and Marina Villanueva-Barreiro, a senior engineer at SCEE Research and Development. I found it interesting that about 50% of users listen to PlayStation titles through their TV speakers. I was expecting the percentage to be much higher.

Sulpha

DS: Sony in many ways has been spearheading the loudness standard for games. Did the development of these tools seem like a natural progression from the development of the standard?

Garry: Very much so. Having a loudness standard is all well and good, but we needed to make it as easy as possible for developers to hit the PS4 loudness target without having to spend big money on new equipment. Loudness metering has been part of the PS4 operating system for a couple of revisions now, and this is the next logical step, allowing developers to manipulate overall EQ, dynamic range and loudness easily and quickly. Smaller teams working on PlayStation titles may not have the resources or technical knowledge required to conform to a standard, so having one easy-to-use audio mastering tool that works on every single title made a lot of sense.
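For readers curious what a loudness measurement involves at its simplest, here is a heavily simplified sketch in Python. It measures only the unweighted mean-square level in dB relative to full scale; a real meter built to ITU-R BS.1770 (which games loudness standards are based on) adds K-weighting and gating on top of this. The function name is mine:

```python
import math

def mean_square_db(samples):
    """Unweighted mean-square level in dB relative to full scale.
    A BS.1770-style meter adds K-weighting and gating on top of this."""
    ms = sum(s * s for s in samples) / len(samples)
    return 10.0 * math.log10(ms) if ms > 0 else float("-inf")

# A full-scale square wave measures 0 dBFS; a full-scale sine about -3 dBFS
square = [1.0, -1.0] * 1000
sine = [math.sin(2 * math.pi * 440 * n / 48000) for n in range(48000)]
print(round(mean_square_db(square), 1))  # 0.0
print(round(mean_square_db(sine), 1))    # -3.0
```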


Posted on Dec 23, 2014 | 5 comments

Time And Psychoacoustics

When designing audio we are often thinking of time across a large variety of units: samples, milliseconds, frames, minutes, hours and more. This article is inspired by a conversation I had with Andy Farnell about a year ago at a pub in Edinburgh, right before a sound design symposium, where we discussed time and the role it plays in designing audio.

Like most other audio designers out there, I started twiddling the knobs and sliders well before I had an understanding of the underlying DSP. It was an eye-opening experience to realise that almost every single DSP effect is related to time. So let’s start looking at a few common DSP tools used in everyday sound design and analyse how time and the precedence effect play a role, starting from hundreds of milliseconds all the way down to a single sample.
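To see time at work at the single-sample scale, consider the smallest possible delay effect: averaging each sample with a one-sample-delayed copy of itself is already a lowpass filter. A minimal Python sketch (names are mine):

```python
def one_sample_average(x):
    """Average each sample with a one-sample-delayed copy of itself.
    This two-tap FIR filter is the simplest possible lowpass."""
    prev = 0.0
    y = []
    for xn in x:
        y.append(0.5 * (xn + prev))
        prev = xn
    return y

dc = [1.0] * 8             # lowest possible frequency: passes through
nyquist = [1.0, -1.0] * 4  # highest possible frequency: adjacent samples cancel
print(one_sample_average(dc)[1:])       # all 1.0
print(one_sample_average(nyquist)[1:])  # all 0.0
```

A delay of one sample is enough to make adjacent samples of the highest representable frequency cancel each other, while a constant signal passes through unchanged; longer delays in the same structure give combs, echoes and, eventually, reverbs.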

Precedence Effect

The precedence effect is a psychoacoustic effect that sheds light on how we localise and perceive sounds. It has helped us understand how binaural audio works, how we localise sounds in space, and how reverberation and early reflections behave. From Wikipedia:

The precedence effect or law of the first wavefront is a binaural psychoacoustic effect. When a sound is followed by another sound separated by a sufficiently short time delay (below the listener’s echo threshold), listeners perceive a single fused auditory image; its spatial location is dominated by the location of the first-arriving sound (the first wave front). The lagging sound also affects the perceived location. However, its effect is suppressed by the first-arriving sound.

You might be familiar with this effect if you’ve done any sort of music production or mixing. Quite often a sound is hard panned to one of the two stereo speakers and a delayed copy (10-30ms) of the sound is hard panned to the other speaker. Our ears and brain don’t perceive two distinct sounds, but rather an ambient/wide-stereo sound. It is a cool technique for creating a pseudo-stereo effect from a mono audio source.
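A minimal sketch of this pseudo-stereo trick in Python (the function name and figures are illustrative):

```python
def pseudo_stereo(mono, fs, delay_ms=20.0):
    """Hard-pan a mono signal left and a short-delayed copy right.
    Within the echo threshold the two fuse into one wide stereo image."""
    d = int(fs * delay_ms / 1000.0)
    left = list(mono) + [0.0] * d   # original, padded to match length
    right = [0.0] * d + list(mono)  # delayed copy
    return list(zip(left, right))   # interleaved (L, R) frames

fs = 48000
mono = [0.2, 0.5, -0.3, 0.1]
frames = pseudo_stereo(mono, fs, delay_ms=20.0)  # right lags by 960 samples
```

Note that summing such a signal back to mono reintroduces comb filtering between the two copies, which is why this trick is usually checked for mono compatibility.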

The first 30 seconds of the video below show an example of the precedence effect in action. The delayed signal smears the original signal with phasing artefacts, after which it seems to split from the original signal and become a distinct sound of its own.

Echoes And Reverb

Echoes are distinct delays. Reverberation is made up of early reflections, delayed sounds that arrive at the listener first (right after the direct sound), followed by a tail that consists of many such delays diffused into a dense cluster. Artificial reverbs are quite often approximated using networks of delays that feed back into each other (convolution reverbs behave a little differently).
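A single feedback delay of this kind, often called a feedback comb filter, can be sketched in a few lines of Python (names are mine; real reverbs combine many of these, plus allpass filters, at mutually prime delay lengths):

```python
def feedback_comb(x, delay_samples, feedback=0.5):
    """A delay line that feeds its output back into its input:
    one repeating echo, a basic building block of classic reverbs."""
    buf = [0.0] * delay_samples  # circular delay buffer
    idx = 0
    y = []
    for xn in x:
        out = xn + feedback * buf[idx]  # input plus the delayed feedback
        buf[idx] = out
        idx = (idx + 1) % delay_samples
        y.append(out)
    return y

# An impulse produces echoes at multiples of the delay, each one quieter
impulse = [1.0] + [0.0] * 30
echoes = feedback_comb(impulse, delay_samples=10, feedback=0.5)
```

With |feedback| below 1 the echoes decay geometrically, which is how a tiny recursive structure produces an arbitrarily long tail.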


Posted on Oct 31, 2014 | 5 comments

What’s The Deal With Procedural Game Audio?

Guest contribution by Martin Roth

We’ve all heard of the promises of procedural game audio. A veritable Valhalla where sounds are created out of thin air, driven by the game engine, eliminating the need for huge sample libraries and tedious recording. Sounds great! So why aren’t we hearing more of it in games today? We’ve all experienced Rockstar’s work in GTA 5; those bicycles sure do sound great! Some indie games such as Fract or Pugs luv Beats have dabbled. But it seems that if procedural audio were all that it promised, it would be much more common. What’s the deal?

The hard truth is that while the idea is great in theory, no one knows what they’re doing in practice. The field is lacking in design principles, tools, and technical performance. This is especially true considering the end-to-end workflow. On one end, high-level tools are needed to give designers the flexibility to explore sound and its interactions. On the other, low-level tools are needed to make those creations available where they’re needed, be that on the desktop, mobile, console, embedded systems, web, or anywhere else. The end-to-end workflow is key to the adoption of procedural audio.

For the purposes of this article the terms procedural, generative, and interactive as they relate to sound and composition will be used interchangeably. Their distinction is important, but we’ll leave that for another article.

Scarce Design Resources

The field suffers from a lack of resources for learning how to make procedural audio, including standards for judging its merits. Undoubtedly the best learning resource is Andy Farnell’s book Designing Sound. The presentation focuses on design from first principles, but may leave those without a technical background struggling to understand the reasoning (but don’t let that stop you from reading it!). The book is written for clarity, not for absolute performance or maximum sound quality. Other resources are scattered, and the gaps are usually filled by personal interest or continued self-education on the topic.

Tools, Well, Almost

Undoubtedly there are many excellent tools available to design sounds, especially musical ones. A nearly fifty-year history of electronic music has created a wealth of knowledge, best practices, and interfaces for exploring sound. But here the end-to-end argument is critical. Unless the designer can run the sounds on the target platform, the tools are not helpful except as a part of the creative process.

In order to satisfy this requirement, the available tools are generally limited to any number of audio programming languages (or even general-purpose programming languages). These include Pure Data, Max/MSP, SuperCollider, Csound, ChucK, C/C++; the list goes on. Many of these have robust and knowledgeable communities supporting them. All of these tools allow the user to “do stuff” with sound, but how well they meet the needs of sound designers is debatable. Many would say that the learning curve is far too steep. The target audience for these tools has typically been those more interested in experimental work.

This leaves us in the difficult situation where the ideal solution is fragmented between tools that satisfy the high-level design requirements and those that satisfy the low-level technical requirements.

Low-Level Really Is Low


Posted on Sep 29, 2014 | 0 comments

Spaces From Noise


Impulse responses are great for recreating spaces, whether it is a resonant glass bottle or a large cave. Here’s a handy trick for sculpting your own impulse responses, and therefore your own reverbs, from something that we spend a lot of time getting rid of — noise!

If you listen to an impulse response by itself, you’ll find that it has noise-like qualities, except that the frequency response changes over time. This isn’t surprising, as the sine sweeps and pistol shots used to capture impulse responses are essentially bursts of noise.

For the examples below, I’ve used Logic’s Space Designer, but this technique is possible with any convolution reverb. The white noise samples were processed in Logic, bounced out as a WAV file and then dropped into Space Designer’s interface. [Space Designer’s dry level was set to 0dB and wet level to -6dB, with filter and volume envelopes bypassed]
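The same recipe can be sketched outside a DAW. Below is a rough Python version (names and numbers are illustrative, and this is not how Space Designer works internally): generate exponentially fading white noise, then convolve it with a dry signal. Real convolution reverbs use FFT-based partitioned convolution instead of this slow direct loop.

```python
import math
import random

def noise_impulse_response(fs, seconds=1.5, seed=0):
    """White noise with an exponential fade-out: a hand-made reverb IR."""
    rng = random.Random(seed)
    n = int(fs * seconds)
    # exp(-6.9) is roughly -60 dB of decay by the end of the tail
    return [rng.uniform(-1, 1) * math.exp(-6.9 * i / n) for i in range(n)]

def convolve(dry, ir):
    """Direct convolution: every input sample fires a scaled copy of the IR."""
    out = [0.0] * (len(dry) + len(ir) - 1)
    for i, d in enumerate(dry):
        if d == 0.0:
            continue
        for j, h in enumerate(ir):
            out[i + j] += d * h
    return out

ir = noise_impulse_response(fs=8000, seconds=0.25)
wet = convolve([1.0, 0.0, 0.0], ir)  # an impulse input returns the IR itself
```

Shaping the noise before it becomes the IR (EQ, filtering, pitch modulation) is exactly what sculpts the character of the resulting reverb.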

Here’s an example of a white noise sample that was about 1.5 seconds long with an exponential fade out. The samples below include the dry noise sample (watch your speaker/headphone level) followed by the convolved output (apologies for the rather sad drum loop).

[Audio: Noise_Dry_Fade]


Posted on Jul 29, 2014 | 1 comment

Car Engine Approximation In Max

Years ago, when I first started dabbling in the deep and dark world of Max/MSP, I attempted to create the sound of a car engine. This month’s theme (which is ‘vehicles’, if you didn’t know) reminded me of it. I opened up the patch after ages and was a bit appalled by the state of it. There are hidden skeletons in every old patch!

Instead of digging through a dated project, I recreated a patch/idea I had used about a year ago when designing sounds for a remote controlled toy airplane. I tried to adapt the simplicity of that implementation to a ‘regular’ car engine.

Here’s a sample of what it sounds like (all synthesised):

This patch was put together fairly quickly and could do with more refinements to improve the character and reduce the amount of ‘digital-ness’ in the sound. The model quite obviously breaks at higher frequencies/RPMs.
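For readers without Max, a very rough engine-like tone can be sketched in Python. This is not the patch from the article, just a common starting point for engine approximation: harmonics of the cylinder firing rate plus a little noise, with all names and numbers illustrative.

```python
import math
import random

def engine_tone(rpm, fs=48000, seconds=0.5, cylinders=4, seed=0):
    """Very rough engine-like tone: harmonics of the cylinder firing
    rate plus noise. In a four-stroke engine each cylinder fires once
    every two revolutions, hence the cylinders/2 factor."""
    f0 = (rpm / 60.0) * (cylinders / 2.0)  # firing rate in Hz
    rng = random.Random(seed)
    n = int(fs * seconds)
    out = []
    for i in range(n):
        t = i / fs
        s = 0.0
        for h in range(1, 6):                  # a few harmonics, rolling off
            s += math.sin(2 * math.pi * f0 * h * t) / h
        s += 0.1 * rng.uniform(-1, 1)          # combustion "roughness"
        out.append(0.3 * s)
    return out

idle = engine_tone(rpm=900)   # firing rate 30 Hz
rev = engine_tone(rpm=5400)   # firing rate 180 Hz
```

Like the patch described above, a static model such as this falls apart at high RPMs; sweeping the firing rate, modulating harmonic levels per cylinder and filtering the noise are the usual next refinements.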

Here’s the patch:
