
Posted by on Jan 27, 2015 | 0 comments

Forging New Territory: Audio Design Education, Non-Traditional Disciplines, & Diversity


Chanel Summers & a Huxley

Guest Contribution By Chanel Summers

As a woman who has built her own career on a platform of game audio, game design, and game production, I am passionate about programs that teach and empower women to follow a similar path. There are few women in the field of video game audio, and fewer still are even aware of the opportunities. I have been on a mission to change that: to introduce this field as a career option to young women, to show that women can lead in it and be highly successful, and perhaps even to change the complexion of the video game industry. This matters because for an industry or a creative medium to achieve its full potential, it must draw strength from diversity: a diversity of backgrounds, cultures, perspectives, and experiences. Each person approaching opportunity from a different starting point keeps things fresh, vibrant, exciting, and new.

That is why I found myself, two years ago, at Forest Ridge School of the Sacred Heart, an all-girls school in Bellevue, Washington, proposing a summer workshop called “Artistic Expression in Game Audio Design”. The workshop would give young women an artistic and technical foundation in audio for interactive media and expose them to the career possibilities in video game audio. It would be based on the class that I created and teach at USC’s Interactive Media & Games Division in the School of Cinematic Arts (“Audio Expression”), turning a semester-long course into an intensive one-month workshop for Forest Ridge. Because we chose not to “adapt” the material for a younger audience, these girls would get the same material I teach to undergrads, grads, and PhD students. In fact, it would be even more intensive, as they would have four hours of class every day rather than one session a week. By choosing not to “dumb down” the curriculum for students just because they are younger or new to the field, we showed that we respected the young women, and they in turn responded with vigor.

Read More

Posted by on Jan 26, 2015 | 4 comments

Learning Audio Middleware Online: Where to Start?

Where to start?

Knowing your way around audio middleware is quickly becoming a required skill for getting a job in the game audio industry. If you are a sound designer and/or composer looking to break into the world of game audio, learning how to work with various audio middleware solutions will not only give you a head start and set you apart from the “competition”, but it will also give you a greater understanding of how the technical side of things works, and consequently a greater appreciation of the inner workings of game audio. Since Audiokinetic and Firelight Technologies announced their free license options last year (granted, with some limitations), making Wwise and FMOD Studio available at no cost to indies and small game development companies, these programs are being used more than ever. There is no reason not to employ them to create a more interactive and coherent soundscape for the game you are working on, while also making life easier for yourself and the game developers.

But in the vast sea of knowledge and misinformation that is the internet, how do you know where to start learning about these programs? This guide will hopefully help with that, providing a general outline of the resources and learning options available right now so you can find out more about audio middleware as quickly and efficiently as possible.

Read More

Posted by on Jan 20, 2015 | 0 comments

Audio Education – A view from the middle

Clearly the fates have decreed that I should not only be involved in writing a new audio degree as education month comes around, but that I should also be well into my own studies, working towards a Master’s degree in Sound Design. In getting to this point, however, my own audio education has meandered along most of the routes one might take in pursuit of a career in audio. I’ve volunteered at studios and received on-the-job (and in-the-pub) training. I’ve studied at private colleges and run my own studio. Each of these diversions had intrinsic value, and it’s unlikely I would be in the position I am now without having taken them. Yet, as both a lecturer and a student, I am acutely aware that there are mixed views on the value of a formal audio education, not just from potential students but also from employers and practitioners (e.g. this interview from a few weeks ago). So I thought it might be useful to talk a little about the nature of writing an audio degree, from the middle so to speak. (Just to note, I am based in the UK, so this relates to the processes undertaken here; I can’t speak for anywhere else.)

Read More

Posted by on Dec 24, 2014 | 1 comment

Our Favorite Sounds of 2014

 

Photo by Hunter Desportes


The year 2014 has been one of many great articles, interviews, and discussions here at Designing Sound, and we want to thank all of our readers for their attention, suggestions, contributions, and overwhelming support. There have been so many great films, shows, games, and events this year that we thought we would share some of our favorites for you to go back and check out in case you missed them!

This post is full of links and YouTube videos, so please be patient while everything loads. I assure you it is worth it!

Read More

Posted by on Dec 23, 2014 | 5 comments

Time And Psychoacoustics

When designing audio we are often thinking of time across a large variety of units: samples, milliseconds, frames, minutes, hours, and more. This article is inspired by a conversation I had with Andy Farnell about a year ago at a pub in Edinburgh, right before a sound design symposium, where we discussed time and the role it plays when it comes to designing audio.

Like most other audio designers out there, I started twiddling the knobs and sliders well before I had an understanding of the underlying DSP. It was an eye-opening experience to realise that almost every single DSP effect is related to time. So let’s look at a few common DSP tools used in everyday sound design and analyse how time and the precedence effect play a role, starting from hundreds of milliseconds all the way down to a single sample.
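As a quick illustration of juggling those units, here is a small sketch of converting between samples, milliseconds, and frames. (Illustrative only; the 48 kHz sample rate and 30 fps frame rate are my own assumptions, not values from the article.)

```python
SAMPLE_RATE = 48_000  # Hz; an assumption -- 44.1 kHz is equally common
FRAME_RATE = 30       # video frames per second; also an assumption

def ms_to_samples(ms, sr=SAMPLE_RATE):
    """Milliseconds -> whole samples at the given sample rate."""
    return round(ms * sr / 1000)

def samples_to_ms(n, sr=SAMPLE_RATE):
    """Samples -> milliseconds."""
    return n * 1000 / sr

def frames_to_samples(frames, fps=FRAME_RATE, sr=SAMPLE_RATE):
    """Video frames -> audio samples."""
    return round(frames * sr / fps)

print(ms_to_samples(10))     # 480 samples
print(frames_to_samples(1))  # 1600 samples per 30 fps frame
```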

Precedence Effect

The precedence effect is a psychoacoustic effect that sheds light on how we localise and perceive sounds. It has helped us understand how binaural audio works, how we localise sounds in space, and how reverberation and early reflections behave. From Wikipedia:

The precedence effect or law of the first wavefront is a binaural psychoacoustic effect. When a sound is followed by another sound separated by a sufficiently short time delay (below the listener’s echo threshold), listeners perceive a single fused auditory image; its spatial location is dominated by the location of the first-arriving sound (the first wave front). The lagging sound also affects the perceived location. However, its effect is suppressed by the first-arriving sound.

You might be familiar with this effect if you’ve done any sort of music production or mixing. Quite often a sound is hard panned to one of the two stereo speakers and a delayed copy (10-30ms) of the sound is hard panned to the other speaker. Our ears and brain don’t perceive two distinct sounds, but rather an ambient/wide-stereo sound. It is a cool technique for creating a pseudo-stereo effect from a mono audio source.
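That pseudo-stereo trick can be sketched in a few lines. This is a bare-bones illustration in plain Python (real work would happen on audio buffers or DAW sends; `pseudo_stereo` is a name of my own invention):

```python
def pseudo_stereo(mono, delay_ms=15, sample_rate=48_000):
    """Fake a wide stereo image from a mono signal via the precedence
    effect: dry signal to the left channel, a delayed copy (within the
    10-30 ms fusion window) to the right."""
    delay = round(delay_ms * sample_rate / 1000)  # 15 ms -> 720 samples
    left = list(mono) + [0.0] * delay    # zero-pad so channels align
    right = [0.0] * delay + list(mono)   # the delayed copy
    return left, right

left, right = pseudo_stereo([1.0, 0.5, 0.25])
```

Because the delay is below the echo threshold, the ear fuses the two channels into one wide image instead of hearing a distinct echo.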

The first 30 seconds of the video below show an example of the precedence effect in action. The delayed signal smears the original signal with phasing artefacts, after which it seems to split from the original signal and become a distinct sound of its own.

Echoes And Reverb

Echoes are distinct delays. Reverberation is made up of early reflections, which are delayed sounds that arrive at the listener first (right after the direct sound), followed by a tail that consists of many such delays diffused into a dense cluster. Artificial reverbs are quite often approximated using networks of delays that feed back into each other (convolution reverbs behave differently).
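To make the “networks of delays that feed back” idea concrete, here is a single feedback comb delay, the elementary unit such networks are built from. (A toy sketch in plain Python, not any particular reverb algorithm.)

```python
def feedback_delay(signal, delay_samples, feedback=0.5, tail=4):
    """A single feedback (comb) delay line. Each pass through the loop
    produces another, quieter copy of the input, so an impulse becomes
    a train of decaying echoes."""
    out = list(signal) + [0.0] * (delay_samples * tail)  # room for the tail
    for i in range(delay_samples, len(out)):
        out[i] += feedback * out[i - delay_samples]
    return out

# A single impulse in; echoes at samples 3, 6, 9, 12 out,
# with gains 0.5, 0.25, 0.125, 0.0625:
echoes = feedback_delay([1.0], delay_samples=3, feedback=0.5)
```

A reverb network chains and cross-feeds many of these (with different delay lengths) until the echoes blur into a dense tail.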

Read More

Posted by on Dec 15, 2014 | 0 comments

“Hearing Lips and Seeing Voices” – The McGurk Effect

Want to see something that’ll mess with your head?

[Embedded video: McGurk effect demonstration]

Now, you may not have noticed anything all that strange watching the video, but mute the sound and watch it again. After that, close your eyes and listen to just the audio. Notice anything strange now? You’ve just witnessed one of the more interesting perceptual illusions, the McGurk effect.

Read More

Posted by on Dec 11, 2014 | 0 comments

Psychology of 10 Years of Sound in World of Warcraft

Blindfolded character seemed appropriate.

For this month’s topic of “Psychoacoustics” I thought I’d stretch the definition a bit and finally write an article I have wanted to write for a while now: a discussion of the sound design of World of Warcraft. Specifically, the unscientific observations of someone (me) who has regularly experienced these sounds for fully a third of their life. What I would like to discuss are my own assumptions and observations about how these sounds work in a constantly evolving MMO, as someone who has played this game extensively. I feel I am in a semi-unique position, having played such a long-running game while, during most of that time, having some amount of sound education; I also write articles for this here site on the interwebs. This article should be viewed in an opinion or editorial context rather than a scientific or academic one.

 

Read More

Posted by on Oct 31, 2014 | 4 comments

What’s The Deal With Procedural Game Audio?

Guest contribution by Martin Roth

We’ve all heard the promises of procedural game audio. A veritable Valhalla where sounds are created out of thin air, driven by the game engine, eliminating the need for huge sample libraries and tedious recording. Sounds great! So why aren’t we hearing more of it in games today? We’ve all experienced Rockstar’s work in GTA 5; those bicycles sure do sound great! Some indie games such as Fract or Pugs luv Beats have dabbled. But it seems that if procedural audio were all it promised, it would be much more common. What’s the deal?

The hard truth is that while the idea is great in theory, no one knows what they’re doing in practice. The field is lacking in design principles, tools, and technical performance. This is especially true considering the end-to-end workflow. On one end, high-level tools are needed to give designers the flexibility to explore sound and its interactions. On the other, low-level tools are needed to make those creations available where they’re needed, be that on the desktop, mobile, console, embedded systems, web, or anywhere else. The end-to-end workflow is key to the adoption of procedural audio.

For the purposes of this article the terms procedural, generative, and interactive as they relate to sound and composition will be used interchangeably. Their distinction is important, but we’ll leave that for another article.

Scarce Design Resources

The field suffers from a lack of resources for learning how to make procedural audio, and from a lack of standards for judging its merits. Undoubtedly the best learning resource is Andy Farnell’s book Designing Sound. Its presentation focuses on design from first principles, which may leave those without a technical background struggling to follow the reasoning (don’t let that stop you from reading it!). The book is written for clarity, not for absolute performance or maximum sound quality. Beyond that, resources are scattered, and learning usually depends on personal interest or continued education specifically on the topic.
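To give a flavour of what designing from first principles means in practice, here is a deliberately toy wind-like texture computed entirely from a model rather than from samples. (This is my own simplification for illustration, not a patch from the book; all the constants are arbitrary.)

```python
import random

def wind_sketch(n_samples, sample_rate=48_000, seed=1):
    """Toy procedural 'wind': white noise through a one-pole low-pass
    whose coefficient is slowly modulated to suggest gusting. No sample
    libraries involved -- every value is computed from the model."""
    rng = random.Random(seed)  # seeded for reproducibility
    out, lp = [], 0.0
    for i in range(n_samples):
        # slow sweep of the filter coefficient gives a gusting quality
        gust = 0.02 + 0.015 * (i % sample_rate) / sample_rate
        lp += gust * (rng.uniform(-1.0, 1.0) - lp)  # one-pole low-pass
        out.append(lp)
    return out

samples = wind_sketch(48_000)  # one second of 'wind'
```

Even this crude sketch shows both the promise (infinite variation, tiny footprint) and the problem (every perceptual quality must be designed by hand).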

Tools, Well Almost

Undoubtedly there are many excellent tools available for designing sounds, especially musical ones. A nearly fifty-year history of electronic music has created a wealth of knowledge, best practices, and interfaces for exploring sound. But here the end-to-end argument is critical: unless the designer can run the sounds on the target platform, the tools are not helpful except as part of the creative process.

In order to satisfy this requirement, the available tools are generally limited to any number of audio programming languages (or even general-purpose programming languages). These include Pure Data, Max/MSP, SuperCollider, Csound, ChucK, C/C++; the list goes on. Many of these have robust and knowledgeable communities supporting them. All of these tools allow the user to “do stuff” with sound, but how well they meet the needs of sound designers is debatable. Many would say that the learning curve is far too steep. The target audience for these tools has typically been those more interested in experimental work.

This leaves us in the difficult situation where the ideal solution is fragmented between tools that satisfy the high-level design requirements and those that satisfy the low-level technical requirements.

Low-Level Really Is Low

Read More

Posted by on Oct 27, 2014 | 1 comment

The Making Of Thunderstorm 3 SFX Library Part 2

Guest Contribution by Frank Bry

Check out part 1 of The Making of Thunderstorm 3 SFX here.

In this second and final article I will discuss microphone patterns, recording-device preamp settings, editing, and the final mastering phase of this collection. Before I dive into all the technical mumbo jumbo, I want to say that when I’m setting up and actually recording thunder and lightning I get quite excited. There must be something in the air, alien mind control beams, or just the anticipation of getting the “ultimate” thunder clap or lightning strike. It’s very hard work and involves exercise, listening, tracking the storms, and watching the skies. I feel like a kid in a candy shop, and the recording is the easy part. So, now we begin. Part 2: The Real Work Begins.

Read More

Posted by on Oct 22, 2014 | 0 comments

Synthesis Tips for the Non-Synthesist

Massive_Screenshot

Guest Contribution from Steven Smith

Introduction

In some ways it seems quite strange to find myself authoring a post on synthesis whose main topic is “Not everyone needs to be a synthesist”. But from another, more practical angle, it makes a great deal of sense. Many of us have already found ourselves naturally diving into certain areas of synthesis and somewhat skating around others. So… if you are not a synthesis geek, this article is for you.

‘Why would it be helpful to explore this area?’ you may be wondering. Even though today’s virtual instruments commonly ship with hundreds or even thousands of presets, many users will still find themselves passing over sounds that are not quite right. Yet with some fundamental knowledge and strategies, I feel most non-synthesists could quickly address some of these sounds’ shortcomings and reshape them enough to put them into service.

This is precisely my goal. I hope to cover some fundamental strategies and principles relating to synthesis and synthesizers in order to facilitate what I like to think of as quick fixes. Even though these strategies will not work 100% of the time, you should find them coming to the rescue quite often.
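As one concrete example of the kind of quick fix I mean, consider the most common complaint about a preset: it sounds too dull. Opening the filter is usually the cure. The sketch below (plain Python, a naive aliasing-prone oscillator, purely illustrative and not tied to any particular synth) runs the same sawtooth through a closed and an opened low-pass filter:

```python
import math

def saw(freq, n, sr=48_000):
    """Naive sawtooth oscillator (aliasing ignored for illustration)."""
    return [2.0 * ((i * freq / sr) % 1.0) - 1.0 for i in range(n)]

def one_pole_lowpass(signal, cutoff_hz, sr=48_000):
    """The simplest subtractive-synthesis move: a one-pole low-pass.
    Raising the cutoff is the classic quick fix for a dull preset."""
    g = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sr)  # filter coefficient
    out, lp = [], 0.0
    for x in signal:
        lp += g * (x - lp)
        out.append(lp)
    return out

raw = saw(110.0, 4800)                    # bright, buzzy sawtooth
dull = one_pole_lowpass(raw, 300.0)       # closed filter: dark preset
brighter = one_pole_lowpass(raw, 4000.0)  # opened filter: the quick fix
```

The same one-knob logic applies on a real synth: find the filter cutoff first, because it does more to a preset’s character than almost anything else.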

From the outset it is my intention to populate this article with images from multiple synths, in a small attempt to expose you to as many different views as possible. Given that each synth has its own GUI conventions (in addition to its own sound design strategies), I hope this will add to the usefulness of the material presented.

There is also a body of knowledge we must have to enable us to find sounds, change them, and then save those changes. Let’s jump in…

Read More