
Posted on Nov 30, 2012

Thanks to our November Contributors

We just wanted to take the time to thank our guest contributors this month:

For our featured Backgrounds and Ambiences articles: Chris Groegler, Chris Didlick, Douglas Murray, and Tim Prebble. For discussing interactive mobile applications: Yann Seznec, Peter Chilvers, Robert Thomas, and Stephan Schütze. Thanks to Chuck Michael and Craig Henighan for sharing their thoughts on Dolby Atmos (as well as Josh Gershman and John Loose from Dolby for providing us with a little more data). And a big thank you also goes out to Ariel Gross for sharing his thoughts, and to Brady Dyck for his interview with Rob Bridgett.

Thanks again gentlemen!

Remember…this is a site for the community, by the community. If you would like to contribute in the future, drop us a line.


Posted on Nov 29, 2012

Ambience and Interactivity

[We are currently having a few problems with page formatting and embedding SoundCloud links, please bear with us until we fix it!]

‘Ambience’ is a word with a broad definition. It is perceived differently and can mean different things to different people. We are constantly surrounded by a wide range of sounds – some natural, some man-made. Ambient sounds don’t necessarily have to exist in the real world. We’ve all felt the power of music and know how subtle changes in a mix of sounds can add up to affect us at a subconscious level.

Continuing on that train of thought, I dug into the world of ambience in interactive mobile applications with:

Yann Seznec: Musician, sound designer, artist and founder of Lucky Frame – designers of the award-winning iPhone apps Pugs Luv Beats and Bad Hotel
Peter Chilvers: Musician and software designer, best known for the series of iPhone apps created with Brian Eno – Bloom, Trope, Air, Scape
Robert Thomas: Interactive composer and CCO at RjDj – The Inception App, The Dark Knight Rises Z+, Dimensions
Stephan Schütze: Composer and sound designer, Director of Sound Librarian and creator of the iPhone app Carmina Avium


DS: What does ambience mean to you?

YS: From a sound perspective, to me ambience is really a situational thing – it is sound that is occurring whilst something else is happening. So my current ambience is the hum of the computer, the typing of someone else in the room, a plane passing overhead outside. Those same exact sounds would stop being ambience in a different situation, so it’s quite relative.

PC: I think of ambience as the set of near-subliminal cues that quietly define an environment. They might be continuous sounds, like the rush of a river or distant traffic noise, or occasional point sounds, like the drip of a tap or a car horn. They invariably carry information the conscious brain is rarely aware of in the reverberation of the sound; they tell the size of the room the listener is in, the materials on the walls, the distance and location of objects and so on. I sometimes contemplate wearing a blindfold for a day to increase my awareness of these sounds. Then I remember that I work all day with a computer screen, and think better of it.

Ambience can be interesting when it’s used to evoke an imaginary world. The same ideas of distance and presence still apply, but the sound sources are unreal while still “natural” sounding. Obviously Brian [Eno] is the master of this kind of domain! With Scape, [the app that we recently released for the iPad,] we’ve provided a library of these types of sources, which the user can recombine in any way they want; it’s as much a way of building worlds as it is building music.

RT: To me it's something to do with a place (which may happen to be real or virtual) that forms your state of mind or shapes your perception.

SS: Ambience is significant for me for a variety of reasons and I will refer to an App I recently launched to cite some examples, but I want to start with a brief story.

Here in the southern hemisphere it is springtime. My wife and I rent a small unit on a block with four residences, each with a very small garden space; nothing amazing but quite pleasant. Our neighbourhood in general is quite good for trees and plants. A few weeks ago we had the perfect combination of season, weather and environment. As I sat in our garden with my eyes closed I could hear about a dozen species of birds across the neighbourhood and smell the glorious scent of dozens of different flowering plants. I could have been anywhere in the world: on top of an amazing mountain range, in a royal garden or a magic forest. It was the non-visual equivalent of a stunning landscape. Ambience is not limited to the non-visual components of our environment, but because we are such a visually based species, we tend to gather direct information from our eyes and emotional influences from our other senses.

As audio designers this provides us with an incredibly powerful method of communication. We can suggest locations, times or seasons, emotional states, potential risks and other information by what we present as an ambience. Carmina Avium is an App I spent the last 12 months developing. At its core it allows users to select from bird species and create their own ambience. It uses generative audio to create non-looping, non-repetitive, personalized ambiences in real time. In this case I have created a product that allows a user to simulate a natural environmental ambience wherever they may be located. Most people associate the sounds of natural ambiences with relaxation and focus, but I could just as easily have created the same App to produce scary or spooky ambiences for Halloween, or dramatic ambiences to jog to.
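The generative, non-looping behaviour Stephan describes can be sketched as a simple event scheduler: randomised gaps and randomised sample choice mean no two runs of the ambience repeat. This is a minimal illustration in Python, not the app's actual engine; the species names and sample filenames are hypothetical.

```python
import random

# Hypothetical sample pool: each species maps to a few call recordings.
SPECIES_CALLS = {
    "magpie": ["magpie_01.wav", "magpie_02.wav"],
    "wren":   ["wren_01.wav", "wren_02.wav", "wren_03.wav"],
}

def schedule_ambience(selected_species, duration_s, seed=None):
    """Build a non-looping event list of (start_time, sample) pairs.

    Randomised gaps and sample choice mean the result never loops,
    which is the essence of a generative ambience.
    """
    rng = random.Random(seed)
    events = []
    for species in selected_species:
        t = rng.uniform(0.0, 5.0)           # stagger each species' entry
        while t < duration_s:
            events.append((round(t, 2), rng.choice(SPECIES_CALLS[species])))
            t += rng.uniform(2.0, 12.0)     # irregular gap between calls
    return sorted(events)

events = schedule_ambience(["magpie", "wren"], duration_s=60, seed=42)
```

A playback engine would then feed each event to the audio layer at its start time; a different seed (or none) gives a fresh, unrepeated ambience every session.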

Scape


Posted on Apr 7, 2012

RjDj – Crafting ‘Dimensions’

Robert Thomas and Joe White from RjDj, known for crafting interactive sound-musical worlds on iOS devices, were kind enough to spend an afternoon sharing their thoughts on interactive soundscapes and music, technical and creative limitations, Pure Data, procedural techniques and their latest app Dimensions. You have most definitely heard of their Inception App, released over a year ago.

DS: Tell us about RjDj

Robert: RjDj was formed in 2008 but was actually initially conceptualised as far back as the 1990s. The first app that was released was very unusual for its time because it was doing things with sound from the microphone and using the accelerometer in ways that were not really supported in the SDK. Since then we have done a wide range of different apps, which are about exploring interactive music or reactive music or augmented music, and that has gone from RjDj to Rj Voyager, through working on projects like the Inception app, to Dimensions, the latest one, which is veering more into the world of games. RjDj was formed by Michael Breidenbruecker, who is one of the cofounders of last.fm, and he is the CEO and driving force behind the company. My role is more in terms of music composition and looking after the sonic side of things; Joe is also working in that area, a bit more oriented toward the reactive infrastructure of how we make music happen in relation to events. He also works a lot more on sound design than I do.

DS: Most of what you do is in Pure Data (Pd). Do you begin the composition and design process in a regular DAW and then migrate to Pd?

Robert: We use a conventional DAW of some kind and export bits that get reassembled in real time based on whatever rules are appropriate for that situation. Much of Dimensions was done in Cubase and some of it in Logic, then exported as tracks which are sometimes stacked one on top of each other and fade in and out based on different user interactions. There are individual hits that get algorithmically put back together on the fly – drum patterns and things like that – and layers of reactively triggered synthesis.
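The stacking-and-fading behaviour Robert describes can be sketched as a gain mapping: each exported stem fades in over its own segment of an interaction range, so raising the control gradually stacks the layers. This is an illustrative Python sketch, not RjDj's implementation; the single 0–1 `intensity` value is an assumed stand-in for whatever interaction data actually drives their mix.

```python
def layer_gains(intensity, n_layers=4):
    """Map a 0..1 interaction value to per-layer gains.

    Layer i begins fading in at intensity i/n_layers and reaches
    full gain one fade-window later, so the stems stack gradually
    as the interaction value rises.
    """
    gains = []
    for i in range(n_layers):
        lo = i / n_layers                 # where layer i starts fading in
        g = (intensity - lo) * n_layers   # 0..1 across its fade window
        gains.append(max(0.0, min(1.0, g)))
    return gains
```

At `intensity=0.0` only silence; at `0.5` the first two of four stems are at full gain; at `1.0` everything plays. In Pd the same idea would be a `[clip 0 1]` after a scaled subtraction per layer, feeding each `[*~]`.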

Joe: In our earlier projects we were trying a lot more to reconstruct all the music inside Pd but we obviously had performance hits and we had to optimise it all. In Dimensions, we created all the assets in the DAW and mixed it as well as we could so that we could concentrate on how it played back in Pd. How we construct the assets in Pd is just as important, as it needs to be simple while achieving the same sort of experience.

Robert: In 2009 we did a project that was extremely ambitious. Almost everything in there was algorithmically put together in the mix. It was all stemmed material or individual tracks that were put back together using Markov states and various trees of possibility, based on user behaviour and all kinds of things. We found it very difficult to achieve a good level of quality when we were that ambitious with the structure. Also, a lot of the end listeners couldn't understand the level of complexity we were producing in terms of the structure and the variants of the music. For them it was just music. Now we concentrate on making one very obvious reactive thing over more linear music.
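The Markov-state reassembly Robert mentions can be sketched as a weighted transition table over named music segments: from the current segment, the next one is drawn at random according to the table. This is a hypothetical Python illustration of the general technique; the segment names and weights are invented, not taken from the RjDj project.

```python
import random

# Hypothetical transition table: for each segment, the possible
# next segments and their relative weights.
TRANSITIONS = {
    "intro":    [("groove_a", 0.7), ("groove_b", 0.3)],
    "groove_a": [("groove_a", 0.5), ("groove_b", 0.3), ("break", 0.2)],
    "groove_b": [("groove_b", 0.5), ("groove_a", 0.3), ("break", 0.2)],
    "break":    [("groove_a", 0.5), ("groove_b", 0.5)],
}

def next_segment(current, rng):
    """Draw the next segment from the current state's weighted options."""
    choices, weights = zip(*TRANSITIONS[current])
    return rng.choices(choices, weights=weights, k=1)[0]

def generate_sequence(start, length, seed=None):
    """Walk the Markov chain to produce a playback order for the stems."""
    rng = random.Random(seed)
    seq = [start]
    for _ in range(length - 1):
        seq.append(next_segment(seq[-1], rng))
    return seq
```

Hooks into user behaviour would then reweight or swap the table at runtime, which is where the "trees of possibility" come in; the quality problem Robert describes is that every stem must sound good after every permitted predecessor.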
