It’s a badge of honor to be able to work with precision and shape sound and music to fit our vision. But unpredictability can keep us as well as our audience on edge in a delightful way. How can we let our sounds get away from us?
Keeping it Fresh
A Really Cool Thing about making interactive sound is that the final product can still surprise you, the creator. Yeah, sometimes that’s because it’s a bug, but what I’m really talking about is those moments where the combination of human input, simulation, and your set of playback rules coalesces into an organism and does something good but unexpected. It’s thrilling to hear it happen for yourself, and if it comes together in the right way, the audience will perceive it as having a special quality of life. So what are some ways to make that happen?
The first thing to come to mind when talking about creating unpredictable behavior is randomness. Fire a random sound, generate a drunk walk to modulate an effect, use noise to create procedural ambiences. Most of the time, you’ll be specifying a range or a cooldown for the randomness, lest it become actual noise. As techniques for creating uncertainty go, I’d call this one a “subtractive” process. By that I mean you use randomness to create the uncertainty, but then shape it with a variety of math functions to rein it in to where you want it to be.
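As a sketch of that “subtractive” idea, here’s a bounded drunk walk in Python. The parameter name and ranges are just placeholders for whatever your engine exposes: the random step creates the uncertainty, and the clamp is the shaping that reins it in.

```python
import random

def drunk_walk(value, step=0.05, lo=0.0, hi=1.0):
    """Take one step of a drunk walk: add a small random offset,
    then clamp the result back into [lo, hi]."""
    value += random.uniform(-step, step)
    return max(lo, min(hi, value))

# Modulate a hypothetical effect parameter (say, a filter cutoff)
# over a few update ticks.
cutoff = 0.5
for _ in range(8):
    cutoff = drunk_walk(cutoff)
    # ...send `cutoff` to your audio engine here
```

Swapping the clamp for a different shaping function (a wrap-around, a soft knee, an easing curve) changes the character of the walk without touching the randomness itself.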
While randomness is a perfectly good tool to use when creating emergent behavior, its primary drawback is that it’s the randomness that’s governing the changes over time, and not something directly specified. So what about deterministic uncertainty, or put another way, complexity sufficient to hide the rules?
One of my favorite techniques for creating generative music is phasing. Here’s one definition of phasing:
[perfectpullquote align="full" cite="Wikipedia" link="https://en.wikipedia.org/wiki/Phase_music" color="" class="" size=""]Phasing is a compositional technique in which the same part (a repetitive phrase) is played on two musical instruments, in steady but not identical tempi.[/perfectpullquote]
A looser definition I like is that phasing occurs whenever two or more musical parts loop in such a way that they eventually go out of phase with each other (same idea as phase cancellation, but more musical!). It may have gotten its name from the tape compositions of Steve Reich, but the concept’s been in use about as long as there’s been music. In West African drumming, for example, polyrhythms play a huge role in creating the overlapping, evolving sound of the music. With this definition, you can see how variable-length step sequencers, Euclidean rhythm generators, and Markov chain selection mechanisms can be applied to produce this effect.
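A toy illustration of that looser definition (the patterns here are made up): two loops with co-prime lengths only realign after the least common multiple of their lengths, so the combined pattern evolves far longer than either loop on its own.

```python
from itertools import cycle, islice
from math import lcm

loop_a = [1, 0, 0, 1, 0]        # a 5-step pattern
loop_b = [1, 0, 1, 0, 0, 1, 0]  # a 7-step pattern

# The pair of loops only lines up again after lcm(5, 7) = 35 steps,
# even though each loop by itself repeats every few steps.
period = lcm(len(loop_a), len(loop_b))
combined = list(islice(zip(cycle(loop_a), cycle(loop_b)), period))
```

Trigger a sound whenever a step reads 1 and you have a minimal phasing sequencer; make the loop lengths editable at runtime and the texture reorganizes itself every time you tweak them.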
But it doesn’t have to be music. Phasing could just as easily be applied to your non-musical sound design. For example, you could use a variable-length sequencer to trigger environmental sounds, creating a rich, non-repeating ambience. Or try it out on something less atmospheric, like overlapping a handful of granulators with different pitch and speed settings to compose an engine sound.
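To sketch the ambience example, here’s a minimal variable-length sequencer in Python. The sound names, periods, and the idea of returning a flat event list are all placeholders; the point is that each layer loops on its own period, so the layers phase against one another and the combined soundscape takes a long time to repeat.

```python
def schedule(layers, duration):
    """Return sorted (time, sound) trigger events covering the first
    `duration` seconds, one stream of triggers per layer."""
    events = []
    for layer in layers:
        t = 0.0
        while t < duration:
            events.append((round(t, 2), layer["sound"]))
            t += layer["period"]  # each layer loops on its own period
    return sorted(events)

# Three hypothetical environmental layers with deliberately
# unrelated periods (in seconds), so they drift out of phase.
layers = [
    {"sound": "wind_gust",   "period": 7.3},
    {"sound": "bird_chirp",  "period": 11.1},
    {"sound": "leaf_rustle", "period": 4.7},
]

events = schedule(layers, 30.0)
```

In a real engine you’d feed these events to your playback scheduler rather than precompute a list, but the structure is the same: independent loops, no shared clock, emergent variety.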