In my previous post I mentioned thinking of sound as data. What is this data? It could be the frequency of an oscillator, the resonance of a filter, the grain size of a granulator, pitch, amplitude…and so on. We could fill pages with parameters that either define or control the qualities of a sound.
If we can break down such parameters into numbers, we have access to a dataset that we can change or define:
10000 (Hz), 6 (dB), 1.6 (Q), C5 (pitch), 50 (ms)
By changing these numbers, we can change the qualities of a sound. Think of it like automation in a DAW, although without the limitations of a mixer layout. We can affect the design and make controlled decisions that affect the outcome of the process.
How do you change these numbers? It could be direct user input (mouse/keyboard), other human interface devices (MIDI controllers, joysticks, cameras, etc.) or other data (sensors, Twitter feeds, weather information, news, etc.) that we can patch in. Maybe the frequency could control the resonance, or time could control the pitch, or a MIDI note (from a controller) could control the amplitude, or the X axis of a joystick could control the grain size or…
..the options are limitless.
Patching the X axis of a joystick directly to the frequency of an oscillator is probably not going to work as we would like it to. The numbers the joystick spits out might not be useful for the oscillator, or rather not useful for us as designers. They could be in the range of 0-127, 0-255 or 0-4096 (as an example). Numbers from 0-20 are not of much use to us as frequencies (unless you plan to bring a building down with massive subwoofers). It's the numbers from 20 to 20,000 that interest us, because when translated to frequency they fall into the audible range. This means we need some sort of system to translate these numbers into something that works for us.
output = (input * 75) + 20
If our stream of numbers falls within a range of 0-255, we could multiply it by 75 and add 20 to that product. This would translate, or scale, the stream of numbers to a range of 20-19145. Now if we patch this scaled range to the oscillator, we would end up with potentially more useful sounds. But there's a lot more we could do here. We can tweak this scaled stream of numbers further. Maybe we could quantise it to fixed frequencies and end up with a more musically tuned output (a chromatic scale, a major scale, or…). We could branch these numbers out and get them to control something else. Maybe the amplitude drops when the frequency rises. Maybe the numbers could be scaled differently so that we have more resolution at the mid-frequencies.
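The three ideas above — linear scaling, quantising to a musical scale, and a non-linear mapping with more resolution in the mids — can be sketched outside a patcher too. Here is a minimal Python sketch (all function names are my own, purely for illustration; the constants match the 0-255 example above):

```python
import math

A4 = 440.0  # reference tuning: MIDI note 69 = 440 Hz


def scale_linear(value, in_max=255.0, out_min=20.0, out_max=19145.0):
    """Linear mapping. With the defaults this is exactly
    output = (input * 75) + 20, i.e. 0-255 becomes 20-19145."""
    return out_min + (value / in_max) * (out_max - out_min)


def scale_exponential(value, in_max=255.0, out_min=20.0, out_max=20000.0):
    """Exponential mapping: equal steps of the controller cover equal
    musical intervals, so the mid frequencies get finer resolution
    than they would with a linear map."""
    return out_min * (out_max / out_min) ** (value / in_max)


MAJOR_DEGREES = {0, 2, 4, 5, 7, 9, 11}  # pitch classes of a major scale


def quantise_to_major(freq):
    """Snap an arbitrary frequency to the nearest-below note of the
    C major scale, via MIDI note numbers."""
    note = round(69 + 12 * math.log2(freq / A4))  # frequency -> MIDI note
    while note % 12 not in MAJOR_DEGREES:
        note -= 1  # step down until we land on a scale tone
    return A4 * 2 ** ((note - 69) / 12)  # MIDI note -> frequency
```

So a joystick value of 128 patched through `scale_linear` and then `quantise_to_major` would land on a musically "legal" frequency instead of whatever raw number the hardware produced. The same chain is easy to build in Max or Pd with a handful of objects.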
Again, the options are limitless.
It boils down to having a plan and knowing what the final product must be. Every design decision (every single patch-cord, every number, every object) must sum to create the desired product. Designing sound doesn’t have to be only about changing pitch, using equalisers and changing amplitude. In such a scenario…it can be anything.
Such thinking might be familiar to those of you who work in game audio. You are often using parameters from the game, which you are then sonifying in real-time. The design is no longer just about the performance, microphone, preamp, hardware effect, plugin, DAW, automation, sample rate, pitch…
…it's also about using other parameters to make a world come alive. Footstep contact, materials, distances, reverb models, engine models, gestures…and the list goes on.
In addition to helping you design your own creative tools, Max and Pd can help you understand both design and sound with a gigantic magnifying glass. The options to manipulate sound can be almost limitless. They are also great at testing out prototypes before diving into the wide and deep world of audio programming. With options like libpd and gen~ this will only get easier with time.
But, the technology doesn’t matter. What are you designing?