Guest Contribution by Graham Gatheral
Let’s be honest: code can be daunting. All those words and numbers and operators and punctuation marks… For a start, there’s no GUI. How are you supposed to make anything without a GUI?!
Well, as we’ll see later, we can make a GUI-based synth in SuperCollider with just a few dozen lines of code! But let’s put GUIs to one side for now, because SuperCollider’s real power is in its ability to produce flexible and complex dynamic systems directly from code, and without too much trepidation… My aim here is to introduce an audio synthesis programming language to an audience that is, for the most part, more comfortable working with a GUI. So I’ll start off with some simple code examples and then move on to how SuperCollider can use game-code parametric data to drive synthesis ‘patches’ in real time.
If you don’t have SuperCollider already, download an installer here:
Regarding platforms, I’m on Windows 7, but the code should run fine on a Mac as well.
Warning: The code examples below were written for demonstration purposes and have not been fully tested. Please be careful not to expose your ears to loud sounds (particularly when using the metal impacts tuner) as stable behaviour cannot be guaranteed. This is especially critical if using headphones!
A Quick Introduction
SuperCollider consists of three components:
- an object-oriented programming language
- a language interpreter
- a real-time sound synthesis server
When code is executed, it is interpreted and sent to the server, whereupon the sound is generated.
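To make that round trip concrete, here is the classic first example from the SuperCollider help files — boot the server, then ask it to play a sine tone. (This is a minimal sketch; evaluate each line in turn in the IDE with Ctrl+Enter, and keep your volume low.)

```supercollider
// Boot the synthesis server (the interpreter talks to it over the network)
s.boot;

// Build a simple synth — a 440 Hz sine wave at modest amplitude —
// and send it to the server, which generates the sound
{ SinOsc.ar(440, 0, 0.2) }.play;

// Stop all sound at any time with Ctrl+Period (Cmd+. on a Mac)
```

Notice that the curly-brace function is plain language-side code; it only becomes sound once the interpreter compiles it and hands it to the server.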
SuperCollider has had an Integrated Development Environment (IDE) since version 3.6, which is great because now you have everything you need in one place:
- Code editor (where you write your code!)
- Help browser
- Post window (shows the outcome of your code, including any errors)
- Document browser [not shown below]
Photo by Flickr user jm3
April is here, and with it comes our next featured topic. This month, we’re going to be delving into the world of DSP environments…Max/MSP, Pure Data, SuperCollider, Csound, etc. These tool-sets all have a bit of a learning curve, but they present an opportunity to do something truly different from the typical processors we use every day. What’s spectacular about these is that they let you build your own tools.
There’s no doubt that getting into them requires a different mind-set than the standard tools of our trade, but the flexibility and creative opportunities they allow…and often foster…can go way beyond the everyday. If you were dedicated to the idea, you could probably even build your own DAW inside one of them. So, this month, we’re going to show you just a little bit of what these tools can do. Hopefully, we’ll inspire some of you to take the leap. The water’s pretty deep, but there are some amazing sights beneath the surface.
Next month, we’ll be going a bit looser with the theme, “time.” As always, guest contributions are heartily welcomed. If you’d like to contribute to this or next month’s theme, then contact shaun at designingsound dot org.
There are a few educational opportunities coming up next year, all around the globe, and now’s a good time to start planning for them.
This will take up a fair bit of real-estate, so follow the jump to see what’s coming up.
It may be premature for me to turn the focus of the series towards the future, as we find ourselves deep in the throes of current-generation console development, but I think by now those of us submerged in creating ever-expanding soundscapes for games at times suffer under the burden of our limitations. Of course, it isn’t all bad: given a set of constraints, creatively overcoming them can be as satisfying as coloring outside the lines.
I can’t help but feel a little Sci-Fi on occasion when I see some of the interesting work being done academically or within the DIY community. The explosion of information and the accessibility of resources seem to offer those with a mind, and the time, a bottomless well of potential that, when focused, can provide the maker with something to substantiate their creative vision. Whether it’s the current craze for Kinect hacking, a modular code-bending instrument, or the simple pleasures of circuit bending, there are communities of people working together to unlock the inherent abilities of our modern lifestyle devices. That’s not to say that every hack comes with a purpose; for some, the joy is in the deconstruction, the destruction, or the creation of something new.
One technique that keeps showing up in game audio is the pairing of an available game engine with an alternative audio engine not generally associated with game audio. Whether it’s the work of Leonard J. Paul using OSC (Open Sound Control) as a bridge with the HL2 Source engine and, more recently, with Unity, Arjen Schut and his experiments with HL2 and Max/MSP, or this month’s featured Audio Implementation Greats spotlight, Graham Gatheral, I can’t help but see the resourcefulness of a few brave hearts boldly moving forward to fill a gap in the current functionality of today’s game audio engines.