Quadrant is a new modular sound generator and effects-processing plugin geared towards experimental sound design. It features a broad selection of modules, connectable through a graphical patching system, allowing a wide, customizable range of sounds and effects. The plugin can be used to generate textures or as an effects processor, providing a number of different ways to create uniquely futuristic sounds.
Ira Greenberg is director of the Center of Creative Computation and a professor at SMU. He’s had a career as a painter, animator, designer, programmer, art director, professor and author. In addition to having lectured at universities across the world, he’s authored three books on creative coding, including the first major reference for Processing. His current research involves the development of an “idiosyncratic” 3D graphics library titled “Protobyte”. Protobytes are algorithmically generated virtual lifeforms based on mathematical expressions.
In addition to my work in sound, I have dabbled around with creative programming languages like Processing and openFrameworks. There is something quite exciting about thinking of code as a design tool. This thinking extends even beyond visual coding languages to graphical sound languages like Max/Pd to more hardcore languages like C. If an equaliser or compressor could be a creative tool, why not code at a much more granular level? This gets even more interesting when thinking of procedural sound and synthesis, where sound is designed using algorithms inspired by the real world.
Ira recently visited Edinburgh with his students and was kind enough to spend some time with me and share his thoughts for Designing Sound. There’s something magical in the overlap of design, code, creativity and logic.
DS: Ira, let’s start with your background.
IG: I was trained in painting, both in undergraduate and graduate school. Prior to that I had studied the sciences a little bit. I was interested in the sciences, but as I progressed through school I got more interested in art and creative writing. By the time I graduated from school I decided that I pretty much wanted to be a painter. I did that for a number of years — my schooling was between ‘84 and ‘92 — and towards the end of my graduate school years I got involved in computer graphics a little bit, more pragmatically to pay the bills, and I got involved in some graphic design work. My interest originally was completely in studio art. As I
Guest contribution by Michael Theiler (Kpow Audio)
Situating an Ambience
When creating ambiences for games (this applies equally to film), I am striving to make them blend into the background without masking any important in-game sounds. For most ambiences, these are the most important qualities to get right.
To achieve this, I first focus on the repetition and timing between audio occurrences in the sounds. This means spacing sounds, and adding and removing occurrences in my audio sequence. I then work on the frequencies in the sounds, using equalisation to mould them into the right sound. Finally, I work on their sound propagation and the sound of the space they are to inhabit. These are the steps necessary to mould a sound into something suitable for the space. Just adding reverb is not enough – the sound needs to be purpose-built for the space’s reverberation and delay treatment.
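As a quick illustration of that first step, the spacing and timing of occurrences can be sketched as a small randomized scheduler. This is a hypothetical Python helper, not part of any real tool, and the gap values are made up:

```python
import random

def schedule_occurrences(duration_s, min_gap_s, max_gap_s, seed=0):
    """Place sound occurrences on a timeline with randomized gaps,
    so a looping ambience never settles into an obvious rhythm."""
    rng = random.Random(seed)
    times, t = [], 0.0
    while t < duration_s:
        times.append(round(t, 2))
        t += rng.uniform(min_gap_s, max_gap_s)
    return times

# e.g. scattered bird calls in a 60-second ambience bed, 4-12 s apart
print(schedule_occurrences(60, 4.0, 12.0))
```

Jittering the gaps rather than fixing them is what keeps the repetition from becoming audible over a long ambience.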
When building the space that sounds occupy, it may sometimes seem like overkill to load up those DSP-hungry (though wonderful) reverb plug-ins. They can be a pretty big load even for the mildest amount of spatialization, and it can also be time-consuming to configure one to emulate an outdoor space. I thought it would be worth sharing a trick of mine for both situations: a low-to-no-DSP method of spatialization that allows us to “fake the space.” I should qualify this. We aren’t going to look at how to fake a traditional reverb, but at how to add a mild sense of space to the sounds we work with. We’re just going to give them a little multichannel presence to better situate them in the world. Before I get into the specifics of the process, though, let’s review some science!
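To give a rough feel for what “multichannel presence” without reverb can mean, here is one generic low-DSP approach, sketched in Python/NumPy: spread a mono bed across channels with short, unequal delays and gain offsets. The function name, channel count, and delay/gain values are all illustrative assumptions, not necessarily the exact trick in question:

```python
import numpy as np

def fake_space(mono, sr, delays_ms=(0, 11, 17, 23), gains=(1.0, 0.7, 0.6, 0.5)):
    """Duplicate a mono signal across channels with short (Haas-range)
    delays and falling gains: a cheap hint of width and depth,
    not a substitute for a real reverb."""
    n = len(mono)
    out = np.zeros((len(delays_ms), n), dtype=np.float32)
    for ch, (d_ms, g) in enumerate(zip(delays_ms, gains)):
        d = int(sr * d_ms / 1000)            # delay in samples
        out[ch, d:] = g * mono[: n - d]      # shifted, attenuated copy
    return out
```

Keeping the delays under roughly 30 ms means the copies fuse with the original rather than reading as echoes, which is why this reads as “space” instead of “delay.”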
There’s a great little article over on The Pro Audio Files exploring the difference between linear- and minimum-phase EQs. I’ve been lucky enough to discuss the subject with some people far smarter than I am at AES conventions, and the article provides a concise but pointed introduction to it.
One of the most popular Google searches relating to linear-phase EQ is “linear-phase explained,” which still returns very poor results. After searching, I was still grasping at straws, so I said “to hell with it” and ran my own experiment.
I created a new Pro Tools session with a 1 kHz oscillator at line level, running separately through a minimum-phase EQ plug-in and a linear-phase EQ plug-in. I recorded the results and was astounded by the differences whenever I boosted or cut, at any frequency or bandwidth.
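For readers without a DAW handy, the gist of this experiment can be approximated in a few lines of NumPy: run a 1 kHz sine through a causal one-pole low-pass (the minimum-phase case), then through the same filter applied forward and backward (the phase shifts cancel, which is the behaviour a linear-phase EQ is after), and measure each output’s phase against the dry tone. The filter choice and helper names are mine, a sketch rather than a model of any particular plug-in:

```python
import numpy as np

sr = 48000
f0 = 1000.0
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * f0 * t)            # the 1 kHz test oscillator

def one_pole_lp(x, cutoff_hz, sr=sr):
    """Causal one-pole low-pass: the minimum-phase case."""
    a = 1 - np.exp(-2 * np.pi * cutoff_hz / sr)
    y = np.empty_like(x)
    acc = 0.0
    for n, s in enumerate(x):
        acc += a * (s - acc)
        y[n] = acc
    return y

def zero_phase_lp(x, cutoff_hz, sr=sr):
    """Forward-backward pass: the phase shifts cancel out."""
    fwd = one_pole_lp(x, cutoff_hz, sr)
    return one_pole_lp(fwd[::-1], cutoff_hz, sr)[::-1]

def phase_at(x, freq=f0, sr=sr, start=sr // 2, cycles=100):
    """Phase of `x` at `freq`, in degrees, relative to the dry sine,
    measured over an integer number of cycles mid-signal."""
    n = int(cycles * sr / freq)
    tt = np.arange(start, start + n) / sr
    seg = x[start:start + n]
    i = np.dot(seg, np.cos(2 * np.pi * freq * tt))
    q = np.dot(seg, np.sin(2 * np.pi * freq * tt))
    return np.degrees(np.arctan2(i, q))

min_phase = one_pole_lp(tone, 1500.0)
lin_phase = zero_phase_lp(tone, 1500.0)
print(round(phase_at(tone), 1),
      round(phase_at(min_phase), 1),
      round(phase_at(lin_phase), 1))
```

The causal filter drags the 1 kHz tone noticeably behind the dry signal, while the forward-backward version lands essentially on top of it; that timing difference is what you hear when the two EQ types are compared on the same material.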