While it has been out for a while now, I finally got my hands on a review copy of Dehumaniser from Krotos Ltd. Dehumaniser has gotten a good bit of buzz in the professional sound design community, and rightly so. It is a rock-solid solution for quick and easy monster voices. Dehumaniser is “a software standalone vocal processor that allows the production of creature/monster sounds, efficiently in real time. It is designed to produce studio-quality sounds by using multiple layers of sound manipulation techniques simultaneously. Connect a microphone to your sound interface or even use your computer’s built-in microphone and create astonishing creature sounds in seconds, using your voice.”
The TL;DR version of this review is: Dehumaniser is pretty fantastic and you should probably get it. The speed and quality you get are definitely worth £199. You might not use what you make with Dehumaniser on its own, but rather as a layer in an overall creature/monster vocalization. That said, it is certainly possible to work only in Dehumaniser and get exactly what you want for a vocalization. To do so you will have to dig a bit into the Advanced Mode and take advantage of the Animal Convolution, Pitch Shifting, Dual Plug-ins and the rest of its eight processing channels.
Blindfolded character seemed appropriate.
For this month’s topic of “Psychoacoustics” I thought I’d stretch the definition a bit and finally write an article I have wanted to write for a while now, and discuss the sound design of World of Warcraft. Specifically, these are the unscientific observations of someone (me) who has regularly experienced these sounds for fully one third of their life. What I would like to discuss are my own assumptions and observations about how these sounds work in a constantly evolving MMO, as someone who has played this game extensively. I feel I am in a semi-unique position, having played such a long-running game while, during most of that time, having some amount of sound education, and I also write articles for this here site on the interwebs. This article should be viewed in an opinion or editorial context rather than a scientific or academic one.
Guest Contribution by Frank Bry
Check out part 1 of The Making of Thunderstorm 3 SFX here.
In this second and final article I will discuss microphone patterns, recording-device preamp settings, editing, and the final mastering phase of this collection. Before I dive into all the technical mumbo jumbo, I want to express that when I’m setting up and actually recording thunder and lightning I get quite excited. There must be something in the air, alien mind control beams, or just the anticipation of getting the “ultimate” thunder clap or lightning strike. It’s very hard work and involves exercise, listening, tracking the storms and watching the skies. I feel like a kid in a candy shop, and I feel the recording is the easy part. So, now we begin. Part 2: The Real Work Begins.
Guest Contribution by Alex May
If you were born in the ’70s or ’80s and played video games, you’ll no doubt have fond memories of the early days of game audio, when consoles were incapable of playing back more than basic pulse waves or noise. All sounds had to be forged from these primitives, and game SFX were rarely even slightly reminiscent of anything actually real. Now, however, realistic sound is only as far away as your portable recorder or favourite sound library. Realism in sound has become accessible to the point of often being considered a given; a basic assumption of the art.
Enter the idea of “100% synthesized SFX”. This is a self-imposed workflow limitation that declares that all sounds for a project will be synthesized, and not recorded. Foley, vehicles, weapons, combat, ambience, UI, and in certain cases even voices; all produced with synthesizers.
Wait, all synthesized? What could we possibly gain from doing things in such an inefficient and impractical manner? Surely it makes better sense to use tools and methods that are appropriate for the results we’re after, right?
Well, yes, that is true. However, being that ultimately we’re aiming to produce sound that complements the visual style of the game, it may not always be the case that recorded real-world sources are the best fit. If the visual style has a strong character about it, then so should the sound. One method for achieving this character is to place limitations on the production process, and that is what this article discusses: limiting sound production to synthesis. By doing this we can achieve an overarching “stylized realism” that, when paired with equally stylized visuals, can contribute to a sense of immersion in the game world.
Let’s now take a look at some work practices for a 100% Synthesis approach.
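To make the idea concrete before digging into those practices, here is a toy sketch of building a sound entirely from the primitives mentioned earlier (a pulse wave plus an envelope and pitch sweep). This is my own illustrative example, not code from the article; all names and parameter values are assumptions, and a real production setup would of course use a proper synthesizer rather than raw Python.

```python
import math
import struct
import wave

SAMPLE_RATE = 44100

def synth_laser(duration=0.4, start_hz=2200.0, end_hz=180.0):
    """A 'laser zap' SFX from a single pulse-wave oscillator:
    an exponential downward pitch sweep with a linear fade-out."""
    n = int(SAMPLE_RATE * duration)
    samples = []
    phase = 0.0
    for i in range(n):
        t = i / n
        # Exponential sweep from start_hz down to end_hz
        freq = start_hz * (end_hz / start_hz) ** t
        phase += freq / SAMPLE_RATE
        # Square (pulse) wave derived from the running phase
        value = 1.0 if (phase % 1.0) < 0.5 else -1.0
        # Linear fade-out envelope, scaled to leave headroom
        samples.append(value * (1.0 - t) * 0.8)
    return samples

def write_wav(path, samples):
    """Write mono 16-bit PCM samples (floats in [-1, 1]) to a WAV file."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)  # 16-bit
        w.setframerate(SAMPLE_RATE)
        frames = b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
            for s in samples
        )
        w.writeframes(frames)

write_wav("laser.wav", synth_laser())
```

Swapping the sweep direction, envelope shape, or pulse width already yields very different characters from the same primitive, which is exactly the kind of stylized palette a 100% synthesis constraint pushes you toward.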
Guest Contribution by Frank Bry
In this article I will reveal my secrets and techniques for recording decent thunder and lightning. Many, many years and sleepless nights have gone into perfecting the art of recording the thunderstorm, and I will finally share. But first, I want to share a little history and tell you how I developed these secrets and techniques. It was not so easy at first, and here’s the story I’m still alive to tell. Part 1: Live and Learn.