My friend David Steinwedel sent me an interesting article he wrote about the audio implementation process for Ace Attack, a game coming soon to iOS.
Ever since Pong premiered, audio implementation in games has mostly followed the same pattern: for every action on screen, a sound plays with that action. This one-to-one relationship between actions and sounds is the most basic strategy for implementing sound in a game. However, it is often not the best approach for maximum impact.
Our first title, Ace Attack, is a great example of a game that would sound terrible using a one-to-one relationship between sounds and actions. The game is a reinvention of the card game War. However, we added flavor by associating military units with the cards, and those units battle it out every round. A Two has a unit of Riflemen, a Four has a unit of Machine Gunners, and so on up the line. Each unit is made up of a set of individuals who all play Warmup, Firing, and either Death or Victory animations (depending on the outcome). The starting points of these animations are randomized to keep things fresh.
Because there is so much happening on screen at once, a one-to-one method would have tons of sounds train-wrecking together and end up sounding like garbage. In fact, I’ve mocked up an example using the one-event, one-sound method.
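To make the problem concrete, here is a minimal sketch of the naive one-event, one-sound approach described above. Everything here is hypothetical for illustration (the event names, the `EVENT_SOUNDS` table, and `sounds_for_frame` are not from the game's actual code); the point is simply that with no mixing or prioritization logic, every individual soldier's event fires its own sound, and the voices pile up:

```python
# Hypothetical one-to-one mapping: each game event triggers exactly
# one sound, with no grouping, culling, or prioritization.
EVENT_SOUNDS = {
    "rifle_fire": "rifle_shot.wav",
    "mg_fire": "mg_burst.wav",
    "death": "death_cry.wav",
}

def sounds_for_frame(events):
    """Return the sound files triggered this frame, one per event."""
    return [EVENT_SOUNDS[e] for e in events if e in EVENT_SOUNDS]

# A unit of six Riflemen and a unit of four Machine Gunners all fire
# in the same frame: ten near-identical sounds stack up at once.
frame_events = ["rifle_fire"] * 6 + ["mg_fire"] * 4
voices = sounds_for_frame(frame_events)
print(len(voices))  # 10 overlapping voices in a single frame
```

With two full units on screen, a single frame already demands ten overlapping voices; the randomized animation start points only smear those triggers across nearby frames rather than reducing them, which is exactly the train-wreck the article describes.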