Unfortunately, the Erik Aadahl Special is coming to an end. Here’s the last interview I had with Erik, this time talking about the sound design of the two Transformers films. We talk about everything, from the initial concepts to the creation of the robot sounds, the mix, and more.
Designing Sound: Let’s start at the beginning… what were your initial thoughts about the sound design of Transformers when you read the script for the first time?
Erik Aadahl: When I first read the script, I remember thinking how huge it seemed. But I hoped that there could be more to the sound track than just “big and loud” Bay-hem. Fortunately, from the very first scene there were sound moments built in for us to exploit.
“Transformers” opens with the Soccent Airbase sequence, where a Decepticon combat helicopter named “Blackout” hacks into a secret military computer network. The script describes a terrifying alien “shriek” as the bad robot uplinks to the network.
This “shriek” is the only clue that investigators have to the origins of the attack. Ethan Van der Ryn worked on this sound before I ever came on the show, and it gets referenced throughout the movie.
Also, we knew we had to pay homage to the classic original transformation sound. The original was a really simple, iconic sound that everybody remembers and loves. Our hope was to find that iconic quality in the new movie.
DS: How was the work with Michael Bay? What importance does he give to the sound of the Transformers films?
EA: Michael has said many times that sound is 50% of the movie-going experience. He told a story about Spielberg telling him it was “30%”, and Michael countered, “Well, we have room to negotiate”.
As soon as Michael’s picture cut starts to come together, he wants to hear sound. I don’t blame him; the picture works a lot better when the sound is good. And vice versa. So we try to get as much done early as possible. In the case of the second movie, we were already well into editing before principal photography had wrapped, because of a strike-induced compressed schedule.
DS: And what about the relationship with the visual FX team? Was there any strategic alliance to better unite sound and visual effects?
EA: Ethan knew the ILM guys from his days working at Skywalker Sound up north. They would get our tracks, and hopefully some ideas. Bay Films had a high-speed video uplink set up to ILM in San Francisco. But for our crew here in Los Angeles, it was a little tougher to have direct contact with those fellows. Fortunately, they sent us artwork, and we’d ship them sounds, so there was some back and forth.
On the first film, I noticed how much attention the animators gave to our track. We put in a temp sound effect for Starscream flying by before it had been animated, when all we could see was a plate of Hoover Dam with a camera pan. When we finally got the animation, the animators had animated exactly to the F-22 jet sound we had temped into the blank visual. We didn’t even have to adjust sync.
DS: I think one of your biggest challenges was giving each big robot a different life through sound while retaining the idea that they’re the same species. Could you tell us about the use of sound to enhance each character?
EA: The biggest part of the job was to create unique sounds for each of the robot characters. Our sounds needed to convey each robot’s “soul”, a sonic reflection of its spirit. We also wanted to avoid existing robot clichés and think outside the box stylistically. The goal was to create sounds we’d never heard before.
So in the beginning we defined philosophies for each robot. Ethan and I sat down with a pad of paper and brainstormed all the different characters and what “signature” we could give each one. Every robot needed its own sound personality: Optimus was based on air, Bumblebee on buzzing, Jazz on jazzy rhythms, Ratchet on ratchets, Ironhide on heavy iron and weaponry, Blackout on rhythmic chopping rotors, Starscream on screaming turbines, and so on.
Bumblebee’s entire vocal performance was based on sound effects that our team created from scratch, so in effect we vocally “acted” for him. Most of it was our own voices: Mike Hopkins’s and mine, processed. I have a friend who says that when Bumblebee lost his legs in the first film and cried out for Sam’s help, it made her cry. I think Bumblebee wound up being the most emotive character.
Texture and realism were critical tasks for the foley department, who collected props specific to each character’s personality to make every robotic joint and ligament feel real. John Roesch, Alyson Dee Moore and Mary Jo Lang made the “sonic glue” that held these robots together.
Two of my favorite scenes from the movies are the ones where Bumblebee communicates to Sam through his radio. P.K. Hooker took up the task of searching through Paramount movies for lines that conveyed Bumblebee’s meaning. Michael had given us lots of latitude, and it was P.K. who found those takes from Star Trek, Chris Farley, John Wayne, and Jimmy Stewart. You even hear P.K.’s voice preaching a sermon to convey that the Autobots had come “from the heavens above, hallelujah”. With due credit to the screenwriters, P.K. in a sense “wrote” many of Bumblebee’s lines in those scenes.
DS: How did you differentiate the Decepticons from the Autobots? How does the sound identify the evil side and the good side?
EA: With the overwhelming action taking place on screen, a goal was for the audience to be able to distinguish between “bad” and “good” by simply listening.
In music, why does a minor chord sound ominous, and a major chord uplifting? We are somehow programmed to interpret combinations of frequencies emotionally. It is a mystery of the brain, and one that sound can exploit exquisitely.
The “zang” sounds for the Fallen are purer and more fundamental than those of his fellow Decepticons, because he is the original Decepticon. We wanted his energy to sound dangerous and volatile, synthetic and unreal, like a mythic version of the other robots. His sound, which originated from a glitch in our software that wound up making a whole variety of fantastic zangy Fallen sounds, feels instinctively dangerous, though I’m not really sure why. Maybe it resembles the buzzing of electricity, which implies danger.
Perhaps the clearest distinction is between Optimus and his evil brother, arch-nemesis Megatron. Both have “airy” sounds, but Optimus is smooth and warm, and Megatron is hissy and sharp. Though both are “airy”, one feels “good” and the other “bad”.
Also, vocals really help convey personality. For the bad guys, we often use vicious animals or ominous deep growls. The good guys’ breaths and efforts tend to be smoother and friendlier.
DS: And what about the little species? There are a lot of fantastic little robots. How were the sound design decisions made for those little ones?
EA: In “Revenge of the Fallen”, the Reedman sequence was my favorite. Thousands of tiny little ball bearings combine into a razor-sharp robot named Reedman. Our goal was to play the opposite of “big”: to get tiny, quiet and intimate. To make the audience lean in, not get pushed back. For me, the scene plays like a symphony of little sounds.
We made Reedman with buzzing magnets, surging air rifle BB pellets, rolling metal ball bearings, “chiming” steel washers dangling from strings, and a bunch of zippery sounds we constructed out of thousands of little metal clinks. His gurgling chatter was voiced by Reno Wilson and his shrieks by Frank Welker.
In the “hut scene”, a little Decepticon fly scout comes buzzing through the wall, and Sam catches it. I was trying to figure out how to make its buzzing sounds, and one morning while shaving it came to me: my electric razor! I wiggled it around like a fly, and that became the sound for the little critter’s wings.
We also had fun with the “kitchenbots”. We made some of their little machine guns with a typewriter and their missiles with bottle rockets. Their vocals are combos of human and small animal vocals. My favorite is the warthog garbage disposal.
There’s a character named Alice, an attractive coed who turns out to be a killer robot. Ethan and I originally experimented with using orgasm-type screams for all her vocals. We used our own voices, embarrassing to hear before tweaking them. We pitched our recordings to sound female and processed them to sound robotic. It was a little over the top so we toned it down, but you can still hear a few of those sexy sounds in the track.
In “Transformers”, we meet a vicious little Nokia cellphone robot that fires wildly at everyone. Towards the end of the schedule we had an idea: we thought it would be fun to have him yelling in Finnish, since Nokia is from Finland. We recorded some Finns cursing in their language, and pitched them chipmunk-style to fit into the little robot. But Michael had already fallen in love with an earlier sound, so the Finnish curses wound up on the cutting room floor.
I also really enjoyed the little Decepticon Frenzy from the first film, transformed from a boombox to infiltrate Air Force One. His voice was performed by Reno Wilson, who can do amazing contortions with his throat. To supplement the vocal performance, we made Frenzy’s growls, chatters and movements. His clicking growl was made with ticks from a metal wind-up clock, synced to the waveform cadence of a young cougar growling.
DS: How was the robots’ dialogue processed and treated?
EA: Mike Hopkins was the master of robot voices. He recorded all the actors’ ADR, and treated them with dialogue re-recording mixers Kevin O’Connell and Gary Summers on the stage. In the beginning, we tried doing what you’d expect from “robot” treatment – processing the voices to sound really freakin’ awesome. Well, Michael hated it. We persisted on the second movie, giving Soundwave his classic “glass resonance” voice, but updated for the 21st century. That voice gave me goose bumps. But guess what? Michael hated it.
Something about processing voices just bugged him. Any short delay, any zangy tuning, any gentle waveform synthesis, any ANYTHING was shot down. Eventually, Mike found a happy medium. Typically, dialogue comes out of the center channel, but for Optimus, for example, every channel including the sub is used. For the big guys, we could deepen and widen and rumble the voices, but they all had to sound natural. Our first instinct was to go wild with the voices, but who knows, maybe less is more sometimes.
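To make the center-versus-everywhere idea concrete, here is a minimal Python/NumPy sketch of routing a mono voice to all six 5.1 channels. The channel gains and the crude one-pole low-pass feeding the sub are my own illustrative assumptions, not the film’s actual bussing.

```python
import numpy as np

def spread_voice_5_1(mono, sr=48000, lfe_cutoff=120.0):
    """Route a mono robot voice to every 5.1 channel instead of
    center-only. The gains and the one-pole low-pass for the LFE (sub)
    feed are illustrative guesses."""
    # One-pole low-pass so only the rumble reaches the sub
    a = np.exp(-2 * np.pi * lfe_cutoff / sr)
    lfe = np.empty_like(mono)
    y = 0.0
    for i, s in enumerate(mono):
        y = (1 - a) * s + a * y
        lfe[i] = y
    return {
        "L": 0.5 * mono, "R": 0.5 * mono,    # front width
        "C": mono,                            # dialogue anchor
        "LFE": lfe,                           # low-end rumble
        "Ls": 0.3 * mono, "Rs": 0.3 * mono,   # surround spread
    }
```

A plain center-only voice would just be the "C" entry; the idea for the big guys is that every channel carries some of the voice.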
With the Decepticon alien languages, we had the green light to go crazier. We surgically edited human vocals with morphed, synthetic treatments to give the sense of evil data transmission.
Mike Hopkins had a method of extrapolating a language from the English-scripted lines, and it turned out to meld well with sound effects. A lot of the language data sounds were made from a malfunctioning iPod that Ethan broke on the first movie. It made all sorts of insane garbled bytes of sound. We tried to make the morphing of human vocals and synthetic effects seamless.
DS: And what were your main tools/effects for tweaking and getting the transforming sounds? Could you give us an example of any crazy effects chain used to make a particular sound or sequence?
EA: The original “Transformers” cartoon featured a very iconic transformation sound, the 5-beat rhythm of rising or falling splatty pitch.
To make this basic sound, I can start with anything; a 500 Hz tone, for example. Then I’d add a little “MetaFlanger” in Pro Tools, adjusting the rate to get a nice splatty sound. Then I’d automate a pitch rise from start to finish, so the tone ramps up. Then I’d add a tremolo pattern, and tweak that to get the 5-part rhythm. And that’s basically the original cartoon transforming sound.
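For the curious, that recipe can be sketched in a few lines of Python with NumPy. The LFO rate, delay depth and gains below are my guesses, not Erik’s actual MetaFlanger settings:

```python
import numpy as np

SR = 44100  # sample rate

def cartoon_transform(duration=1.0, f_start=500.0, f_end=1000.0, beats=5):
    """Rough sketch of the classic transformation sound: a rising tone,
    a simple flanger, and a 5-beat tremolo."""
    n = int(SR * duration)
    t = np.arange(n) / SR
    # Pitch rise: integrate the instantaneous frequency to get the phase
    freq = np.linspace(f_start, f_end, n)
    tone = np.sin(2 * np.pi * np.cumsum(freq) / SR)
    # Crude flanger: mix in a copy delayed by a slowly oscillating
    # 0-5 ms amount, which gives the "splatty" comb filtering
    max_delay = int(0.005 * SR)
    lfo = (0.5 + 0.5 * np.sin(2 * np.pi * 0.8 * t)) * max_delay
    idx = np.clip(np.arange(n) - lfo.astype(int), 0, n - 1)
    flanged = 0.5 * (tone + tone[idx])
    # Tremolo: a square-ish amplitude pattern dividing the sound
    # into `beats` pulses, giving the 5-part rhythm
    trem = 0.5 + 0.5 * np.sign(np.sin(2 * np.pi * beats * t / duration))
    return flanged * trem

audio = cartoon_transform()  # one second of rising, 5-beat "transform"
```

Writing `audio` to a WAV file and listening is the real test, but even this toy version has the three ingredients Erik describes: ramping pitch, flangey comb filtering, and a beat pattern.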
For these movies, the transformations were usually a little more complicated than that, but the basic idea stayed the same: a multi-beat rhythm with an escalating or decelerating pitch.
We used lots of tools to achieve that. The very first transformation in the movies was for our evil helicopter Blackout. His transformation started with a weird drone that was basically an equalized medium delay applied to a slowed UPS battery buzz recording. As the copter’s rotors fold up, we hear a whining motor, made out of a very simple scanner servo with a pitch acceleration, followed by big shingy shears made out of sword blade slides.
As his transformation intensifies, we start adding big metal crunches as his big parts rearrange and lock into place. The main element for this was big ice crunches, recorded at 192 kHz, slowed to about 20% speed and fattened out with compression. The most important thing to sell the ice was to make it rhythmic — I picked a tempo I liked and matched the ice cracks to that tempo. We supplemented the ice cracks with shotgun cocks to sell the idea of pieces locking.
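That slow-down trick is essentially tape-style varispeed, and the tempo matching is simple quantization. A hedged sketch in Python (the function names and the 90 BPM default are mine, not from the session):

```python
import numpy as np

def varispeed(x, rate=0.2):
    """Tape-style slowdown: resample so the clip plays at `rate` speed.
    rate=0.2 plays at 20% speed and drops the pitch about 2.3 octaves,
    which is how small ice cracks can become huge metal crunches."""
    n_out = int(round(len(x) / rate))
    positions = np.linspace(0, len(x) - 1, n_out)
    return np.interp(positions, np.arange(len(x)), x)

def snap_to_tempo(onsets_s, bpm=90.0):
    """Quantize crack onset times (in seconds) to the nearest beat of a
    chosen tempo, the rhythmic trick described above."""
    beat = 60.0 / bpm
    return [round(t / beat) * beat for t in onsets_s]
```

The compression and layered shotgun cocks would come after this stage; the point here is only the slow-down and the rhythmic placement.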
One specific plug-in story involved making sounds for “The Fallen,” where I was playing with a plug-in called EchoBoy. I had taken some metal groans we recorded, and was running the sounds through different treatments on a Pro Tools insert and re-recording them onto a new track. I often like recording treatments in real time, as opposed to rendering them in AudioSuite, since I can make real-time parameter adjustments.
As I was re-recording the treated metal groans, twisting and turning the plug-in knobs at random, Pro Tools played over a missing fade, which “popped” the plug-in in an unexpected way. It made this unreal ZAAAAAANG that turned out to be the Fallen sound I was looking for. Mistakes can sometimes be great opportunities.
DS: We already know some great field recording stories, such as the weapons recordings, the jets, helicopters, or lovely things such as the buzzing magnet balls… Could you tell us more about the other sound sources recorded to get those great morphing and crazy sounds of the robots?
EA: Optimus Prime was designed with the concept of “air”: air brakes from a semi truck, pressure hose releases recorded in a Sacramento metal working factory and some foley air bursts all became the sources of Optimus’s signature air sounds. BMW car door latch clicks became his eye blinks. Dry ice on metal became his collection of energy groans.
His body movements were made from a scissorlift recorded on a construction site. The first week I started working on “Transformers”, I noticed renovation happening in the adjacent Technicolor Sound Services building. I’ve found that these great sound recording opportunities often appear serendipitously, and it’s up to us as sound designers to grab those opportunities while they’re ripe. No random whistling window, LAPD helicopter sweep or road construction jackhammer should go unrecorded.
Speaking of which, a jackhammer I stumbled across at Sony Studios became a Decepticon weapon in “Revenge of the Fallen’s” final battle.
For Bumblebee, I caught a fly in a tall cup at home, and shoved a microphone in to get sounds for his motors. My old home HP printer gave some good purring servo motors for his legs.
His splatty vocals came from a garden hose. I came home one night from work, parked in the driveway, got out of my car and stepped on my garden hose. It made this crazy liquid gurgle that sounded like transmitting data. I stepped again and its purr sounded like a creature. I thought: “I gotta record this now!” I recorded some takes outside, but the crickets were so loud that night that I knew I’d regret not getting cleaner recordings inside.
There’s a physics term called the “Heisenberg Principle”. The gist of it is that once you observe something, it changes. I was worried that if I disturbed the hose, it wouldn’t make that crazy sound anymore. I took the hose up into my apartment, threw it in the bathtub, and hoped it would still gurgle for me, minus the crickets. Fortunately, the hose was still squirting away.
I made Jazz using a dying battery-powered electric drill, vari-speeding it DJ-style to jazz it up.
Megatron’s leg motors were made using a palm frond that effects editor Chris Aud recorded. The recording was basically a stiff palm leaf scraping against a hard surface. It had a great, snake-like hiss quality that made for a really bizarre robot leg motor.
Megatron’s vocals were also fun. Re-recording genius effects mixer Greg Russell lent his chesty roar to the track. We also used ultra-close up exhales and human purrs, manipulated to sound huge and evil, for his breathing, plus a few large carnivorous animal vocals.
DS: In “Revenge of the Fallen” there are bigger robots, such as the giant Decepticon made by combining construction machines, the Fallen, and the “optimized” Optimus Prime + Jetfire at the final battle. What improvements and decisions did you make to have them sound “bigger” and to deal with that battle?
EA: It’s a funny psycho-acoustic phenomenon, that “small” sounds can be manipulated to sound bigger than “big” sounds. In sound design we often find the “macro” in the “micro”.
Devastator was such a cool bad guy from the original series. Frank Welker voiced him, and we did some pitching, processing and widening to help him fit into the colossus. Frank has an amazing ability to rattle his vocal cords in the greatest way, so that there’s always definition no matter how you twist his voice.
For Devastator, we also did a lot of metal recording. My home washer-dryer turned out to be a fantastic source of sounds. Slamming the dryer door became his huge thunderous footsteps. The clanking of my friend Helen’s set of workout machine dumbbells became his huge clattering movements. For his leg motors, my voice, twisted into a deep electric pulse, gave him alien power and energy.
Jetfire is a rusty, aging, ancient robot. For him, we built a homemade windchime using shards of aluminum, saw blades and a monkey wrench attached to strings. We clanked them together, and used the ringing metal for the aging robot’s beard of swinging metal. His legs and arms were made from my creaky oven door.
The “optimized” Optimus Prime gets the ability to fly after Jetfire sacrifices himself. His flying sounds were made from a very small source: fireworks. His rocket jetpack sounds were made from a fireworks fountain recorded in my driveway. With eye and fire protection I could record the fireworks up close for a nice, rich, up-front sound. Some cracklers gave an edgy element that I dopplered into some vicious flybys.
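The “dopplering” step can be approximated with a time-varying propagation delay. Here is a hedged Python sketch where the speed, closest distance and 1/r level falloff are illustrative parameters, not the settings used on the film:

```python
import numpy as np

SR = 44100  # sample rate

def doppler_flyby(x, speed=80.0, closest=10.0, c=343.0):
    """Crude Doppler flyby: a source moving past the listener at
    `speed` m/s with a closest approach of `closest` m. Resampling the
    clip with a time-varying delay (distance over the speed of sound)
    produces the pitch rise-and-fall of a flyby; 1/r scaling fakes the
    level swell as the source passes."""
    n = len(x)
    t = np.arange(n) / SR
    t0 = t[-1] / 2                       # moment of closest approach
    dist = np.sqrt(closest**2 + (speed * (t - t0))**2)
    t_emit = t - dist / c                # when each heard sample left the source
    idx = np.clip(t_emit * SR, 0, n - 1)
    flyby = np.interp(idx, np.arange(n), x)
    return flyby * (closest / dist)      # simple distance falloff
```

Feeding a close-miked crackler recording through something like this is one way to turn a static firework into a vicious flyby; panning it across channels would finish the illusion.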
DS: There are a lot of scenes with really challenging mix, which is another amazing aspect of the sound of Transformers. How was the relationship with Greg Russell (Sound Effects Mixer) to group and choose the sounds and the key elements of each sequence?
EA: I can’t compliment Greg enough. Beyond his incredible mixing ability, he brings an infectious joy and energy to the process. Working with him is plain fun.
After long and exhausting hours of work, it wouldn’t be surprising to hear him break into spontaneous song. On the first movie it was “Give it to me one more time” by Captain & Tennille. Late in the night on “Revenge of the Fallen” he’d break into “All Night Long” by Lionel Richie. I’m curious what Transformers 3’s song will be.
Before mixing, we edit the track to sound as good as it can. We do not include clutter or multiple options, just the sounds we want to hear. We editorially carve out space for clarity, making those kind of decisions before hitting the mix stage. There just isn’t enough time to waste on an expensive mix stage weeding through thousands of superfluous elements.
For Transformers, our basic FX predub groups were:
1-4: “A FX”: doors, hatches, grabs, beeps, etc.
5-9: Weapons, explosions, impacts, bullet ricochets, whizzes
11-14: Vehicles, aircraft
15-16: Robot mechanics
17-18: Robot motors
19-20: Robot footsteps
21: Robot surface textures
22: Robot weapons
23: Robot breaths & alien vocals
We bring our tracks already organized into predubs to the mix stage. Greg can then take those tracks to the next level. Greg doesn’t “choose” the sounds, but somebody visiting the stage shouldn’t be surprised to hear Greg mouthing weird vocals to describe an idea for a doppler missile shrieking by: “FFFFFWWWWAAAAAARRRRR!!!!”
Over a month of predubbing, Greg works reel-by-reel to tweak balances, create clarity, find places to punch it up and create size, do all of the insane panning you hear, and carefully and methodically build all of the predubs into 6-channel (5.1) chunks.
Once these predub mixes were all built, we took them to the final stage and, with dialogue/music mixers Kevin O’Connell on the first film and Gary Summers on the second, integrated music and dialogue into the equation over another month of final mixing. At that point, Greg and I worked closely together on the effects, adding new sounds as needed from a Pro Tools stage “fix rig”, through Greg’s console, and into our mix. We kept doing this until the very last moment, when we printmastered.
DS: I love the use of silence in Transformers, especially in “Revenge of the Fallen”. How do you decide when to use silence and when not? How does silence help a specific scene?
EA: There can be no light without dark. My favorite paintings are by Rembrandt and Caravaggio, who practiced chiaroscuro, the art of contrast. Dark blacks, contrasted by shafts of bright light. I like sound that has the same dynamic: quiet and intimate, versus bold and intense. Frequencies that go low-end to high-end and back again. Some of my favorite music has the same shape: Mahler, John Adams and “Boards of Canada”, who do it in a modern way.
Quiet scenes help cleanse the sonic palette. I love them because they are seductive. They create a sense of ease, of calm, that can quickly turn on its head.
One of my favorite scenes is the “hut scene”. Sam and Mikaela are hiding out from Decepticons on the hunt. It’s the silence that I like. We tried to get very quiet, so we could hear the terrified kids trying to suppress their breaths and not be heard. We wanted the audience to hold their breaths too. We go as quiet as we can, before Starscream rips the roof off with a BANG! Dynamics are the key to both storytelling and sound.
It’s fun to make audiences lean in, have them strain to hear something, and then give them a jolt. I like this kind of filmic emotional manipulation, and I think anyone who enjoys a ride on a roller coaster does too.