Following up on David’s previous post, the excerpt from his Sound Sphere article, he and I had a conversation over the phone to go into a little more detail. Here’s the full transcript.
David Sonnenschein: So, I know you read the excerpt that was posted for Designing Sound, but have you read the whole article as well? [ed. full article appears in “The New Soundtrack” volume 1, issue 1]
Designing Sound: Yes, I read the whole thing, but it’s been a few days and the two are kind of merging in my mind. If I remember correctly there were a few more examples…
David: Basically, yeah. It has more examples, and it also has a section applying this model more specifically to film sound. So it was a bit more detailed, and there was some exploration of where it could go…some possibilities of expanding it into other arenas. The earlier section was also relating it to previously established models. It just kind of expounds a bit more in an academic way on the whole issue. So, we can just talk about some of those things in general, or specifically if you want to. Or there are other questions you’ve mentioned that are really pertinent. And people can read more in the article itself; it’s available elsewhere. So I like this idea of talking about it in ways that are a little bit new.
DS: Well, I’d like to touch on some areas that hopefully will prompt people to go and check out “The New Soundtrack.” It really is a great journal in terms of film sound, aesthetics, and the ways in which people are analyzing the kind of work we do. I highly recommend people check it out.
David: Yeah, and that whole article eventually will appear in my second edition as well.
DS: Right, although by then I’m sure you’ll end up doing some more revisions, adding some things or removing others…
David: Probably, yeah. I mean, I’ve already thought about things that need to be added. In fact, I will be presenting a paper in June at a conference called “Electrified Voice” in Konstanz, Germany. That whole conference is regarding the reproduction of voice in Radio, Theatre, Film…and my presentation will adapt the Sound Spheres model specifically for the human voice. That touches on one of the questions you wanted to talk about, right?
DS: Yes, and I think we’ll go back to that…but for now, why don’t we just go into this in a more general way. It’s kind of funny, because there have been a few people who’ve commented on it. And in effect they say, “You know, I’ve never really thought about it in this way before, but it makes perfect sense,” and it does make perfect sense. It is a fairly simple model that’s, at the same time, very robust as an analytical tool. That’s always one of the things that makes an analytical tool successful: when it’s a simple idea that can be applied across a broad spectrum.
I could easily see this applying to something other than film sound, such as games…obviously, because that’s become another medium that’s highly focused around storytelling…radio dramas, stuff of that nature.
David: Absolutely, the simplicity comes from a real world basis. It’s not just a theoretical model, but it’s something that happens to us.
DS: It’s experiential.
David: Exactly! So, all of these different media, whether it’s radio or games, they’re all taking a model of sorts from real life experience and creating something that’s either fiction or documentary, whatever it might be that has a message and has some sort of involvement with the viewer. This is what the experience of the Sound Spheres is about. It’s about a point-of-view, and how we perceive our world through sound. That really is why it’s so “robust” as you said, because it holds with our real world experience.
DS: Yes, and I really do see this as more of an analytical tool than anything else. I mentioned in our e-mail correspondence prior to this conversation that I see this as having, at least from my perspective, two main…or general…applications. One being for analysis of the use of sound in a finished product; whether it’s a finished game, movie or what have you. The other being analysis of the sound needs of a product.
David: Yeah, and I would even say, more than analysis of the needs, it produces opportunities and results. So it’s not just theoretical, in terms of diagnosis and analysis, but it actually creates the opening for things to happen. I’ve seen this happen over and over again in my classes: where students will bring in an idea or an experience within the Sound Spheres, and then we develop scenes that take place within their experiences. So, it’s a tool for actual creativity, not just analysis, and that’s my experience.
DS: I think a better way for me to word what I had been thinking of, because I’m pretty much on the same wavelength, instead of “needs” I should have said, “to analyze or identify possible uses.” That’s how I see this being applied to a product in process. So you can identify needs, but this can help you identify how to approach those needs.
David: Yeah, and finding solutions to them. And I’d say parallel to that, my Sound Mapping tool has been very useful for tracking transformational movement in characters and emotions throughout the storyline, in a feature script for example, and that comes from a basis of storytelling that goes back thousands of years; even to Aristotelian structures. And certainly screenwriters have been at it for many decades, and I’m piggy-backing on that model, and that robust structure…the three acts, for example.
So, the Sound Spheres, in a similar way, has got a lot of guts behind it. For example, Michel Chion’s work with “on-screen/off-screen,” non-diegetic sound…all these fit very well within this structure. So, I think it was important to relate them in the full article. That way we can see how this has a strong foundation.
DS: So, do you then see your Sound Mapping, which is covered in your book as well as your classes and webinars, and the Sound Spheres as being complementary ideas? Tools that could be used together and intertwined? That might be something that would be interesting to cover in the second edition of your book.
David: Totally. They’re very complementary. A similar kind of analogy would be to say that cinematography is complementary to costume. You need them both to tell the story visually. So these are two different approaches to structuring sound that certainly fit together. Never are they mutually exclusive. They simply approach from a different aspect.
In one case, the Sound Mapping is about looking more at the full story structure, from beginning to end, and relating that to the specific dramatic conflict. Those would then be related to the bipolarities and the sound qualities. So you are assigning sounds to the protagonist and antagonist in that way. What I aim to do with the Sound Mapping is to unite the areas of physics of sound qualities…like volume, pitch, timbre, rhythm, etc…with the technical means to manipulate these parameters in sound processing and mixing, then relate the extremes, or what I call bipolarities, of each sound quality…like loud-soft, fast-slow, etc…with the bipolarities in the dramatic conflict. So the story elements, characters and emotions get correlated to the sound qualities, and as these dramatic elements transform over time, so do the sound qualities. This gets mapped on a timeline of the story and shows the bipolarities of both story and sound in a graphic presentation, easy for everyone to follow.
That is parallel to how you dive into any one particular scene, and experience the sound as driving the story and attention of the audience through one sphere or another. So, they’re like dimensions.
As I mentioned, in production design you have cinematography, costume, makeup, story boards, camera movement, lighting…all these things are elements of production design. These would be different elements of sound design.
DS: Well, as far as elements go, and this again is something I brought up in our e-mail conversation leading up to this: within those two general applications we’ve been talking about, I see this working on three specific elements or areas of a storytelling medium: a character focus, an overall narrative focus, and a self-referential or self-commentative focus. What do you think of that? Am I missing anything there? Because I see those three as shared subsets of both of the previously mentioned “general” applications.
David: Yeah, and I would maybe add the genre as well, which may be a subset of the narrative. Certain genres are going to exploit the Sound Spheres in different ways. For example, horror films are going to go into the “I Don’t Know” sphere quite often, because there’s a lot of fear that can be generated through surprise. At the right moment, you can make them jump out of their skins, because there’s something and we don’t know what that is. In a straight drama, there may be less of a visceral impact with having that “I Don’t Know” sphere being pushed in that way.
So, I’ll go back to the three focuses that you’ve mentioned here. I would say that character is certainly one of the most interesting ones to look at, because we define character so clearly by what that character’s experiences are, as well as how people see that character from the outside. So, when the character’s in the first person point-of-view we can go inside their head and hear what they’re imagining through the “I Think” sphere. With the “I Am” sphere, we also get very intimate with them through their breathing and their heartbeat, a very traditional use of intimacy for character development. Then there are all kinds of other sounds as well, relating to mouth sounds like sneezing, or burping, or yawning, or any other kind of body sounds. Someone scratching their own head, for example. Normally, you wouldn’t hear that. But if all of a sudden you bring that sound in, it’s like a first person point-of-view. I touch my own head and I can hear that scratch. You might not be able to hear it so well in real life, but in a film you can pull us into that so much more. So, from the outside, of course, it’s much more of the “I See” sphere. I see a character talking, or making sounds. So I identify with them. So that’s one of the strong points that I see of this model for character.
For narrative structure, I see that moving between the spheres is something that creates a dynamic. Where the sound goes, for example, from “I Think,” I’m imagining it, to “I See” it. All of a sudden something has manifested into my world from my mind, and now it’s on the physical plane. Maybe then it becomes evident to other people in the story as well, characters who hadn’t been able to see and hear this when it was only in the mind. So, it brings an expansive impact to the narrative. As I mentioned, something being in the “I Don’t Know” sphere drives the characters and the drama to find out, “What is that sound?” It could drive it towards fear, towards comedy, towards a general curiosity to turn your head…so it impels the action itself. A lot of drama is cause and effect, or action and reaction. And so, if you have that going on in the Sound Spheres, moving from one to the other, it can serve the drama if you do this consciously.
The self-referential and self-commentative, I think that I’ve spoken a little bit about it, in terms of point-of-view, but maybe you can define that idea for me a little more clearly.
DS: There are situations where the film itself becomes kind of like another character, or the film gives us the impression that we’re witnessing things from the point-of-view of another character who’s never seen or heard. I can think of a lot of comedy movies as examples of this; particularly anything by Mel Brooks. For instance, if you watch Spaceballs. It breaks the fourth wall all the time. It’s very aware that it’s a spoof of Star Wars, and makes use of a lot of the tropes that were established by those films. The Wilhelm Scream shows up in that film, in a not so flattering situation. [ed. image-wise, not sound-wise.] An impression of, or allusion to, the Wilhelm Scream shows up at the end with the big lightsaber/Schwartz battle; where the camera pans over and they accidentally kill the “boom op.” And the “boom op” gives, what sounds like, his best impersonation of the Wilhelm Scream as he dies. The film knows, and is saying, “I’m a Star Wars spoof. I know I can use these sounds to create a comic reaction here.”
David: Right, I have another example, which is in the Monty Python film with…I think it’s Marty Feldman [ed. it’s Terry Gilliam as Patsy, in Monty Python and the Holy Grail]…using the coconut shells for the horse’s gallop sound. We hear the sound first, and then the shot goes wide and we see him using the shells and just walking along with the knight; not on a real horse. I think that was a wonderful use in comedy. The idea of it going from a sphere, “I Know.” “I know that’s the sound of a horse trotting along,” and then the camera opens up. All of a sudden, it flips you into the “I Don’t Know,” and then just as quickly “I Know,” because now you know what it really is. So the joke really, is on the audience for having made an assumption. Another model applies here too, which is what I call referential listening. We hear something within the context of people dressed as knights in the Middle Ages…we hear the sound “clop clop, clop clop,” and we think they’re on a horse! We’ve seen that so many times, that that must be what it is. Then it’s just a play on that idea.
So, that I guess, is another way that we can look at the Sound Spheres; how we play with movement between them.
DS: Yeah, and your mentioning the play on the audience there kind of brings up another idea that I had while reading your article. Another focus within the application of defining or finding uses for sound in a production, though it would be a fair bit more limited, is applying the Sound Spheres to the audience as well. It could be useful in the context of what reactions you want to engender in the audience. Obviously, there’s a little bit of ambiguity in there, because we can’t determine exactly what they’re thinking or how they’ll react…but we can predict. And with the transition we keep going back to, moving between the “I Don’t Know” and “I Know” spheres, you can potentially predict some interesting effects within the audience using this model.
David: Yeah. For example, we can have a character seeing something on screen, but the audience doesn’t see it. So, the character may know what the sound source is, but the audience doesn’t. Another film that reminds me of this is a Dutch film, made around 1980, called “The Illusionist.” This film was about two brothers, one of whom was committed to an insane asylum, and the other wants to find his brother. He goes to the asylum, but he loses his glasses, which are the thick “Coke bottle” kind. So, the audience is given this image of these people in the asylum making sound doing certain actions, from the point-of-view of the near-blind brother. We return to the same scene at the end of the film, and now he’s got his glasses. The actions that are being taken are totally different, but they make the same sounds.
So that’s an example of “I Know” what those sounds are, even though I can’t see…or “I See” the sounds, but they’re incorrect. The audience doesn’t know that they’re incorrect until later in the film.
DS: Maybe you need an additional sphere called “I Believe.”
David: [laughs] Yeah. Well, in fact, there are a lot of…gradations, let’s say…between “I Know” and “I Don’t Know,” and I’ve been realizing that as I work more and more with this. For example, with the voice, you could know that it’s a human voice as opposed to an animal, but you don’t know what they’re saying. Or you could know that they’re speaking, but you don’t know what the language is, so you can’t understand the words. Or you understand the words, but you don’t know who it is that’s speaking. There are so many levels, just within the voice, that we could look at this model as having many, many sublayers. So this “I Believe” is a really interesting point that you brought up, because it’s very dynamic…especially when you reveal to a character or to the audience, something that has not been true.
Personally, I love working in films where I can play around with that. In fact, any sound designer worth his salt is going to use sound effects from their own recordings or libraries that are not from the original source of what you’re seeing on the screen. So you’re already playing around in make believe. I believe that that sound is a door slam, but maybe that was a gunshot when it was originally recorded. So, there’s a lot of stuff that leads into many other areas of sound design and other theoretical models as well.
DS: What do you identify as some of the limitations of the Sound Spheres model, as it exists at this moment? You and I have spoken in the past about this presentation that you’re going to be giving in Germany. Why don’t you talk about the particular example you’ve been having trouble finding, and maybe our readers can help you out. So, why don’t you touch on that and any other potential limitations you see with this model right now.
David: Well, specifically in the “I Touch” sphere, I was having trouble finding what the “I Touch” sphere would be in terms of voice. Before that came up, I had only been thinking of things like touching the ground with footsteps. Anything with foley, like the sound your hands make while typing on the keyboard, fits into that sphere nicely, but what about voice? One of the clips I’ve found on YouTube…and it’s just absolutely fascinating…Helen Keller, the blind and deaf woman, was taught by Anne Sullivan to communicate with her own voice.
I found a documentary clip of the two of them together, showing how the vibration of Anne Sullivan’s voice was passed to Helen Keller’s hand, placed in different areas of the mouth and throat…so fascinating. So I finally have a really good example, in fact.
[youtube]http://www.youtube.com/watch?v=XdTUSignq7Y[/youtube]
DS: That is an excellent example!
David: Amazing! I’m very open to people contributing other suggestions though, particularly in relation to that “I Touch” sphere for the voice.
DS: It’s interesting, because you mentioned this conundrum to me a few weeks ago, and I just now thought of another example as soon as I asked you that question…Stephen Hawking.
David: Oh yeah! Yeah!
DS: Because his voice is generated by his computer…and specifically, he vocalizes by typing into his computer.
David: So that would be a touch on the keyboard that’s creating the voice. Yeah, very interesting.
DS: But as I mentioned, maybe someone reading will be able to give an actual film based example for you.
David: I’d like to hear anything people have to suggest, and any other interesting uses of voice. Right now I’m collecting “I Am,” and one of the clips that I’m using is a beatboxer. VERY interesting sounds come out of that little kid’s mouth. Sometimes they use their hands up against their mouths to modify the sounds; make it sound like a percussion instrument or something like that. Very interesting stuff. It’s a lot of fun.
In general, I’m seeking to use my own experiences in the world…as I mentioned, this came from my own meditations and awarenesses…and looking to codify them, and make them applicable for teaching and for actual design…to help people produce sounds that are more impactful and have deeper meaning for the audience.
DS: That’s a perfect transition point for me to bring up one of the comments someone left on the Sound Spheres article here on DS. João Nunes said, “I started thinking how the sound spheres could be related together and attract people more and more into the action in a film scene.”
David: Yeah.
DS: So, you made at least one person start thinking more about that. It’s also interesting that the quote specifically talks about the “action” in a film; what’s taking place, what’s going on…and pulling people into it.
David: It’s true. The idea of, especially the “I Don’t Know,” where someone has to fill in something in their mind, it becomes much more interactive; even for a linear film that’s being projected. If the audience is drawn in by participating in their own minds to create something that isn’t fully handed to them on a platter…they become more involved in the storytelling. They will often feel something stronger once they have that involvement. So, I think that this model is just one more tool, amongst many, and it’s a growing, developing tool. I’m looking forward to hearing people’s feedback. Any ideas, questions and comments are extremely welcome.
DS: Well, I’ll remind people that if they do have questions or comments, to pop into the “Your Questions to David Sonnenschein” post to have you address them directly. [ed. that would be this post here.]
David: Wonderful. And if you’d like to mention it, in the next few days I’ll be doing another interview on interactive media, and a sound game I created, with Damian doing that interview. [ed. Done and done.]
DS: Well thanks, David, for taking the time, and I’m looking forward to not just your interview with Damian, but the next one that you and I have scheduled for later this month.
David: Yeah, this isn’t the end. For sure. Talk to you soon.
Erik Bruhwiler says
I think the Sound Spheres is a stroke of genius, so I am happy to throw out some possibly helpful ideas/suggestions/examples of voice-related “I Touch”. These may be similar to what has already been discussed, but sound design is all about using inter-related sounds, right? Some of these may be stretching the idea.
-Voice in a microphone (amplified) can be felt by the audience.
-When some people talk they spit (on the listener.)
-Whispering in someone’s ear often can be felt as breath, sometimes warm and/or moist.
-When my months-old baby boy cries, it often literally hurts my ears, his voice is so piercing (this can also be related to the classic opera singer breaking the wine glass with a resonating note.)
-When someone is describing food or drink that they are ingesting – as they are ingesting it – you can almost feel the food in your own mouth.
-Phone sex has sometimes made the participants feel as if they were actually in the same room together (startlingly real.)
-Derren Brown (mind controller) or other hypnotists seem to gain control over the actions of others with their voice/commands (my baby boy gains such control over his parents with his baby cries/commands/demands.)
-Voice commands on the Star Trek ships allow people to control the ship’s computer.
-Voice is almost always involved in seduction.
David Sonnenschein says
Erik –
Thanks for the many great suggestions. I’m going to look for a few video examples of them to include in my presentation next month in the Germany conference: http://www.uni-konstanz.de/electrified-voices/