Public-facing note-taking on Music Matters by David Elliott and Marissa Silverman for my Philosophy of Music Education class.
Research into music psychology (and simply attending to your own experience, and to common sense) shows that music arouses emotions. However, there is no conclusive way to explain why or how. To make things more complicated, it’s perfectly possible to perceive an emotion in a piece of music without feeling that emotion yourself–you can identify a happy song as being happy without it making you feel happy. Music and emotion are inextricably tied up with each other, but how does music arouse emotions, and how do emotions infuse music?
Elliott and Silverman summarize some major philosophical theories of musical emotion (or lack thereof).
Eduard Hanslick formulated an “objective,” “scientific” theory of the beauty of pure instrumental music that has been at the center of the music education tradition for the past 150 years. For Hanslick, musical structures represent the dynamic aspect of emotions. These structures might also arouse emotions, but that is beside the point–the music is beautiful whether it arouses emotions or not. For Hanslick, music’s value lies in the Platonic realm of pure thought, not the world of grunting hairy ape bodies. If music is a set of abstractions like geometric proofs, then we can separate musical concerns from social or ethical concerns.
Susanne Langer extends Hanslick’s idea that music symbolizes feelings without necessarily being expressive of them. For Langer, music documents our inner lives but does not play much of a role in shaping them. Musical structures are symbols that are isomorphic to particular feelings, which we can arrange to paint a logical picture of sentient life. The idea that music can capture and portray inner life more clearly than language is an appealing one. But to say that music is only commenting on emotions without arousing them runs counter to a mountain of research and observation. Also, there is no inner life that’s meaningfully separable from outer life.
For Leonard Meyer, emotions in music come from the gratifying or defying of expectations. You hear a dominant seventh chord at the end of a phrase and feel tension; you hear the tonic at the beginning of the next phrase and feel resolution; or you hear a key change or some other unexpected chord and feel some other emotion. This is a theory of how emotion gets encoded in music, not of how or why the listener can perceive or experience that emotion. Meyer’s theory focuses entirely on structure, and glosses over the link between musical structure and everything else. He does, however, find room for referential music, to the point where he argues that all the really great music must refer to something out in the world. Meyer seems not to have much to say about anything outside of the Euroclassical tradition, though I suppose you could extend his theory to other cultures using each culture’s own set of musical expectations. Expectation certainly plays a role in Western music and probably in all music, but is it sufficient to explain all of musical emotion?
Peter Kivy’s delightfully named “doggy theory” distinguishes between expressing an emotion and being expressive of an emotion. A basset hound’s droopy face is expressive of sadness, whether or not the dog is feeling actual sadness. By the same token, we can hear sadness in the oboe melody of Bach’s Brandenburg Concerto No. 2, even though neither we nor Bach nor the oboist need ever feel any sadness ourselves. Like the other philosophers discussed so far, Kivy is a cognitivist: he sees music as a thinky experience, not an emotional one. He points out that no one is weeping in the concert hall during the Brandenburg Concerto. But the world of music is much bigger than the concert hall; maybe the concert hall is a weird outlier, and the “normal” thing is to be emotional about the music. Young children most certainly respond to music with outsize emotions. I have seen a kid literally shriek in terror from a loud diminished chord in a film score.
Stephen Davies thinks that music encodes emotions by imitating the speech, facial expressions and body movements we produce when we feel those emotions. Listeners pick up on musical emotions the same way we pick up on regular emotions in our social interactions, via mirror neurons and empathy. This is a plausible idea as far as it goes, but it still locates musical emotion within the structure of the music itself. We need a theory that takes in the entire social and cultural context of the music.
Elliott and Silverman propose the beginnings of a theory that aligns better with the observed facts of music psychology. The listening self, like the self generally, emerges from the complex interactions between individuals’ brain-body processes and interpersonal dynamics–cultural, historical, and political. A complete account of the emotional experience of music needs to account for at least twelve mutually interacting processes:
- Brain stem reflexes–the first stage of auditory processing, the broad-stroke patterns of bodily arousal from hearing musical sound, e.g. the startle you get from an abrupt loud entrance.
- Rhythmic entrainment and synchronization–lining yourself up to a beat, often totally unconsciously.
- Evaluative conditioning–the pairing of music and a social/emotional association, e.g. feeling seven years old whenever you hear the Star Wars theme. It’s not simple Pavlovian conditioning; there can be a reflective aspect too.
- Emotional contagion and mirror neurons–as far as the mirror neurons are concerned, happy music makes us happy and sad music makes us sad, because we imagine the performer as happy or sad and we pick up on it. This is the reason we sing to babies: we want them to feel that we love them.
- Associations, autobiographical memory, and episodic memory–Your specific, idiosyncratic set of associations. I think of one particular bar gig where I played a sample of the opening fanfare from “A Love Supreme” and two strangers got out their saxophones and improvised ecstatically along with it. I think of that whenever I hear the fanfare; few other people do.
- Expectancy–as per Leonard Meyer.
- Cognitive monitoring and naming–the slower conscious evaluation after the initial reflexive reaction has passed. This could explain our reaction to the kind of difficult music that requires multiple listens to warm up to, the way my initial strongly negative reaction to Coltrane’s recording of “My Favorite Things” gave way to curiosity, and then awe.
- Visual-musical interactions–the seemingly mysterious pictures we get from sound. Are these pictures really so mysterious, though? Before recorded sound, all music was live music, and that meant some kind of visual spectacle was intrinsic to the experience, even if it was just the sight of someone standing there performing. Music videos are just restoring normalcy (at least, a hallucinatory fever-dream version of normalcy). There is probably a close connection between musical visions, drug hallucinations, and dreams; perhaps an explanation of one will explain the other two.
- Corporeality–we listen from the neck down as much as from the neck up. Rhythmic entrainment is inseparable from clapping our hands and stomping our feet, which tends to arouse emotions.
- Musical persona–We imagine the performer/composer making the music, and our relationship to them. This is easy when we listen to Paul McCartney sing “Yesterday”, but quite a bit harder when we listen to experimental electronic music made by anonymous producers. Still, no matter how abstract the music, it is plausible to imagine a persona attached to it.
- Social attachment–Music as facilitator of parent-child bonds, adult-adult bonds, and tribal bonds. We can extend that idea out to encompass religion, theater, courtship, nationalism, and much else. Music serves most of the same functions as religion in Émile Durkheim’s theories: a totem or rite that gives a physical form to the social life of the tribe.
- Principle of charity–The kind of idealized ethical personhood we extend to everyone, even if they can’t fill the role. We still extend “honorary personhood” to babies, severely disabled people and pets. By the same token, we can also extend it to music. So I can feel not just affectionate toward a song, but also protective of it, persuaded by it.
Elliott and Silverman advocate not just for an embodied theory of musical experience, but an enactive one. This idea is explored in depth in Juan Loaiza’s draft, Musicking, Embodiment and Participatory Enaction of Music: Outline and Key Points. To give a full account of embodied music, we need to account for the way that a person is an extended and embodied bundle of systems of systems of systems of processes, not a thing. Musicality and musical systems comprise not just the organization of sounds, but also the organization of the human relationships that give rise to those sounds, and the relationships informing their perception.
Christopher Small’s use of the verb “musicking” establishes that music exists within a set of relationships, and that the meaning of the music can be found within those relationships. (This is another rhyme with Durkheim’s theory of religion.) Relationships exist not only between sounds, but also between the people who are taking part in the musical experience in whatever capacity. Musicking is not just an outgrowth of social life; it is also a tool for building social structures. Simon Frith writes that you don’t just express your identity through music fandom; you use fandom to actively construct it. The difference between fandom and religion is one of degree, not of kind.
Tia DeNora observes that music is a “technology of the self,” or alternatively, an affordance of the self. Music is a way to actively manage our own body states, and those of others. We do this when we sing to babies, but we also do it when we use music for pain management or to motivate ourselves at the gym. While these last two use cases are individual in nature, neither one would work in the absence of a social meaning for the music. There really is no clear separation between individuals and their social context; individuals arise out of social intersubjectivity.
Networks of human activity, rule systems and institutions are not a static backdrop for our cognitive and emotional activities. They are dynamically shaped by our thoughts and feelings, just as they dynamically shape our thoughts and feelings in return. Rather than trying to locate musical meaning within a particular brain/body, or within the structure of the music itself, we need to look holistically at the entire network of processes surrounding the music and giving rise to it, and we need to track those processes as they evolve over time. This is not a small task, but we do not appear to have a choice. Social interactions transform the participants’ sense-making in ways that are inaccessible to each individual on their own. Social life has a life of its own.
As Loaiza puts it, “Musicking is a process of individuation as sonic surfaces and epistemic subjectivities constantly come into being together. The global is brought to bear on local phenomena and both global and local co-realize.” We need to establish a continuity between levels of musical analysis by connecting social and cultural relations, socially-informed cognition and body processes as a messy entanglement of co-emergent processes.