Steven R. Livingstone, Ralf Muhlberger, Andrew R. Brown, and William F. Thompson. Changing Musical Emotion: A Computational Rule System for Modifying Score and Performance. Computer Music Journal, 34:1, pp. 41–64, Spring 2010.
The authors present CMERS, “a Computational Music Emotion Rule System for the real-time control of musical emotion that modifies features at both the score level and the performance level.” The paper compares CMERS to other computer-based musical expressiveness algorithms, as part of a larger effort to find a complete systematic categorization of all of the emotions that can be expressed and evoked through music.
The authors first conducted a survey of past efforts to categorize emotions and, after a meta-analysis of the results, devised a two-dimensional graph. The vertical axis runs from Active to Passive. The horizontal axis runs from Negative to Positive. The Negative/Active quadrant includes such emotions as anger and agitation. The Passive/Positive quadrant includes serenity and tenderness. The authors then paired each emotion with particular musical devices, both compositional and performative. For example, sadness correlates with slow tempo, minor mode, low pitch height, complex harmony, legato articulation, soft dynamics, slow note onset, and so on.
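The two-dimensional layout and its rule pairings can be pictured as a simple lookup: locate an emotion by its coordinates, then retrieve the musical devices for that quadrant. The sketch below is purely illustrative; the quadrant names follow the paper's axes, but the feature names and values are my own shorthand, not CMERS's actual rule encoding.

```python
# Hypothetical sketch of the emotion-to-device rule pairing.
# x = valence (Negative..Positive), y = activity (Passive..Active).
# Feature names/values are illustrative, not taken from CMERS itself.

RULES = {
    ("negative", "active"): {   # e.g. anger, agitation
        "tempo": "fast", "mode": "minor",
        "articulation": "staccato", "dynamics": "loud",
    },
    ("negative", "passive"): {  # e.g. sadness
        "tempo": "slow", "mode": "minor", "pitch_height": "low",
        "harmony": "complex", "articulation": "legato",
        "dynamics": "soft", "note_onset": "slow",
    },
    ("positive", "active"): {   # e.g. joy, excitement
        "tempo": "fast", "mode": "major",
        "articulation": "staccato", "dynamics": "loud",
    },
    ("positive", "passive"): {  # e.g. serenity, tenderness
        "tempo": "slow", "mode": "major",
        "articulation": "legato", "dynamics": "soft",
    },
}

def rules_for(valence: float, activity: float) -> dict:
    """Map a point in the 2-D emotion space to its quadrant's rule set."""
    quadrant = ("positive" if valence >= 0 else "negative",
                "active" if activity >= 0 else "passive")
    return RULES[quadrant]

print(rules_for(-0.7, -0.5)["tempo"])  # sadness quadrant -> "slow"
```

The coarseness of this mapping is worth noting: every emotion in a quadrant shares one rule set, which is exactly the broad-stroke character discussed below.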
Having established a rule set linking musical devices to emotions, the authors encoded the rules into a set of MIDI filters. These filters were used to generate computer performances with the desired emotional quality. The authors used computer rather than human performers because they wanted a baseline of perfectly flat affect, and very fine control over performative nuance. (A human performer in a good mood will find it difficult to convincingly convey despair, and vice versa.) Finally, the authors played the performances to students and asked them to locate their emotional responses on the two-dimensional graph. They found very strong agreement between the intended emotion of a given piece and the students’ ability to identify that emotion.
I approach the authors’ entire effort with considerable skepticism. They have shown that within a given culture and narrowly defined style, it is possible to identify broad-stroke relationships between particular musical devices and particular emotions. Within the goals they have set for themselves, they have succeeded quite admirably. But no unambiguous categorical system can possibly capture the bottomless complexity and nuance of emotion. A listener’s reaction to a piece of music will depend heavily on social context, personal history and education, and countless other intangibles. The state of the listener’s digestive tract is at least as important in determining their emotional responses as anything happening between their ears.
The authors focus their research on common-practice era Western classical music. This makes their task easier, since Western classical is centered around scores that easily translate into MIDI, and that follow a comparatively narrow rule set. The authors are conscious of this limitation and discuss applying their system to music of other cultures, with rule sets altered accordingly. But it is not necessary to look outside of America to find music whose features would defy ready emotional categorization. As I type this, I have James Brown in my headphones. He’s screaming in what at first blush sounds like rage and pain. Yet the overall result of hearing him is powerful emotional uplift. His music expresses conflicting emotions simultaneously: joy and anger, tenderness and aggression. That tension and complexity are the main appeal of James Brown’s music for me.
While I support efforts to find a deeper understanding of how music conveys and evokes emotion, I am not convinced that a reductionist approach ultimately contributes much of value. It would be better to embrace the full complexity, attempt to trace as many causal threads as possible, and be humble in the face of the ultimate impossibility of the task. For musicians, meanwhile, the best method for understanding how they can convey emotion to listeners is simply to practice and perform, to be attentive to the mood in the room, and to learn by experience.
I understand your skepticism, but we have to start somewhere, right? Starting with the full-on complexity would be WAY too difficult. It’s better to take these things one baby step at a time.