Can science make a better music theory?

My last post discussed how we should derive music theory from empirical observation of what people actually like, using the methods of ethnomusicology. Another good strategy would be to derive music theory from observation of what’s going on between our ears. Daniel Shawcross Wilkerson has attempted just that in his essay, Harmony Explained: Progress Towards A Scientific Theory of Music. The essay has an endearingly old-timey subtitle:

The Major Scale, The Standard Chord Dictionary, and The Difference of Feeling Between The Major and Minor Triads Explained from the First Principles of Physics and Computation; The Theory of Helmholtz Shown To Be Incomplete and The Theory of Terhardt and Some Others Considered

Wilkerson begins with the observation that music theory books read like medical texts from the Middle Ages: “they contain unjustified superstition, non-reasoning, and funny symbols glorified by Latin phrases.” We can do better.

Standing waves on a string

Wilkerson proposes that we derive a theory of harmony from first principles drawn from our understanding of how the brain processes audio signals. We evolved to detect sounds with natural harmonics because those sounds usually come from significant sources, like the throats of other animals. Musical harmony is our way of gratifying our harmonic-series detectors.
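To make the physics concrete: an ideal vibrating string produces overtones at integer multiples of its fundamental frequency. Here’s a minimal sketch in Python (the 110 Hz fundamental is just an illustrative value):

```python
# Harmonic series of an ideal vibrating string: mode n vibrates
# at n times the fundamental frequency.
fundamental = 110.0  # A2, an arbitrary example fundamental in Hz

for n in range(1, 9):
    print(f"harmonic {n}: {n * fundamental:6.1f} Hz")
```

Harmonics 4, 5 and 6 (440, 550 and 660 Hz here) are in the ratio 4:5:6, the just intonation major triad — exactly the kind of pattern Wilkerson argues our harmonic-series detectors respond to.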

Continue reading “Can science make a better music theory?”

Toward a better music theory

Update: a version of this post appeared on Slate.com.

I seem to have touched a nerve with my rant about the conventional teaching of music theory and how poorly it serves practicing musicians. I thought it would be a good idea to follow that up with some ideas for how to make music theory more useful and relevant.

The goal of music theory should be to explain common practice music. I don’t mean “common practice” in its present pedagogical sense. I mean the musical practices that are most prevalent in a given time and place, like America in 2013. Rather than trying to identify a canonical body of works and a bounded set of rules defined by that canon, we should take an ethnomusicological approach. We should be asking: what is it that musicians are doing that sounds good? What patterns can we detect in the broad mass of music being made and enjoyed out there in the world?

I have my own set of ideas about what constitutes common practice music in America in 2013, but I also come with my set of biases and preferences. It would be better to have some hard data on what we all collectively think makes for valid music. Trevor de Clercq and David Temperley have bravely attempted to build just such a data set, at least within one specific area: the harmonic practices used in rock, as defined by Rolling Stone magazine’s list of the 500 Greatest Songs of All Time. Temperley and de Clercq transcribed the top 20 songs from each decade between 1950 and 2000. You can see the results in their paper, “A corpus analysis of rock harmony.” They also have a web site where you can download their raw data and analyze it yourself. The whole project is a masterpiece of descriptivist music theory, as opposed to the bad prescriptivist kind.
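If you download the data, the most basic descriptive question you can ask is: which chords actually come up most often? Here’s a sketch of that kind of count. The file layout is purely hypothetical — one whitespace-separated list of Roman-numeral chord symbols per song — since the real corpus files are richer and would need their own parser:

```python
from collections import Counter
from pathlib import Path

# Hypothetical layout: one text file per song containing
# whitespace-separated Roman-numeral chord symbols, e.g. "I IV I V I".
corpus_dir = Path("rock_corpus")

counts = Counter()
for song_file in corpus_dir.glob("*.txt"):
    counts.update(song_file.read_text().split())

total = sum(counts.values())
for chord, n in counts.most_common(10):
    print(f"{chord:>5}: {n / total:6.1%}")
```

A tally like this is descriptivism in its purest form: no rules, just the frequencies of what musicians actually play.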

Jimi Hendrix, common practice musician

Continue reading “Toward a better music theory”

Analyzing the musical structure of “Sledgehammer” by Peter Gabriel

We’re asking participants in Play With Your Music to create musical structure graphs of their favorite songs. These are diagrams showing the different sections of the song and where its component sounds enter and exit. In order to create these graphs, you have to listen to the song deeply and analytically, probably many times. It’s excellent ear training for the aspiring producer or songwriter. This post will talk you through a structure graph of “Sledgehammer” by Peter Gabriel, co-produced by Gabriel and Daniel Lanois.

Here is the video version of my analysis:

Below is the musical structure graph. Click the image to see a larger version with popup comments.

"Sledgehammer" structure graph

Here’s the perceived space graph:

"Sledgehammer" perceived space

And here’s a chart of the chord progression.

Continue reading “Analyzing the musical structure of ‘Sledgehammer’ by Peter Gabriel”

Teaching audio and MIDI editing in the MOOC

This is the fifth in a series of posts documenting the development of Play With Your Music, a music production MOOC jointly presented by P2PU, NYU and MIT. See also the first, second, third and fourth posts.

Soundation uses the same basic interface paradigm as other audio recording and editing programs like Pro Tools and Logic. Your song consists of a list of tracks, each of which can contain a particular sound. The tracks all play back at the same time, so you can use them to blend together sounds as you see fit. You can either record your own sounds, or use the loops included in Soundation, or both. The image below shows six tracks. The first two contain loops of audio; the other four contain MIDI, which I’ll explain later in the post.

Audio and MIDI tracks in Soundation
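The crucial difference between the two track types: an audio track stores a recorded waveform, while a MIDI track stores instructions for notes that some instrument will render. As a sketch of what MIDI data amounts to, here’s a one-bar C major arpeggio built with the mido library (my choice for illustration; Soundation is browser-based and has its own internal format):

```python
import mido

# A MIDI track holds note instructions, not recorded sound.
# One bar of a C major arpeggio, 480 ticks per quarter note.
mid = mido.MidiFile(ticks_per_beat=480)
track = mido.MidiTrack()
mid.tracks.append(track)

for note in (60, 64, 67, 72):  # C4, E4, G4, C5
    track.append(mido.Message('note_on', note=note, velocity=80, time=0))
    track.append(mido.Message('note_off', note=note, velocity=0, time=480))

mid.save('arpeggio.mid')  # any synth or sampler can now play these notes
```

Because the notes are stored as data rather than sound, you can change the instrument, tempo or pitches after the fact — the big attraction of MIDI tracks for beginners.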

Continue reading “Teaching audio and MIDI editing in the MOOC”

Teaching expressive use of audio effects in the MOOC

This is the fourth in a series of posts documenting the development of Play With Your Music, a music production MOOC jointly presented by P2PU, NYU and MIT. See also the first, second and third posts.

After PWYM participants have tried mixing using just levels and panning, the next step is to bring in audio effects for further manipulation. As a painless introduction, you can load any track from SoundCloud into our own miniature web-based effects unit, #PWYM Live Effects. Then it’s time to open up some dry stems in Soundation. In addition to setting levels and panning, you can now apply Soundation’s effects creatively. These include:

Filter

Both low-pass and high-pass filters are available, which block high and low frequencies, respectively. Why would you want to do such a thing? There are practical and expressive reasons. The practical one is to keep sounds from fighting each other in the mix. So, for example, my electric guitar has a very bass-heavy sound. If there’s a bassist on the track along with me, together we’re going to sound like mud. By applying a high-pass filter to my guitar, I can stay out of the bassist’s way and still get across most of the information in my sound. Similarly, I’d want to low-pass the bass for the same reason.

A low-pass filter
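Here’s what that guitar high-pass looks like as a signal-processing operation, sketched with scipy. The 120 Hz cutoff is just an illustrative value — in practice you tune it by ear against the bass:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def high_pass(audio, cutoff_hz=120.0, sample_rate=44100, order=4):
    """Attenuate content below cutoff_hz, leaving room for the bass."""
    sos = butter(order, cutoff_hz, btype='highpass',
                 fs=sample_rate, output='sos')
    return sosfilt(sos, audio)

# One second of white noise as a stand-in for a bass-heavy guitar track.
guitar = np.random.randn(44100)
filtered = high_pass(guitar)
```

The complementary move, low-passing the bass, is the same call with btype='lowpass'.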

Continue reading “Teaching expressive use of audio effects in the MOOC”

Teaching mixing in a MOOC

This is the third in a series of posts documenting the development of Play With Your Music, a music production MOOC jointly presented by P2PU, NYU and MIT. See also the first and second posts.

So, you’ve learned how to listen closely and analytically. The next step is to get your hands on some multitrack stems and do mixes of your own. Participants in PWYM do a “convergent mix” — you’re given a set of separated instrumental and vocal tracks, and your job is to mix them so that the result matches the original finished mix. PWYM folks work with stems of “Air Traffic Control” by Clara Berry, using our cool in-browser mixing board. The beauty of the browser mixer is that the fader settings get automatically inserted into the URL, so once you’re done, anyone else can hear your mix by opening that URL in their own browser.
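Under the hood, a mix is nothing more exotic than a gain-weighted sum of the stems, and sharing it by URL just means serializing the fader values. Here’s a sketch of both ideas; the stem names and query format are made up for illustration, not the actual PWYM mixer’s scheme:

```python
import numpy as np
from urllib.parse import urlencode

# Hypothetical stems: equal-length mono arrays, one per track.
stems = {
    'vocals': np.random.randn(44100),
    'guitar': np.random.randn(44100),
    'bass':   np.random.randn(44100),
}

# Fader settings as linear gains (1.0 = unity).
faders = {'vocals': 0.9, 'guitar': 0.6, 'bass': 0.8}

# The mix is just each stem scaled by its fader, then summed.
mix = sum(gain * stems[name] for name, gain in faders.items())

# Putting the fader values in the URL makes the mix shareable.
url = 'https://example.com/mixer?' + urlencode(faders)
print(url)  # https://example.com/mixer?vocals=0.9&guitar=0.6&bass=0.8
```

Matching the target then becomes a matter of adjusting the gains until your sum sounds like the finished record.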

Mixing desk

Continue reading “Teaching mixing in a MOOC”

Play With Your Music curriculum design – learning to listen

This is the second in a series of posts documenting the development of Play With Your Music, a music production MOOC jointly presented by P2PU, NYU and MIT. Read the first post here.

Alex is fond of the phrase “pedagogies of timbre and space.” By that, he means: ways of studying those aspects of recorded music beyond the notes being played and the words being sung. Timbre is the combination of overtones, noise content, attack and decay that makes one instrument sound different from another. Space refers to the environment that the sound exists in, real or simulated. These are the aspects of music that get shaped by recording engineers, producers and DJs. Audio creatives usually don’t have much input into the stuff you see on sheet music, but they profoundly shape the final result, because the sonic surface is the main thing that most non-specialist listeners pay attention to (along with the beat). For many pop and dance styles, the surface texture is the most salient component of the music.

Frequency spectrum of an E9 chord on an electric guitar
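A spectrum like the one above is easy to compute yourself. Here’s a sketch that builds a crude two-partial “instrument” and reads its overtones back off the FFT; the frequencies are arbitrary example values:

```python
import numpy as np

sample_rate = 44100
t = np.arange(sample_rate) / sample_rate  # one second of time values

# A toy timbre: a 220 Hz fundamental plus a quieter third harmonic.
# Real instruments add many more partials, noise, and an envelope.
signal = np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * 660 * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / sample_rate)

# The two strongest bins land exactly on the partials we put in.
peaks = freqs[np.argsort(spectrum)[-2:]]
print(sorted(peaks.tolist()))  # [220.0, 660.0]
```

The relative strengths of those partials, and how they change over the course of a note, are what your ear hears as timbre.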

The work of audio professionals, be they recording artists, engineers, producers, remixers or DJs, consists mostly of close listening. Making recordings means constantly asking yourself: Does this sound good? If not, why not? Is there something missing? Or does something need to be taken out? Is the blend of timbres satisfying? Are the sounds placed well in the stereo field? Are they at the right perceptual distance from the listener? No one is born able to ask these questions, much less to answer them. You have to learn how, and then you have to practice. In a sense, music production software is like the Microsoft Office suite. Before you learn the fine points of formatting or making equations, you need to learn how to write coherently, how to organize data, how to structure a presentation. So it is with music: there’s no point in learning the nuts and bolts of particular software until you know what you’re listening for and what you want to achieve.

Continue reading “Play With Your Music curriculum design – learning to listen”