EQ (equalization) plugins are volume controls for specific parts of the frequency spectrum. Every DAW, mixing board and guitar amp has EQ controls, and they can radically transform your sounds. But while EQ is an essential part of audio engineering, it is also a source of confusion for beginners. In this post, I lay out some key vocabulary.
To understand how EQ works, you first need to learn what a frequency is. If you are a musician, this will be easy, because frequency is just another name for pitch. All sounds are produced by vibrating objects. Pitched sounds are produced by regular, steady vibrations. The faster an object vibrates, the higher the pitch it produces. Frequency is a measure of how many cycles of vibration there are per second: how many times the speaker cone goes in and out, how many times the guitar string flexes back and forth, how many times your vocal cords flap, how many times the air pushes harder or more gently against your eardrum.
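The cycles-per-second idea is easy to see in code. Here is a minimal sketch (the function name `sine_wave` is my own, not from any audio library) that computes the samples a speaker would play back for a pure tone:

```python
import math

def sine_wave(freq_hz, duration_s, sample_rate=44100):
    """Samples of a pure tone: one full in-and-out cycle repeats freq_hz times per second."""
    n = int(duration_s * sample_rate)
    return [math.sin(2 * math.pi * freq_hz * t / sample_rate) for t in range(n)]

# One second of the A-440 tuning pitch: the wave completes 440 cycles
# spread across 44,100 samples.
samples = sine_wave(440, 1.0)
```

Feeding these samples to a sound card at 44,100 samples per second is exactly what a synthesizer does when it plays a sine tone.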
This animation shows a stylized sound source agitating the air, producing regular pressure waves. You can determine the frequency of this sound by counting how many times the object pulses per second.
Frequency is measured in hertz (Hz), named after German physicist Heinrich Hertz. A frequency of 1 Hz means “one cycle of vibration per second.” When we say that the standard tuning pitch is A at 440 Hz, we mean that the tuning fork or speaker cone or guitar string is vibrating back and forth 440 times per second. If you play middle C on a guitar, the string will vibrate at a frequency of 261.626 Hz. The guitar’s low open E string vibrates at 82.41 Hz, and its high open E string vibrates at 329.63 Hz. The raw numbers don’t support my intuition well; it has helped me to see “100 Hz” and think, “ah, so, a little higher than the G at the third fret of the low E string on the guitar.”
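All of these pitch-to-frequency numbers come from one formula: in twelve-tone equal temperament, each semitone multiplies the frequency by the twelfth root of two. A small sketch, using the standard MIDI note numbering (69 = A-440, 60 = middle C):

```python
def note_to_hz(midi_note, a4_hz=440.0):
    """Equal temperament: every semitone step multiplies frequency by 2**(1/12)."""
    return a4_hz * 2 ** ((midi_note - 69) / 12)

# MIDI 60 = middle C (~261.63 Hz), MIDI 40 = guitar low E (~82.41 Hz),
# MIDI 64 = guitar high E (~329.63 Hz)
```

This is where figures like 261.626 Hz and 82.41 Hz in the paragraph above come from.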
Humans can hear frequencies between 20 Hz and 20,000 Hz. If something is vibrating slower than 20 times per second, you will hear individual thumps or clicks rather than a continuous tone. This simple web interactive demonstrates how clicks fuse into a tone at around 20 Hz. When you are hearing higher-pitched tones, it just means that the clicks are coming faster!
If something is vibrating faster than 20,000 Hz, humans can’t hear it at all. Many animals can, though; this is how dog whistles work. Note that 20,000 Hz is the theoretical upper limit of human hearing. As you get older, the high end of your hearing range falls off dramatically. You can lose your high-frequency hearing faster by going to rock concerts without wearing earplugs or otherwise failing to protect yourself from excessive decibel levels. Take care of your ears! Once your hearing is gone, it’s gone forever.
Here is where it gets more complicated. Natural sounds are a blend of many frequencies at once. About the only way to hear a single pure frequency is with a synthesizer. (When you whistle, you produce a fairly pure sine tone, though with some breath noise mixed in.) Whatever environment you are currently in, you are hearing many different frequencies jumbled together. It’s a minor miracle of human physiology that you can effortlessly pick particular frequencies out of hugely complex sound masses.
You might be surprised to learn that even a single note on a piano or guitar is a blend of several different frequencies. When we say that a guitar string is tuned to a frequency of 100 Hz, we really mean that its lowest-pitched vibration has a frequency of 100 Hz. Subsections of the guitar string also vibrate independently at frequencies of their own. These vibrations of string subsections are called harmonics. This animation shows the first six harmonics of a string:
The rich timbre of the guitar is produced by its harmonics interacting with each other and evolving over time. Each harmonic has a frequency that is a whole number multiple of the fundamental frequency. If the guitar string’s fundamental frequency is 100 Hz, then its other harmonics will have frequencies of 200 Hz, 300 Hz, 400 Hz, 500 Hz, and so on every 100 Hz up to infinity (in theory; in practice, harmonics get quieter as they get higher and they eventually become inaudible.)
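The whole-number-multiple rule is simple enough to sketch directly. Here is a toy additive-synthesis example (the 1/n amplitude falloff is my own assumption for illustration; real instruments have their own amplitude profiles, which is the point of the next section):

```python
import math

def harmonic_frequencies(fundamental_hz, count):
    """Harmonics are whole-number multiples of the fundamental frequency."""
    return [fundamental_hz * n for n in range(1, count + 1)]

def additive_sample(t_s, fundamental_hz, count):
    """One sample of a tone built from its first 'count' harmonics,
    with each harmonic's amplitude falling off as 1/n (an arbitrary choice)."""
    return sum(math.sin(2 * math.pi * f * t_s) / n
               for n, f in enumerate(harmonic_frequencies(fundamental_hz, count), start=1))
```

For a 100 Hz fundamental, `harmonic_frequencies(100, 5)` gives the series 100, 200, 300, 400, 500 Hz described above.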
If a single note on the guitar is a blend of dozens of different frequencies, imagine how many frequencies are present in a chord. Now imagine how many there are if the guitar chord is accompanied by vocals, bass, drums, keyboards, and so on. How do you keep track of all this information? You can visualize the frequencies using a computer program called a spectrogram. There is a good web-based spectrogram built into Chrome Music Lab. Click the microphone icon and make some sounds.
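Under the hood, a spectrogram is a frequency analysis (a Fourier transform) repeated over successive slices of time. As a rough illustration of the analysis step, here is a deliberately naive pure-Python sketch that finds the loudest frequency in a snippet of audio; a real spectrogram would use a fast FFT and do this on thousands of slices per second:

```python
import math

def dominant_frequency(samples, sample_rate):
    """Naive discrete Fourier transform: return the frequency bin
    with the most energy (skipping the 0 Hz / DC bin)."""
    n = len(samples)
    best_bin, best_mag = 0, 0.0
    for k in range(1, n // 2):
        re = sum(samples[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(samples[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mag = re * re + im * im  # squared magnitude; sign conventions don't matter here
        if mag > best_mag:
            best_bin, best_mag = k, mag
    return best_bin * sample_rate / n
```

Feed it a 440 Hz sine and it reports 440 Hz back; feed it a guitar note and it would report the loudest harmonic.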
In the Chrome Music Lab spectrogram, time goes from left to right, and frequencies go from bottom to top. Loudness is represented both by the “height” of the bumps and by their color.
For a more serious and professional spectrogram, try Voxengo Span, which is free and excellent.
Say “shhhh” or “hhhhhhh” into the spectrogram to see what it looks like. You are creating noise, which has a mathematical definition: it’s a blend of all the frequencies at once. Pure white noise is an equal blend of all the audible frequencies – think of TV static or a big waterfall. Now try singing a note. Your voice’s harmonics will appear as an orderly series of peaks. Sing different vowels and watch what happens to the harmonics. If you sing the word “wah” repeatedly, you will immediately see how a wah-wah pedal works – it cuts or boosts the higher harmonics.
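The “all frequencies at once” definition of noise has a neat consequence in code: you can make white noise just by generating random samples, with no frequency specified anywhere. A minimal sketch:

```python
import random

def white_noise(n_samples, seed=0):
    """White noise: every sample is independent and uniformly random,
    so on average all audible frequencies carry equal energy."""
    rng = random.Random(seed)  # seeded so the output is reproducible
    return [rng.uniform(-1.0, 1.0) for _ in range(n_samples)]
```

Run this through a spectrogram and you get the same broad smear you see when you say “shhhh.”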
If you play middle C on the piano and on the guitar, they will sound different. They are playing the same fundamental frequency accompanied by the same harmonics in the same order, so how can this be? The answer is that each harmonic will be louder or quieter in one instrument than in the other, and the harmonics will also decay and evolve differently over time. The image below shows the frequency spectra of four different instruments playing middle C:
Now that you know how to visualize the frequency spectrum, you are ready to understand EQ: it’s a device or software program that can cut or boost specific ranges of frequencies. An EQ might be as simple as the bass and treble knobs on a stereo, or it might give you individual control of dozens of frequency bands. When you were singing different vowels into the spectrogram, you were using your vocal tract to EQ your voice.
EQ has two sets of purposes, the practical ones and the creative ones. The practical uses are easy to understand: sometimes there are frequencies that you want to get rid of. For example, say you are recording vocals. If the lowest note you can sing has a fundamental frequency of 100 Hz, then you can be sure that any frequencies below that will be unwanted: thumps from someone hitting the mic stand, electrical hum, and so on. You can EQ out everything below 100 Hz to keep that stuff out. (In fact, microphones sometimes have a built-in EQ control for that purpose.) You can also use EQ to filter out things like feedback and quantization noise.
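The “cut everything below 100 Hz” move is called a high-pass filter, and a crude version of one fits in a few lines. This is a one-pole RC high-pass sketch of my own, far simpler than the filters in a real EQ plugin, but the behavior is the same in kind: frequencies below the cutoff are attenuated, frequencies above it pass through:

```python
import math

def high_pass(samples, cutoff_hz, sample_rate=44100):
    """One-pole RC high-pass filter: attenuates content below cutoff_hz
    (mic-stand thumps, electrical hum) while passing higher frequencies."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out = [samples[0]]
    for i in range(1, len(samples)):
        # Output follows *changes* in the input and forgets steady offsets.
        out.append(alpha * (out[-1] + samples[i] - samples[i - 1]))
    return out
```

Feed it a constant signal (0 Hz, the lowest “frequency” there is) and the output decays to silence, which is exactly what you want an anti-rumble filter to do.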
Audio engineers use EQ to help different sounds sit well together in a mix. Michael Jackson’s “Beat It” includes both a drum machine and an acoustic drum kit played by Jeff Porcaro. These are timbrally similar instruments playing very similar parts, and the resulting sound could easily be cluttered and unwieldy. Producer Quincy Jones and engineer Bruce Swedien used EQ to cut the high frequencies out of the drum machine and the low frequencies out of the kit. When you hear them in isolation, the drum machine sounds muffled, like it has a pillow over it, and the kit sounds thin and tinny, like it’s playing on a tiny speaker. When you hear them together, however, they fit together like two puzzle pieces. This is an extreme example; it’s more typical to use EQ for subtle cuts and boosts. For example, if the singer has a nice tone around 1500-1600 Hz, then you might gently cut that frequency range from the instruments to create more space. Mix engineers spend hours making EQ adjustments like this.
You can use EQ more aggressively to transform the sound of an instrument. If you cut all the highest and lowest frequencies out of your voice, then you sound like you’re on a megaphone or an old-time radio. You can EQ a snare drum to sound like a pistol shot or a cardboard box; a hi-hat to sound like a burst of static or a whisper; a guitar like a harp or a synthesizer. Pop, dance and hip-hop producers devote a large percentage of their studio time to this kind of sound shaping, combining EQ with compression, reverb, and many other effects.
So how do you learn to use EQ? It’s a bit of a dark art. It would be nice if there were some simple set of rules or guidelines, but none exist. The right EQ settings will depend on the audio you are putting in and what kind of sound you are hoping to get out. The style of the music matters, and so do your subjective tastes. Simple trial and error is ultimately the best learning method. That said, you don’t have to go in completely blind. Every EQ plugin comes with lots of presets. Those are good starting points. Say you are trying to find a good sound for an acoustic guitar. Ableton Live’s EQ 8 plugin has four presets for you to try: Acoustic Git EQ 1, Acoustic Git EQ 2, Bright Acoustic Guitar, and Clear Acoustic Guitar. Click through each one and see how you like it. Try fiddling with the parameters. Bright Acoustic Guitar gives a huge boost to the entire high end of the spectrum; maybe you don’t want your guitar to be that bright. If you come up with a good setting, save it! You will be on your way.