Friday, 7 March 2014

Amp envelope configurations

Hi, I'm Chris Farrell from Western Australia. The following lesson is for week 6 of Introduction To Music Production at Coursera.

Today I'll be demonstrating different amplitude envelopes on a synthesiser, so you can see how adjusting the attack time (A), decay time (D), sustain level (S) and release time (R) on a synth shapes how a sound's energy changes over time. We'll be looking at configuring synth patches that have switch, percussive, damped percussive, sustaining and quirk amplitude envelopes.

I'll be using the Gmedia ImpOSCar synth for today's demonstration, but any synth with a standard ADSR amplitude envelope will be able to create similar types of sounds.
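If you'd like to experiment with these shapes outside of a synth, here is a minimal sketch of a linear ADSR envelope generator in Python (using numpy). The function name and parameters are my own, and real synths usually use exponential rather than linear segments, but it's enough to illustrate each envelope type in this lesson:

```python
import numpy as np

def adsr(attack, decay, sustain, release, gate_time, sr=44100):
    """Build a linear ADSR amplitude envelope.

    attack, decay and release are times in seconds, sustain is a level (0-1),
    and gate_time is how long the key is held (note on to note off).
    """
    a = np.linspace(0.0, 1.0, max(1, int(attack * sr)), endpoint=False)
    d = np.linspace(1.0, sustain, max(1, int(decay * sr)), endpoint=False)
    gate_samples = int(gate_time * sr)
    if gate_samples >= len(a) + len(d):
        # Key held past the decay: hold at the sustain level until note off.
        s = np.full(gate_samples - len(a) - len(d), sustain)
        held = np.concatenate([a, d, s])
    else:
        # Key released early: the release starts from wherever the envelope got to.
        held = np.concatenate([a, d])[:gate_samples]
    level_at_release = held[-1] if len(held) else 0.0
    r = np.linspace(level_at_release, 0.0, max(1, int(release * sr)))
    return np.concatenate([held, r])
```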


Switch envelope


A switch envelope is used to recreate sounds that are only audible while a key is pressed - they begin when a note on event is received, and end when a note off event is received. Switch envelopes are typically used to create organ-style patches - for example the sound of a Hammond organ, which stops as soon as you release the key.

I set up an organ-style patch in the ImpOSCar synth with the following amp envelope settings:

A: very low
D: very low
S: 100%
R: very low

With a patch like this, the sound will jump almost immediately to the sustain level when the key is pressed (note on), and when the key is released (note off) the sound will end straight away. Take a listen to the organ patch:



What did you hear? The organ notes have no decay or release and end abruptly on note off, just like a real organ. If you take a look at the waveform you can also see the lack of decay or release, and the notes start and end very abruptly.
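In terms of the hypothetical adsr() helper sketched earlier, a switch-style patch would look roughly like this (values are approximate):

```python
# Organ-style "switch" envelope: near-instant attack, decay and release, 100% sustain.
# The length of the sound is governed almost entirely by how long the key is held.
organ = adsr(attack=0.005, decay=0.005, sustain=1.0, release=0.005, gate_time=1.0)
```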


Percussive envelope


A percussive envelope is used to create hit or plucked sounds, such as drum hits (kick, snare, hi-hat) or pizzicato-style (plucked) playing on a stringed instrument like guitar or violin. A percussive envelope has the sustain level and release time set to zero, normally a very short attack time, and the decay time sets the length of the percussive hit or pluck.

I set up a snare-style patch in the ImpOSCar synth with the following amplitude envelope settings:

A: almost zero
D: around 0.5s
S: 0
R: almost zero

Take a listen to the patch and note the characteristics of the sound:



What did you notice? The sound is very percussive and decays over the specified decay time. With a patch like this, the note length is not affected by the sustain level or release time, and the sound will always be the same length unless the note length is shorter than the decay time. Take a look at the waveform and notice how the hits begin immediately and then decay quite quickly. Waveforms for other percussive style sounds like hi-hats and claps will look very similar.
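Using the same hypothetical helper, the percussive patch is essentially all decay - because the sustain level is zero, holding the key longer doesn't change what you hear:

```python
# Snare-style "percussive" envelope: the 0.5s decay sets the length of the hit.
hit_short_hold = adsr(attack=0.002, decay=0.5, sustain=0.0, release=0.002, gate_time=0.6)
hit_long_hold  = adsr(attack=0.002, decay=0.5, sustain=0.0, release=0.002, gate_time=2.0)
# Both sound the same: once the decay reaches zero there is nothing left to sustain or release.
```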


Damped percussive envelope


A damped percussive envelope is used to recreate sounds of instruments like piano (where the notes are damped by the key dampers) or on guitar where strings can be damped by muting with the left or right hand.

A damped percussive envelope is similar to a percussive envelope except it will also have a release time set quite high - when the key is released the release part of the envelope will kick in, giving the sound a longer tail than strictly percussive sounds.

In the ImpOSCar synth I set up an electric-piano style patch with the following amp envelope settings:

A: almost zero
D: around 0.5s
S: 0
R: around 1.5s

Take a listen to the patch:



How did it sound compared to the percussive patch? The damped percussive sound has a bit more of a natural tail, as you can see by looking at the following waveform:


Instead of sounding like a drum hit, the sound has a piano-like characteristic as if the note is being damped by a key damper.

Sustaining (blown or bowed) envelope


To recreate the sound of a string or brass instrument, such as violin or trumpet, we can use a sustaining envelope. In this type of envelope the sound will have an initial attack and decay, fall to a specified sustain level while the note is held, and then fall to zero when the note is released.

I set up a string-style patch in the ImpOSCar synth with the following settings:

A: around 0.6s
D: around 4s
S: around 33%
R: around 1.5s

Take a listen to the patch and note the characteristics of the sound:



This sounds more like a bowed instrument than the other patches due to the sustaining envelope - while the note is held the sound falls to the sustain level, and when the note is released the sound fades out to zero. The sound also has a natural fade in because of the attack time. In this example the next note starts before the previous envelope has finished, creating a continuous sound as shown in the following waveform representation:


Quirk envelope


The last type of envelope we'll be looking at is what Loudon calls a quirk envelope. This type of envelope has a very low attack time, a short decay time, a sustain level of 0 and a very long release time. Because the release phase starts at note off, releasing the note before the decay has finished means the release part of the envelope takes over. This can be useful for creating sounds that have a very short percussive beginning followed by a long release tail - not to model a real-world instrument, but to create a different kind of sound.

I set up a patch with a quirk envelope in the ImpOSCar synth, with these settings:

A: very low
D: around 0.6s
S: 0
R: around 4s



As you can hear, the sound has a very quick pluck at the beginning because of the extremely short note length I used to play back the patch, and a long tail because of the release setting. Take a look at the waveform and you can see this characteristic.
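As a rough sketch with the same hypothetical helper, the quirk behaviour comes from releasing the note while the decay is still in progress, so the very long release takes over from whatever level the envelope has reached:

```python
# "Quirk" envelope: short decay, zero sustain, very long release.
# A 50ms note is released mid-decay, so the 4-second release phase starts from a
# still-high level - a short pluck followed by a long tail.
quirk = adsr(attack=0.005, decay=0.6, sustain=0.0, release=4.0, gate_time=0.05)
```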


Conclusion


Today you've learned how a synth's amplitude envelope can be used to create sounds similar to real instruments. We saw how you can create organ-style patches with a switch envelope, or percussive hit or pluck patches with a percussive envelope. We then saw how you could create an electric piano style patch with a damped percussive envelope, a string style patch with a sustaining envelope, and we also looked at the quirk envelope.

I hope you've enjoyed reading and listening today and that having audio clips, waveforms and showing the envelope settings has demonstrated each type of envelope clearly for you. Feel free to leave any comments and let me know if any of my information is incorrect. Thanks for reading!

Sunday, 2 March 2014

Configuring an EQ plugin to function like a mixing console EQ section

Hi, I'm Chris Farrell from Western Australia. The following lesson is for week 5 of Introduction To Music Production at Coursera.

In today's lesson I'll be configuring an EQ plugin (WaveArts TrackPlug) to function like a mixing console EQ section. I've chosen the Solid State Logic (SSL) 4000 G Series mixing console to base my EQ plugin settings on, which was a very popular mixing console in the 80s.

The SSL console's EQ section


The EQ section on the particular console I'm referencing (with the SL611G module) comprises a four band parametric equaliser plus high and low pass filters. Here are the specifications of each filter in the console's EQ section:

  • Hi pass filter: 0 - 350Hz with a 12dB/octave slope
  • LF (low frequency) band: a 12dB/octave low shelf filter with a range of 30Hz - 450Hz, and boost/cut control
  • LMF (low-mid frequency) band: frequency adjustable from 0.2 - 2.5kHz, with a Q of 0.5 - 3*, and boost/cut control
  • HMF (high-mid frequency) band: frequency adjustable from 0.6 - 7kHz, with a Q of 0.5 - 3*, and boost/cut control
  • HF (high frequency) band: a 12dB/octave high shelf filter with a range of 1.5kHz - 16kHz, and boost/cut control
  • Low pass filter: 3kHz to above 12kHz with a 12dB/octave slope
* Note: this Q range roughly corresponds to a bandwidth of just over 2 octaves (Q = 0.5) down to about half an octave (Q = 3)

Now that we have a general idea of the SSL console's EQ bands and filters we can start to recreate this in a software plugin.

High pass filter


A high pass filter is used to remove frequencies below a sound's fundamental frequency which don't contribute to the sound and tend to muddy up a mix - such as noise, rumbles and vibrations.

In my EQ plugin TrackPlug I'm going to add a new Highpass band with a 12dB/octave slope, and set the cutoff frequency to 20Hz. Later, when mixing, I'll be able to adjust the cutoff frequency to set where the low frequencies start being cut. Note that TrackPlug doesn't allow me to specify the filter slope, but it is fixed at 12dB/octave in the plugin. To match the operation of the SSL4000G I would adjust the cutoff between 0 and 350Hz during mixing.
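If you want to experiment with this kind of filter outside a plugin, a 12dB/octave (second-order) high pass is easy to build with scipy. This is just a sketch, and a Butterworth response won't match the SSL's filter exactly:

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 44100       # sample rate in Hz
cutoff = 20.0    # high pass cutoff in Hz (adjustable 0-350Hz to mimic the SSL)

# A 2nd-order filter rolls off at 12dB per octave.
sos = butter(2, cutoff, btype='highpass', fs=fs, output='sos')
filtered = sosfilt(sos, np.random.randn(fs))   # filter one second of test noise
```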


LF band (low shelf)

A low shelf filter can be used to boost or cut the low end - for example you might want to cut lower frequencies from guitar and vocals to make room for bass guitar. Shelving filters are gentle and provide a uniform volume shift beyond the cutoff frequency, unlike high pass or low pass filters, which have a progressive volume shift.

In TrackPlug I'll add a new Lo Shelf band, set at 30Hz. TrackPlug doesn't allow me to specify the slope which is set at 12dB/octave automatically, but I'll be able to adjust the cutoff frequency and also whether a cut or boost is made (by adjusting the height). To match the operation of the SSL4000G I would adjust the cutoff between 30 and 450Hz during mixing.


LMF band 

This is a mid-range parametric EQ band which can be useful for removing unwanted resonances. It's generally better to cut rather than boost when adjusting in this range, to avoid phase problems and distortion.

I'll add a parametric band with a Q (bandwidth) of around 0.5 octaves, and a cutoff frequency around 300Hz. Later I'll be able to adjust the cutoff frequency, height (boost or cut) and width (Q) to target any unwanted resonances. To match the operation of the SSL4000G I would adjust the cutoff between 0.2 and 2.5kHz.
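For reference, a parametric (peaking) band like this can be written as a single biquad filter. This sketch uses the widely published Audio EQ Cookbook formulas (Robert Bristow-Johnson); the relationship between the console's Q numbers and bandwidth in octaves is only approximate:

```python
import math

def peaking_band(f0, gain_db, q, fs=44100):
    """Biquad peaking EQ coefficients (RBJ Audio EQ Cookbook), as one second-order section."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b0, b1, b2 = 1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A
    a0, a1, a2 = 1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A
    return [b0 / a0, b1 / a0, b2 / a0, 1.0, a1 / a0, a2 / a0]  # scipy 'sos' layout

# Example: a 3dB cut at 300Hz with a Q of 2 (roughly a 0.7 octave bandwidth).
section = peaking_band(300.0, -3.0, 2.0)
```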


HMF band

Like the LMF band, this is a mid-range parametric filter and can be useful for removing unwanted resonances.

I'll add a parametric band with a Q of around 0.5 octaves, and a cutoff frequency around 2000Hz. Later I'll be able to adjust the cutoff frequency, height (boost or cut) and width (Q) to target any unwanted resonances. To match the operation of the SSL4000G I would adjust the cutoff between 0.6 and 7kHz.


HF band (high shelf)

A high shelf filter can be used to boost or cut the high end to add brightness, particularly used on an element that you want to focus on in the mix. For example you could apply a high shelf boost to a vocal, while applying a high shelf cut to competing instruments, to bring clarity and focus to the vocal.

In TrackPlug I'll add a new Hi Shelf band, set at around 3500Hz. TrackPlug doesn't allow me to specify the slope which is set at 12dB/octave, but I'll be able to adjust the cutoff frequency and also whether a cut or boost is made (by adjusting the height). To match the operation of the SSL4000G I would adjust the cutoff between 1.5 and 16kHz.


Low pass filter

The final filter to be added is a low pass filter, and although probably used less often than the others, it can be useful to remove all frequencies above a cutoff frequency.

In TrackPlug I'll add a new Lopass band, set the cutoff frequency to 20kHz, and bypass the band so it's not active. Later when I'm mixing I'll be able to adjust the cutoff frequency. Note that TrackPlug doesn't allow me to specify the filter slope but it is fixed at 12dB/octave in the plugin.


Saving a preset with these settings

In my DAW, Presonus Studio One, I can click on the Preset menu item above the plugin GUI and choose "Save Preset As" to save a new preset. I've called it "Mixing EQ - SSL4000G" and I'll be able to load this preset whenever I want to start mixing with EQ bands similar to the famed SL 4000 series:


Putting it all together

To put this preset into practice I've loaded a vocal recording into my DAW and changed the EQ settings in the following way:


  • Hi pass filter: changed cutoff to 46Hz with -24dB cut to remove noise and rumbles
  • LF band (low shelf): changed cutoff to 345Hz with a slight cut to give other instruments (guitar, bass, kick) slightly more room
  • LMF: bypassed
  • HMF: bypassed
  • HF (high shelf): slight boost at 1.5kHz to make the vocal sound brighter
  • Low pass filter: bypassed
As you can see the lower frequencies are cut from the vocal while I've boosted the top end to make it brighter - this should allow the vocal to sit better with competing instruments like guitar and keys.

Conclusion


Today you've seen how to transfer the EQ section capabilities of a large format mixing console such as the SSL 4000 G series to a software EQ plugin - and save the settings as a preset to use as a starting point for general mixing EQ.

I hope you've enjoyed reading this and have learnt something, and feel free to let me know if you saw any errors. Note that I don't own an SSL4000 G series but I researched mixing consoles and found the user manual for the SSL 4000G and thought it would be good to base my assignment on this console. Apparently they claim that the 4000 series has had more platinum selling albums mixed on it than all other consoles combined.





Friday, 21 February 2014

An effective use of a compressor in a musical context

Hi, I'm Chris Farrell from Western Australia. The following lesson is for week 4 of Introduction To Music Production at Coursera.

In today's lesson we'll be looking at using a compressor effectively in a musical context - we'll be applying compression to a bass guitar track to even out the overall level, making it sound louder and more even.

What is a compressor?


The simplest way to describe a compressor is that it is an automatic volume control, and in music production we can use a compressor to even out the level on a track by reducing its dynamic range. The loud parts are turned down and, once the output gain is raised to compensate, the quiet parts end up louder, so the overall level becomes more even. It's good to leave some dynamic range when producing music, but evening out the overall level can help an instrument performance sound more professional and polished. In some forms of music like dance and electronic, excessive compression is also used as an effect, but today we'll be looking at compression as a corrective tool.


Using a compressor to even out a bass track


In my DAW I recorded a simple bass guitar performance which you can listen to here:



The first thing you'll notice after listening to the recording is that the bass guitar note levels are quite erratic and uneven. Taking a look at the waveform display confirms it: the notes are all at different levels. We want the bass guitar to be even, to provide a nice, rounded low end for the track, so we need to use a compressor.




In a scenario like this we can use a compressor as an insert effect to make the loud notes quieter; when the output gain is then increased to compensate, the quiet notes are effectively made louder and the bass guitar sounds more even. I've decided to use the T-Racks OptoCompressor, but any compressor should be able to achieve a similar result.


I've set the compressor to the following settings:

  • Ratio: 11:1 - this is actually a high ratio and is effectively acting as a limiter. If the bass player (me!) was more even in their playing you wouldn't need such a high ratio, and I also wanted to use a high ratio to demonstrate the resulting compression
  • Attack: 0.40ms
  • Release: 118ms
  • Compression amount: 8.2 (this acts like the Threshold control on other compressors)
  • Output gain: +11.3dB
I've set a short attack as I wanted the transients to be compressed too, and a longer release to make the compression sound more natural. I could have used a longer attack time to make the bass more punchy - as the transients wouldn't have been compressed as much - but for the purpose of this lesson I used a short attack time to make the bass sound more even and full compared to the punchy drums. The output gain was increased to bring the overall level back up due to the gain reduction applied.
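To make these settings a little more concrete, here's a minimal sketch of what a basic feed-forward compressor does internally. It's deliberately simplified (hard knee, one-pole attack/release smoothing, and a made-up -20dB threshold standing in for the OptoCompressor's "compression amount" control), so it won't behave exactly like the plugin:

```python
import numpy as np

def compress(x, threshold_db=-20.0, ratio=11.0, attack_ms=0.4,
             release_ms=118.0, makeup_db=11.3, fs=44100):
    """Minimal feed-forward compressor for a mono numpy signal x (floats in -1..1)."""
    # One-pole smoothing coefficients derived from the attack and release times.
    atk = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    out = np.zeros_like(x)
    gain_db = 0.0                                   # smoothed gain reduction in dB
    for i, sample in enumerate(x):
        level_db = 20 * np.log10(max(abs(sample), 1e-9))
        over = level_db - threshold_db
        # Static curve: above the threshold, every `ratio` dB in gives only 1 dB out.
        target_db = -over * (1 - 1 / ratio) if over > 0 else 0.0
        # Move quickly (attack) when more reduction is needed, slowly (release) when less.
        coeff = atk if target_db < gain_db else rel
        gain_db = coeff * gain_db + (1 - coeff) * target_db
        out[i] = sample * 10 ** ((gain_db + makeup_db) / 20.0)
    return out
```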

Take a listen to the following audio clip: in the first two bars the bass is uncompressed, and in the last two bars compression has been applied to the bass guitar. Listen carefully to the bass guitar notes and see if you can hear the difference:



What did you notice? In the second half of the clip, with compression applied, the bass notes sound much more even and the overall level seems louder. The playing sounds less erratic, and the bottom end is generally fuller. Take a look at the waveform of the compressed bass clip and see if you can notice how it has changed:


You can see the loud notes are now quieter, the quieter notes are now louder, and overall the note levels are all much closer which explains what we heard when listening to the compressed bass.

Conclusion


You should now see how a compressor can be used to even out the levels of notes in a performance - in this case evening out bass guitar notes so the low end of the track is more even and full. The compressed bass definitely sounded more professional, well except for my lack of timing in the performance! I could have adjusted the attack of the compressor to let some of the bass guitar transients come through more, which would have made the sound a bit more punchy, but as the drums were already quite punchy I wanted to make the bass sound more rounded and even.

Thank you for taking the time to read this post, and please leave a comment if any of the information I've provided is incorrect or if you have any tips from your experience using compression to correct a musical performance. You may find, like me, that your ears are still getting used to hearing the sound of compression so play the audio clips a few times until you're happy you can hear the difference.

Saturday, 15 February 2014

Submixes

Hi, I'm Chris Farrell from Western Australia. The following lesson is for week 3 of Introduction To Music Production at Coursera.

Today's lesson is about the concept of submixes and how they can be used to organise a mix and to make mixing easier. Some mixes will have a large number of tracks, and using submixes allows related tracks to be routed so that they can be controlled with one fader in the mixer.

Mixing without submixes

Take a look at this diagram which shows the signal flow if I was recording a simple band - I have a number of drum instruments on individual tracks, as well as a bass guitar, acoustic guitar and a vocal. The output of all these tracks is routed to the master output of my DAW.



During mixing, I've adjusted the volume fader of every single track until the balance is just right to my ears. But then what happens if I decide that the drums are too loud?

Because all the tracks are routed to the master out, it means I'd have to go through and lower the volume faders of each of the drum tracks to make the drums quieter. Not only is this time consuming, but there's also the risk I could ruin the balance of my drum kit if my DAW doesn't allow me to group faders. 

A better way would be to allow my drums to be controlled by one volume fader, while retaining the relative volumes of each drum track. This can be achieved in a DAW by creating a submix.

Submixes

Creating a drums submix can be achieved by creating a new bus track in a DAW and then routing the output of each drum track channel to the bus, which I would label as "Drums submix" or "Drum bus". The bus would be routed to the master output like the other tracks.



Now when I'm mixing I'd only need to control volume faders of four tracks - the drum submix, bass guitar, acoustic guitar and vocals - without affecting the relative volumes of my individual drum sounds. The whole drum kit's level can be controlled with one fader, saving me time while mixing, but it also gives me another advantage as I can now apply effects to the whole drum submix. For example I might want to apply gentle compression to the whole drum kit to glue the kit together, and I could simply insert a compressor plugin onto the drum bus to achieve this. This would be impossible without a submix.
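Conceptually, a submix is just a point where related signals are summed before a single shared gain is applied. Here's a toy sketch of that routing (the track contents and gain values are made up purely for illustration):

```python
import numpy as np

def fader(db):
    """Convert a fader position in dB to a linear gain."""
    return 10 ** (db / 20.0)

n = 44100  # one second of audio at 44.1kHz
kick, snare, hats = (np.random.randn(n) * 0.1 for _ in range(3))  # stand-in drum tracks

# Each drum track keeps its own fader, then the summed drum bus gets a single fader.
drum_bus = kick * fader(0.0) + snare * fader(-3.0) + hats * fader(-6.0)

# Turning the whole kit down 4dB is now one change, and the kit's internal balance is preserved.
master = drum_bus * fader(-4.0)  # bass, guitar and vocal tracks would be summed in here too
```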

I could organise my mix even further by creating another submix for the individual tom sounds by routing them to a new bus, which would be routed to the drum bus. I could also create a submix for cymbals and hats by routing them to another bus just for these types of drum hits. Now within my drums, I can control the kick, snare, cymbals/hats and toms with four faders, giving me greater flexibility. Meanwhile the overall drum kit level could still be controlled by adjusting the drum bus fader.

Other uses of submixes

Submixes can be used in any situation where you want to group related tracks so that they can be controlled with a single volume fader while mixing. Some examples are:

  • In an orchestral mix, the producer could create a string submix, horn submix and woodwind submix to allow each section to be controlled with a single fader
  • For recording a rock band with multiple guitar parts, a producer could route lead and rhythm guitars to a guitar submix, so the guitars level can be controlled with one fader and to give the guitars the same effects treatment
  • If a producer had recorded a number of background vocalists on individual tracks, these could be routed to a background vocal submix to be controlled with a single fader in a mix.

Conclusion

You should now see the major benefits of using submixes in your mix, especially in complex mixes with many tracks. Being able to route similar tracks to a submix allows them to be controlled with a single fader, which not only makes mixing more efficient but allows related tracks to receive the same effect treatment, saving your CPU and providing flexibility. I'd encourage you to open your DAW and investigate how to create a bus channel, route other channels/tracks to the bus, and then adjust the volume fader of the bus channel. You could also experiment with inserting an insert effect onto the bus to process all channels routed to it.

Thank you for taking the time to read this lesson, and feel free to leave a comment if any of my lesson is inaccurate, or if you've got any tips for using submixes when mixing. 

Chris










Monday, 10 February 2014

Recording MIDI in Presonus Studio One

Hi, I'm Chris Farrell from Western Australia. The following lesson is for week 2 of Introduction To Music Production at Coursera.

Today I will be teaching you how to record MIDI (Musical Instrument Digital Interface) data into a DAW (Digital Audio Workstation). I'll be setting up and recording MIDI into Presonus Studio One, but the steps I am presenting would be similar in most DAWs.

Enough acronyms already, let's jump straight into the lesson!

Connecting a MIDI keyboard to the computer

Before you begin recording MIDI it's worth noting the devices that you'll need to follow this tutorial:
  • An external MIDI keyboard
  • A MIDI in port - your audio interface may have a MIDI in and out port, or you could have a dedicated USB device for this
  • A MIDI cable, connected from the MIDI out of the keyboard to the MIDI in port

Setting up the DAW

Next you will need to set up your DAW to be able to record incoming MIDI from your MIDI keyboard (input device), on one or more MIDI channels. In Studio One this can be achieved by following these steps:
  1. On the Start page, click on "Configure External Devices..."
     
  2. Click the "Add" button

  3. Studio One comes with a number of MIDI device presets, so if your device is available you can select it from the list.

  4. Next choose the MIDI channel(s) and MIDI port that you want to receive MIDI data through.

    I've set up Studio One to receive MIDI data through MIDI channel 1 on "Fireface Midi Port 1", which is the first MIDI in port on my audio interface. I've also ticked the box next to "Default Instrument Input" so that whenever I create an instrument track it will automatically be set up to use my MIDI keyboard as the input device.
  5. To test if Studio One is successfully receiving MIDI data from the MIDI keyboard, Studio One has a MIDI monitor located in the transport bar. You can press keys on your MIDI keyboard and see if the DAW is receiving the input.



    The triangle to the left of the MIDI logo will turn orange when MIDI input is being received.

Creating an instrument track

I've now set up a MIDI keyboard, and Studio One will receive input data from the device through MIDI channel 1 of MIDI port 1 - but currently the 0s and 1s are hitting the DAW and going nowhere fast. We need to give them a voice!
  1. Right click on the empty track area and choose "Add Instrument Track"

  2. The track will already be set up to receive MIDI from my keyboard because of the setup I did earlier in the tutorial, and when I hit keys on the keyboard a yellow meter shows the track is receiving note on and off events

  3. To add an instrument to the track, choose an instrument plugin from the "Instruments" tab of the browser and drag it onto the instrument track. The instrument GUI (graphical user interface) will open, but you can close it for now.

      

  4. It's good practice to name your tracks, so I've renamed this track to reflect the kind of sound the instrument will be generating.

Prepare for recording

Next I am going to prepare the track for recording by following these steps

  1. Arm the track for recording - as you can see in the previous picture the track is already armed
  2. Enable the metronome and count-in. In Studio One this is achieved by clicking on the metronome icon in the transport bar, and then clicking on the metronome settings icon. I've set the recording to precount 2 bars, so that recording doesn't start immediately.

  3. Set the recording position in the timeline - in Studio One you can click in the timeline to set the recording/playback position, which I have set to begin at the start of the second bar so you can clearly see where the recording will begin.

  4. Start recording. In Studio One I can start recording by pressing * on the numeric keypad, or by clicking record in the transport bar.

Quantizing a MIDI performance

I can double click on the MIDI clip that I recorded and it will open in the piano roll.


The recording wasn't too bad but there are definitely some timing issues, particularly the note in the middle which is quite late and sounds way off to my ears. Timing issues can be fixed by quantizing MIDI to a grid, so in Studio One I select the MIDI notes I want to quantize and then click the Quantize icon in the toolbar - the large letter "Q".


Quantize settings

In the Quantize settings there are quite a few settings I can adjust:

  • Quantize grid note length: as my performance was mostly eighth notes with a couple of 16ths, I've set the grid to 16th notes. If I quantized to an 8th note grid, my 16th notes would become 8th notes.
  • What data will be affected by the quantize? In Studio One you can apply Quantizing to note start position, note end position and velocity. You can also apply this as a percentage so the notes aren't all quantized right to the grid which would sound quite robotic and have no feel. Applying the quantize as a percentage will leave some feel and groove in the performance.
  • Apply : in Studio One click the "Apply" button to quantize the selected notes with the chosen settings.

In this example, I started with a 20% quantize on the note start position, which I applied and then listened to the result. I ended up applying 70% quantize to get the feel right - yes, my timing was that bad!
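Under the hood, a strength-based quantize simply moves each note start part of the way toward the nearest grid line. Here's a rough sketch of that idea (times in beats, a 16th-note grid in 4/4, and the function name is my own):

```python
def quantize_starts(starts, grid=0.25, strength=0.7):
    """Move note start times (in beats) toward the nearest grid line.

    grid=0.25 beats is a 16th-note grid in 4/4; strength=1.0 snaps hard,
    lower values keep more of the original feel.
    """
    quantized = []
    for t in starts:
        nearest = round(t / grid) * grid
        quantized.append(t + (nearest - t) * strength)
    return quantized

# e.g. a late note at 1.31 beats moves most of the way toward 1.25 at 70% strength
print(quantize_starts([0.02, 0.49, 1.31], strength=0.7))
```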

Studio One tip: for a quick quantize you can also hit ALT + Q on your keyboard to perform a 50% quantize.

Conclusion

That concludes my lesson for this week - you've seen how to set up a DAW like Studio One to receive input from a MIDI device like a MIDI keyboard, how to record MIDI to an instrument track, and how to quantize MIDI data. I hope this has helped you to understand how different MIDI data is from audio data - with MIDI it is very easy to adjust note timing after recording. You could also adjust note lengths, the notes themselves (transposing) and note velocity, and record MIDI CC (control change) messages, but that is beyond the scope of this tutorial. Your DAW may also have much more advanced quantizing features than I covered here, so I encourage you to look in your DAW's user guide.

I hope you enjoyed reading this as much as I enjoyed making it, and feel free to let me know if any of my information is incorrect or could be improved.

Until next time...
Chris


Saturday, 1 February 2014

Microphone directionality and polar patterns

Hi, I'm Chris Farrell from Western Australia. The following lesson is for week 1 of Introduction To Music Production at Coursera.

Today I will be teaching you about microphone directionality and polar patterns and how they affect what sound is recorded from different directions. A polar pattern is essentially a diagram showing a microphone's sensitivity in picking up sound from different directions, and some of the more common polar patterns include:

  • cardioid (and variations including super or hyper cardioid): with this pattern, the microphone is most sensitive to sound coming from in front of the microphone
  • bidirectional (also called figure 8): with this pattern the microphone is sensitive to sound coming from in front of and behind the microphone, but it will reject sound coming from the sides
  • omnidirectional: with this pattern the microphone will pick up sound from all directions equally.

I have created three recordings using a budget condenser microphone, where during each recording I changed the polar pattern setting on the microphone. You will be able to hear how omnidirectional, figure 8 and super cardioid settings on my microphone affect how the sound is recorded from different directions.
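All three of these patterns can be described by one simple first-order equation that blends an omnidirectional (pressure) response with a figure-8 (pressure-gradient) response. Here's a small sketch; the supercardioid mix value is the commonly quoted approximation:

```python
import numpy as np

def pickup(theta_deg, k):
    """Relative sensitivity at angle theta_deg.
    k = 0 gives omnidirectional, k = 0.5 cardioid, k ~ 0.63 supercardioid, k = 1 figure-8."""
    theta = np.radians(theta_deg)
    return abs((1 - k) + k * np.cos(theta))

for name, k in [("omnidirectional", 0.0), ("figure-8", 1.0), ("supercardioid", 0.63)]:
    front, side, rear = pickup(0, k), pickup(90, k), pickup(180, k)
    print(f"{name:>15}  front={front:.2f}  side={side:.2f}  rear={rear:.2f}")
```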

Take a look at the diagram below to see how I set up the microphone in relation to a number of sound sources:
  • I stood directly in front of the microphone to record my voice
  • Directly behind the microphone I placed a metronome
  • On the other side of the room away from the microphone I placed a radio playing some music

Omnidirectional pattern


For the first recording I set the microphone to use the omnidirectional pattern. Omnidirectional means from all directions, so recording with this pattern should pick up a lot of ambient sound from around the room.



Take a listen and make a mental note of what you hear. 

Hint: Turn up your volume level so that you can just hear the radio music playing in the background of this recording. This will help you as you analyse all the recordings.



In the recording, you can clearly hear music playing on the radio and the metronome, and my voice doesn't sound as focussed as it should. The recording also sounds noisy as it is picking up other noises such as my computer's fan. This is all because, with an omnidirectional pattern, the microphone is picking up sounds from all directions. For a voice recording it would be better if the microphone was more focussed on the direction of my voice.

Figure of 8 (bidirectional) pattern


For the next recording I set the microphone to use a Figure 8, or bidirectional, pattern. Recording with this pattern, the microphone should pick up sound directly in front of and behind the microphone.



Listen to the figure 8 recording and hear how it is different from the first recording.



In the recording with the Figure 8 pattern, the level of the ambient sound and noise is considerably lower, and the metronome and my voice can be heard more clearly. This is because with a Figure 8 pattern the microphone is more sensitive to picking up sounds directly in front of and behind the microphone, and rejecting sound from the sides. This voice recording sounds much better, however it could still be improved!

Supercardioid pattern


For the final recording I set the microphone to use a super cardioid pattern. Cardioid and super cardioid patterns are unidirectional, meaning they will be focussed on picking up sound from directly in front of the microphone. Super cardioid has a narrower pickup to the front of the microphone compared to a cardioid pattern, but will also pick up a little from behind.



Listen to the final recording and hear the difference from the previous recording.



In this final recording, the ambient sound is a little quieter, the level of the metronome has dropped, and the sound of my voice is more focussed. This is because the microphone is rejecting more sound from the sides and behind the microphone, and picking up sound from directly in front of the microphone better.

With this budget microphone set to a super cardioid pattern, I have managed to isolate the sound of my voice to a reasonable extent without using an isolation booth. You can hear that by using a cardioid or super cardioid pattern it is much easier to isolate sound sources based on their direction – and this applies to recording any sound source in the studio such as individual drum sounds, instruments and vocals. Recording with an omnidirectional pattern is useful to capture ambient sounds such as a busy street, or to capture various sound sources from multiple directions at the same time. A Figure 8 pattern is useful for recording two different sound sources at once, such as two vocalists or instruments - by picking up sound in front of and behind the microphone and rejecting sound to the sides.

Thanks for listening!

I'd love to hear your feedback on my recordings and descriptions, and please let me know if I could have explained anything better or if any of this information is incorrect. I've never recorded a band before, so also feel free to share your tips about using different polar patterns while recording drums, instruments and vocalists.

I haven't run any effects over the audio recordings as I wanted you to be able to hear how the omnidirectional recording has much more ambient noise than the others. In a studio recording situation you would normally try to isolate a vocalist better, and you could also look at applying noise reduction in an audio editor to clear up low level noise. As it is a budget microphone recorded through a budget preamp it would be noisier than more expensive microphones, and I made no attempt to dampen other background noises such as my computer. Oh, and I should have used a pop filter!