Friday, 21 February 2014

An effective use of a compressor in a musical context

Hi, I am Chris Farrell from Western Australia. The following lesson is for week 4 of Introduction To Music Production at Coursera.

In today's lesson we'll look at using a compressor effectively in a musical context: we'll apply compression to a bass guitar track to even out its level, making it sound louder and more consistent.

What is a compressor?


The simplest way to describe a compressor is as an automatic volume control: in music production we can use a compressor to even out the level of a track by reducing its dynamic range. The loud parts become quieter and, once make-up gain is applied, the quiet parts effectively become louder, so the overall level becomes more consistent. It's good to leave some dynamic range when producing music, but evening out the overall level can help an instrument performance sound more professional and polished. In some genres, such as dance and electronic music, heavy compression is also used as an effect, but today we'll be looking at compression as a corrective tool.
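To make the "automatic volume control" idea concrete, here is a minimal sketch of a compressor's static gain computer in Python. Above a threshold, every "ratio" dB of input yields only 1 dB of output; the threshold and ratio values below are illustrative numbers, not settings from my session.

    def compressor_gain_db(level_db, threshold_db=-20.0, ratio=4.0):
        """Static gain curve: levels above the threshold are scaled down.

        For input 'ratio' dB above the threshold, the output rises only 1 dB.
        Returns the gain change (in dB) to apply to the signal.
        """
        over = level_db - threshold_db
        if over <= 0.0:
            return 0.0                       # below threshold: no change
        compressed_over = over / ratio       # e.g. 12 dB over at 4:1 -> 3 dB over
        return compressed_over - over        # negative number = gain reduction

    # A note peaking 12 dB above the threshold at 4:1 is turned down by 9 dB:
    print(compressor_gain_db(-8.0))          # -> -9.0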


Using a compressor to even out a bass track


In my DAW I recorded a simple bass guitar performance which you can listen to here:



The first thing you'll notice after listening to the recording is that the bass guitar note levels are quite erratic. Looking at the waveform display you can see the same thing: the notes all peak at different levels. We want the bass guitar to provide a nice, rounded low end for the track, so we need to use a compressor.




In a scenario like this we can use a compressor as an insert effect to make the loud notes quieter; when the output gain is then increased to compensate, the quiet notes are effectively made louder and the bass guitar sounds more even. I've decided to use T-Racks OptoCompressor, but any compressor should be able to achieve a similar result.


I've set the compressor to the following settings:

  • Ratio: 11:1 - this is a high ratio and is effectively acting as a limiter. If the bass player (me!) had been more even in their playing you wouldn't need such a high ratio, but I also wanted a high ratio to clearly demonstrate the resulting compression
  • Attack: 0.40ms
  • Release: 118ms
  • Compression amount: 8.2 (this acts like the Threshold control on other compressors)
  • Output gain: +11.3dB
I've set a short attack as I wanted the transients to be compressed too, and a longer release to make the compression sound more natural. I could have used a longer attack time to make the bass punchier - as the transients wouldn't have been compressed as much - but for the purpose of this lesson I used a short attack time to make the bass sound more even and full next to the punchy drums. The output gain was increased to bring the overall level back up after the gain reduction.
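For readers who like to see the moving parts, here is a minimal sketch of a feed-forward compressor in Python with the same kinds of controls - threshold, ratio, attack, release and output (make-up) gain. This is a common textbook design, not the internals of the OptoCompressor, and the default values are placeholders rather than my exact session settings.

    import numpy as np

    def compress(signal, sr, threshold_db=-18.0, ratio=11.0,
                 attack_ms=0.4, release_ms=118.0, makeup_db=11.3):
        """Feed-forward compressor sketch (textbook design, not any plugin)."""
        # One-pole smoothing coefficients derived from the attack/release times.
        atk = np.exp(-1.0 / (sr * attack_ms / 1000.0))
        rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
        out = np.zeros_like(signal)
        env_db = -120.0                          # smoothed level estimate in dB
        for i, x in enumerate(signal):
            level_db = 20.0 * np.log10(max(abs(x), 1e-6))
            coeff = atk if level_db > env_db else rel   # attack on rise, release on fall
            env_db = coeff * env_db + (1.0 - coeff) * level_db
            over = env_db - threshold_db
            gr_db = over / ratio - over if over > 0.0 else 0.0
            out[i] = x * 10.0 ** ((gr_db + makeup_db) / 20.0)
        return out

    # Quiet and loud notes end up much closer in level:
    sr = 44100
    t = np.linspace(0, 1, sr, endpoint=False)
    note = np.sin(2 * np.pi * 110 * t)
    bass = np.concatenate([0.1 * note, 0.9 * note])  # an uneven "performance"
    even = compress(bass, sr)
    print(np.max(np.abs(even[:sr])), np.max(np.abs(even[sr:])))

Notice how a short attack time becomes a smoothing coefficient that lets the level estimate rise almost instantly, so even the transients are caught, while the long release lets the gain recover gradually - the behaviour described above.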

Take a listen to the following audio clip: in the first two bars the bass is uncompressed, and in the last two bars compression has been applied to the bass guitar. Listen carefully to the bass guitar notes and see if you can hear the difference:



What did you notice? In the second half of the clip, with compression applied, the notes sound much more consistent and the overall level seems louder. The bass guitar now sounds less erratic, and the bottom end is generally fuller. Take a look at the waveform of the compressed bass clip and see if you can notice how it has changed:


You can see the loud notes are now quieter, the quieter notes are now louder, and overall the note levels are much closer together, which explains what we heard when listening to the compressed bass.

Conclusion


You should now see how a compressor can be used to even out the levels of notes in a performance - in this case evening out bass guitar notes so the low end of the track is more even and full. The compressed bass definitely sounded more professional - well, except for my lack of timing in the performance! I could have adjusted the attack of the compressor to let more of the bass guitar transients come through, which would have made the sound a bit punchier, but as the drums were already quite punchy I wanted to make the bass sound more rounded and even.

Thank you for taking the time to read this post, and please leave a comment if any of the information I've provided is incorrect or if you have any tips from your experience using compression to correct a musical performance. You may find, like me, that your ears are still getting used to hearing the sound of compression, so play the audio clips a few times until you're happy you can hear the difference.

Saturday, 15 February 2014

Submixes

Hi, I am Chris Farrell from Western Australia. The following lesson is for week 3 of Introduction To Music Production at Coursera.

Today's lesson is about the concept of submixes and how they can be used to organise a mix and to make mixing easier. Some mixes will have a large number of tracks, and using submixes allows related tracks to be routed so that they can be controlled with one fader in the mixer.

Mixing without submixes

Take a look at this diagram which shows the signal flow if I was recording a simple band - I have a number of drum instruments on individual tracks, as well as a bass guitar, acoustic guitar and a vocal. The output of all these tracks is routed to the master output of my DAW.



During mixing, I've adjusted the volume fader of every single track until the balance is just right to my ears. But then what happens if I decide that the drums are too loud?

Because all the tracks are routed to the master out, I'd have to go through and lower the volume fader of each of the drum tracks to make the drums quieter. Not only is this time-consuming, but there's also the risk I could ruin the balance of my drum kit if my DAW doesn't allow me to group faders.

A better way would be to allow my drums to be controlled by one volume fader, while retaining the relative volumes of each drum track. This can be achieved in a DAW by creating a submix.

Submixes

Creating a drums submix can be achieved by creating a new bus track in a DAW and then routing the output of each drum track channel to the bus, which I would label as "Drums submix" or "Drum bus". The bus would be routed to the master output like the other tracks.
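As a mental model, here is a minimal sketch in Python of how a DAW sums tracks through a bus. The Channel and Track classes and their fader maths are my own illustration, not any DAW's actual API: each fader is a dB value converted to a linear gain, and a bus simply sums its inputs and applies its own fader on the way to the master.

    class Channel:
        """A bus with a volume fader and a list of routed inputs."""
        def __init__(self, name, fader_db=0.0):
            self.name = name
            self.fader_db = fader_db
            self.inputs = []                 # channels routed into this one

        def gain(self):
            return 10.0 ** (self.fader_db / 20.0)

        def output(self, i):
            # A bus sums whatever is routed to it, then applies its fader.
            return sum(ch.output(i) for ch in self.inputs) * self.gain()

    class Track(Channel):
        """An audio track: a sample source plus a fader."""
        def __init__(self, name, source, fader_db=0.0):
            super().__init__(name, fader_db)
            self.source = source             # function: sample index -> sample

        def output(self, i):
            return self.source(i) * self.gain()

    # Route two drum tracks to a "Drum bus", and the bus to the master:
    kick  = Track("Kick",  lambda i: 0.5, fader_db=-3.0)
    snare = Track("Snare", lambda i: 0.4, fader_db=-6.0)
    drums = Channel("Drum bus")
    drums.inputs = [kick, snare]
    master = Channel("Master")
    master.inputs = [drums]

    drums.fader_db = -4.0                    # one fader turns the whole kit down
    print(master.output(0))                  # the kick/snare balance is preserved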



Now when I'm mixing I'd only need to control the volume faders of four channels - the drum submix, bass guitar, acoustic guitar and vocals - without affecting the relative volumes of my individual drum sounds. The whole drum kit's level can be controlled with one fader, saving me time while mixing, and it gives me another advantage: I can now apply effects to the whole drum submix. For example, I might want to apply gentle compression to the whole drum kit to glue it together, and I could simply insert a compressor plugin on the drum bus to achieve this. This would be impossible without a submix.

I could organise my mix even further by creating another submix for the individual tom sounds by routing them to a new bus, which would be routed to the drum bus. I could also create a submix for cymbals and hats by routing them to another bus just for these types of drum hits. Now within my drums, I can control the kick, snare, cymbals/hats and toms with four faders, giving me greater flexibility. Meanwhile the overall drum kit level could still be controlled by adjusting the drum bus fader.
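Continuing the sketch above, nesting submixes is just routing one bus into another - the tom tracks here are hypothetical names for illustration:

    # Hypothetical tom tracks routed to a tom bus, which feeds the drum bus:
    tom1 = Track("Rack tom",  lambda i: 0.3)
    tom2 = Track("Floor tom", lambda i: 0.3)
    toms = Channel("Tom bus", fader_db=-2.0)
    toms.inputs = [tom1, tom2]
    drums.inputs.append(toms)                # the toms now ride the drum bus fader

    print(master.output(0))                  # one fader still controls the whole kit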

Other uses of submixes

Submixes can be used in any situation where you want to group related tracks so that they can be controlled with a single volume fader while mixing. Some examples are:

  • In an orchestral mix, the producer could create a string submix, horn submix and woodwind submix to allow each section to be controlled with a single fader
  • For recording a rock band with multiple guitar parts, a producer could route the lead and rhythm guitars to a guitar submix, so the guitars' level can be controlled with one fader and the guitars can receive the same effects treatment
  • If a producer had recorded a number of background vocalists on individual tracks, these could be routed to a background vocal submix to be controlled with a single fader in a mix.

Conclusion

You should now see the major benefits of using submixes, especially in complex mixes with many tracks. Routing related tracks to a submix allows them to be controlled with a single fader, which not only makes mixing more efficient but also lets those tracks receive the same effects treatment, saving your CPU and providing flexibility. I'd encourage you to open your DAW and investigate how to create a bus channel, route other channels/tracks to the bus, and then adjust the volume fader of the bus channel. You could also experiment with inserting an insert effect on the bus to process all channels routed to it.

Thank you for taking the time to read this lesson, and feel free to leave a comment if any of my lesson is inaccurate, or if you've got any tips for using submixes when mixing. 

Chris

Monday, 10 February 2014

Recording MIDI in Presonus Studio One

Hi, I am Chris Farrell from Western Australia. The following lesson is for week 2 of Introduction To Music Production at Coursera.

Today I will be teaching you how to record MIDI (Musical Instrument Digital Interface) data into a DAW (Digital Audio Workstation). I'll be setting up and recording MIDI into Presonus Studio One, but the steps I am presenting would be similar in most DAWs.

Enough acronyms already, let's jump straight into the lesson!

Connecting a MIDI keyboard to the computer

Before you begin recording MIDI it's worth noting the devices that you'll need to follow this tutorial:
  • An external MIDI keyboard
  • A MIDI in port - your audio interface may have a MIDI in and out port, or you could have a dedicated USB device for this
  • A MIDI cable, connected from the MIDI out of the keyboard to the MIDI in port

Setting up the DAW

Next you will need to set up your DAW to be able to record incoming MIDI from your MIDI keyboard (input device) on one or more MIDI channels. In Studio One this can be achieved by following these steps:
  1. On the Start page, click on "Configure External Devices..."
     
  2. Click the "Add" button

  3. Studio One comes with a number of MIDI device presets, so if your device is available you can select it from the list.

  4. Next choose the MIDI channel(s) and MIDI port that you want to receive MIDI data through.

    I've set up Studio One to receive MIDI data through MIDI channel 1 on "Fireface Midi Port 1", which is the first MIDI in port on my audio interface. I've also ticked the box next to "Default Instrument Input" so that whenever I create an instrument track it will automatically be set up to use my MIDI keyboard as the input device.
  5. To test whether Studio One is successfully receiving MIDI data from the keyboard, use the MIDI monitor located in the transport bar: press some keys on your MIDI keyboard and see if the DAW registers the input.



    The triangle to the left of the MIDI logo will turn orange when MIDI input is being received.
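If you ever want to verify the raw MIDI stream outside your DAW, a few lines of Python with the third-party mido library (my choice here - any MIDI library would do) act as a simple MIDI monitor:

    import mido                              # pip install mido python-rtmidi

    # List the available MIDI input ports, then print incoming messages.
    print(mido.get_input_names())
    with mido.open_input() as port:          # default port; pass a name to choose
        for msg in port:
            print(msg)                       # e.g. note_on channel=0 note=60 velocity=64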

Creating an instrument track

I've now set up a MIDI keyboard, and Studio One will receive input data from the device through MIDI channel 1 of MIDI port 1 - but currently the 0s and 1s are hitting the DAW and going nowhere fast. We need to give them a voice!
  1. Right click on the empty track area and choose "Add Instrument Track"

  2. The track will already be set up to receive MIDI from my keyboard because of the setup I did earlier in the tutorial, and when I hit keys on the keyboard a yellow meter shows the track is receiving note on and off events

  3. To give the track an instrument, choose an instrument plugin from the "Instruments" tab of the browser and drag it onto the instrument track. The instrument GUI (graphical user interface) will open, but you can close it for now.

      

  4. It's good practice to name your tracks, so I've renamed this track to reflect the kind of sound the instrument will be generating.

Prepare for recording

Next I am going to prepare the track for recording by following these steps:

  1. Arm the track for recording - as you can see in the previous picture the track is already armed
  2. Enable the metronome and count-in. In Studio One this is achieved by clicking on the metronome icon in the transport bar, and then clicking on the metronome settings icon. I've set the recording to precount 2 bars, so that recording doesn't start immediately.

  3. Set the recording position in the timeline - in Studio One you can click in the timeline to set the recording/playback position, which I have set to begin at the start of the second bar so you can clearly see where the recording will begin.

  4. Start recording. In Studio One I can start recording by pressing * on the numeric keypad, or by clicking record in the transport bar.

Quantizing a MIDI performance

I can double click on the MIDI clip that I recorded and it will open in the piano roll.


The recording wasn't too bad, but there are definitely some timing issues - particularly the note in the middle, which is quite late and sounds way off to my ears. Timing issues can be fixed by quantizing MIDI to a grid, so in Studio One I select the MIDI notes I want to quantize and then click the Quantize icon in the toolbar - the large letter "Q".


Quantize settings

In the Quantize settings there are quite a few settings I can adjust:

  • Quantize grid note length: as my performance was mostly eighth notes with a couple of 16ths, I've set the grid to 16th notes. If I quantized to an 8th note grid, my 16th notes would become 8th notes.
  • What data will be affected by the quantize? In Studio One you can apply quantizing to note start position, note end position and velocity. You can also apply this as a percentage so the notes aren't all snapped right to the grid, which would sound robotic and have no feel; applying the quantize as a percentage leaves some feel and groove in the performance.
  • Apply : in Studio One click the "Apply" button to quantize the selected notes with the chosen settings.

In this example, I started with a 20% quantize on the note start position, which I applied and then listened to the result. I ended up applying 70% quantize to get the feel right - yes, my timing was that bad!
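Under the hood, a percentage quantize just moves each note part of the way towards the nearest grid line. Here is a minimal sketch of the idea in Python - my own illustration, not Studio One's algorithm - with note positions in beats and a 16th-note grid:

    def quantize(starts, grid=0.25, strength=0.7):
        """Move each note 'strength' of the way to its nearest grid line.

        starts: note start times in beats; grid=0.25 is a 16th-note grid
        (one quarter note = 1 beat); strength=1.0 snaps hard to the grid.
        """
        quantized = []
        for t in starts:
            nearest = round(t / grid) * grid
            quantized.append(t + (nearest - t) * strength)
        return quantized

    # A late note at 2.10 beats is pulled most of the way back towards 2.0:
    print(quantize([0.02, 0.49, 2.10]))      # -> roughly [0.006, 0.497, 2.03]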

Studio One tip: for a quick quantize you can also press ALT + Q on your keyboard to perform a 50% quantize.

Conclusion

That concludes my lesson for this week - you've seen how to set up a DAW like Studio One to receive input from a MIDI device such as a MIDI keyboard, how to record MIDI to an instrument track, and how to quantize MIDI data. I hope this has helped you to understand how different MIDI data is from audio data: with MIDI it is very easy to adjust note timing after recording. You could also adjust note lengths, the notes themselves (transposing) and note velocity, or record MIDI CC (control change) messages, but that is beyond the scope of this tutorial. Your DAW may also have much more advanced quantizing features than I covered here, so I encourage you to look in your DAW's user guide.

I hope you enjoyed reading this as much as I enjoyed making it, and feel free to let me know if any of my information is incorrect or could be improved.

Until next time...
Chris


Saturday, 1 February 2014

Microphone directionality and polar patterns

Hi, I am Chris Farrell from Western Australia. The following lesson is for week 1 of Introduction To Music Production at Coursera.

Today I will be teaching you about microphone directionality and polar patterns, and how they affect what sound is recorded from different directions. A polar pattern is essentially a diagram showing a microphone's sensitivity to sound arriving from different directions, and some of the more common polar patterns include:

  • cardioid (and variations including super or hyper cardioid): with this pattern, the microphone is most sensitive to sound coming from in front of the microphone
  • bidirectional (also called figure 8): with this pattern the microphone is sensitive to sound coming from in front of and behind the microphone, but it will reject sound coming from the sides
  • omnidirectional: with this pattern the microphone will pick up sound from all directions equally.
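These common patterns are all members of the same idealised first-order family, where the pickup gain at angle θ is A + B·cos(θ) with A + B = 1. The short sketch below prints how each pattern responds from the front, sides and rear (the supercardioid coefficients are the commonly quoted ideal values, an assumption on my part):

    import math

    # First-order polar patterns: gain(theta) = A + B*cos(theta), with A + B = 1.
    PATTERNS = {
        "omnidirectional": (1.0, 0.0),
        "cardioid":        (0.5, 0.5),
        "supercardioid":   (0.37, 0.63),     # commonly quoted ideal values
        "figure-8":        (0.0, 1.0),
    }

    for name, (a, b) in PATTERNS.items():
        front, side, rear = (a + b * math.cos(math.radians(t)) for t in (0, 90, 180))
        print(f"{name:16s} front={front:+.2f} side={side:+.2f} rear={rear:+.2f}")

The figure-8 pattern shows full (polarity-inverted) pickup at the rear and none at the sides, while the supercardioid's small negative rear value corresponds to the little rear lobe mentioned later in this post.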

I have created three recordings using a budget condenser microphone, where during each recording I changed the polar pattern setting on the microphone. You will be able to hear how omnidirectional, figure 8 and super cardioid settings on my microphone affect how the sound is recorded from different directions.

Take a look at the diagram below to see how I set up the microphone in relation to a number of sound sources:
  • I stood directly in front of the microphone to record my voice
  • Directly behind the microphone I placed a metronome
  • On the other side of the room away from the microphone I placed a radio playing some music

Omnidirectional pattern


For the first recording I set the microphone to use the omnidirectional pattern. Omnidirectional means from all directions, so recording with this pattern should pick up a lot of ambient sound from around the room.



Take a listen and make a mental note of what you hear. 

Hint: Turn up your volume level so that you can just hear the radio music playing in the background of this recording. This will help you as you analyse all the recordings.



In the recording you can clearly hear the music playing on the radio and the metronome, and my voice doesn't sound as focussed as it should. The recording also sounds noisy, as it is picking up other noises such as my computer's fan. This is all because, with an omnidirectional pattern, the microphone picks up sound from all directions. For a voice recording it would be better if the microphone were more focussed on the direction of my voice.

Figure of 8 (bidirectional) pattern


For the next recording I set the microphone to use a Figure 8, or bidirectional, pattern. Recording with this pattern, the microphone should pick up sound directly in front of and behind the microphone.



Listen to the figure 8 recording and hear how it is different from the first recording.



In the recording with the Figure 8 pattern, the level of the ambient sound and noise is considerably lower, and the metronome and my voice can be heard more clearly. This is because with a Figure 8 pattern the microphone is most sensitive to sounds directly in front of and behind it, while rejecting sound from the sides. This voice recording sounds much better, however it could still be improved!

Supercardioid pattern


For the final recording I set the microphone to use a super cardioid pattern. Cardioid and super cardioid patterns are unidirectional, meaning they will be focussed on picking up sound from directly in front of the microphone. Super cardioid has a narrower pickup to the front of the microphone compared to a cardioid pattern, but will also pick up a little from behind.



Listen to the final recording and hear the difference from the previous recording.



In this final recording, the ambient sound is a little quieter, the level of the metronome has dropped, and the sound of my voice is more focussed. This is because the microphone is rejecting more of the sound from the sides and rear, and is better at picking up sound from directly in front.

With this budget microphone set to a super cardioid pattern, I have managed to isolate the sound of my voice to a reasonable extent without using an isolation booth. You can hear that by using a cardioid or super cardioid pattern it is much easier to isolate sound sources based on their direction – and this applies to recording any sound source in the studio such as individual drum sounds, instruments and vocals. Recording with an omnidirectional pattern is useful to capture ambient sounds such as a busy street, or to capture various sound sources from multiple directions at the same time. A Figure 8 pattern is useful for recording two different sound sources at once, such as two vocalists or instruments - by picking up sound in front of and behind the microphone and rejecting sound to the sides.

Thanks for listening!

I'd love to hear your feedback on my recordings and descriptions, and please let me know if I could have explained anything better or if any of this information is incorrect. I've never recorded a band before, so also feel free to share your tips about using different polar patterns while recording drums, instruments and vocalists.

I haven't run any effects over the audio recordings as I wanted you to be able to hear how the omnidirectional recording has much more ambient noise than the others. In a studio recording situation you would normally try to isolate a vocalist better, and you could also look at applying noise reduction in an audio editor to clear up low level noise. As it is a budget microphone recorded through a budget preamp it would be noisier than more expensive microphones, and I made no attempt to dampen other background noises such as my computer. Oh, and I should have used a pop filter!