Category Archives: Introduction To Music Production

Comparing the graphical interfaces of four different synthesizers

Hello, I am Brandon Fallout from Seattle, WA in the USA. This lesson is for week 6 of Introduction To Music Production at Coursera.org. I will compare the graphical interfaces of four different synthesizers, clearly showing where the Oscillator, Filter, Amplifier, Envelope, and LFO sections are. If you missed last week’s post on Delay effects – The flanger and phaser, you can find it here.

Saving the best for last?

Welcome back, everyone. As some of you know, this is the last week of our Coursera Introduction To Music Production class. It’s been one heck of a class with quite a bit of work due on all our parts. I just wanted to say a final farewell and present you with this last lesson on synthesizer interfaces. Is this the end, you ask? Nay, I say. I may not update this blog weekly like I have been, but I will post new content on here as I discover it. If you’re interested, feel free to follow my posts or drop me a note or comment. Without further ado, onto the lesson!

TAL-NoiseMaker

You can grab yourself a free copy of this great synth by clicking on the image above!

With this synth it’s pretty easy to see where all the main functions are located. In the blue section at the top middle we have the Oscillators, under the OSC1 & OSC2 labels. The Filter is in the red-labeled section in the center. Over in the top right is the green section labeled Master, which contains the Amplifier. The Envelope is located at the center right, labeled ADSR. Last but not least are the LFO sections: you guessed it, top and center left, labeled LFO1 & LFO2.

Oberheim SEM V

This is the Oberheim SEM V synth. It’s one of my personal favorites. You can pick up a copy for 50% off by clicking on its image.

It’s also pretty easy to sort out where everything is on this synth. The two Oscillators, as well as the Filter, sit under the Synthesizer Expander Module label; they are marked VCO1, VCO2 & VCF respectively. The Amplifier is under the section labeled Output. The Envelopes are located toward the bottom of the interface, just above the keys, and the LFOs are at the far left, labeled SUB OSC & LFO2.

Sawer from Image Line

This can be acquired via Image Line’s website. Click image for more details.

Since the above two have been broken down pretty well in terms of location, I’m going to assume you get the idea. For the rest, I’ll simply call out the area of the image where each section lives.

Oscillators: Bottom left

Filter: Center

Amplifier: Top right

Envelopes: Bottom center

LFO: Bottom right

Wasp XT from Image Line

Wasp XT can be picked up from Image Line as well. Click on the image for more details.

This last synth is a little different from the rest. It actually brings 3 oscillators to the table, as well as a way to directly manipulate your envelopes. You’ll need to attach this to a MIDI controller or MIDI channel in order to get sound out of it. Not all synths have keyboard parts!

Oscillators: Top right

Filter: Top left

Amplifier: Bottom right

Envelopes: Dead center

LFO: Center left

Final thoughts

Well class, it’s been a very interesting 6 weeks. If you’re reading this now, I’m assuming you didn’t give up and kept on keeping on. Good for you, that’s the spirit! Whether you’re taking this class for a hobby or something bigger, the key is to not give up. There are tons of resources out there to help you take it to the next level. You can check out Reddit for more information; I am starting up a sub there called r/AllThingsSynthesis. Come check us out and bring your noise to the table.

Above all else, don’t listen to the haters and get your groove on. Thanks for taking the time to read this post. As I said above, this may be the last post for this class, but it is not the last for this blog!

Brandon Fallout


Delay effects – The flanger and phaser

Hello, I am Brandon Fallout from Seattle, WA in the USA. This lesson is for week 5 of Introduction To Music Production at Coursera.org. I will talk about two delay effects, the flanger and the phaser. If you missed last week’s post on Ableton Live 9 – Adding and using a compressor on a track, you can find it here.

The Flanger

The flanger is actually the name of an effects unit that produces a flanging effect. This effect is created by mixing two identical audio signals together, with one of the signals delayed by a small amount that is gradually changed over time, usually staying under 20 ms of delay time. This creates a sweeping comb filter effect.

The comb filter effect consists of peaks and valleys in the resulting frequency spectrum. These peaks and valleys are related to each other in a linear harmonic series. Varying the time delay sweeps them up and down the frequency spectrum, and this is typically controlled via a modulating LFO.
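If it helps to see that in code rather than prose, here is a minimal numpy sketch of the idea: a short delay line swept by an LFO and mixed back with the dry signal. This is just the textbook structure, not any particular pedal or plugin; note that with a fixed delay of D seconds the comb’s peaks would sit at multiples of 1/D, which is the linear harmonic series just mentioned.

```python
import numpy as np

def flanger(x, sr, max_delay_ms=5.0, lfo_hz=0.25, feedback=0.3, wet=0.5):
    """Textbook flanger: mix the input (a float array) with a copy of itself
    delayed by a small, LFO-swept amount (well under the 20 ms noted above)."""
    max_delay = int(sr * max_delay_ms / 1000)
    buf = np.zeros(max_delay + 1)        # simple circular delay line
    y = np.zeros_like(x)
    for n in range(len(x)):
        # The LFO sweeps the delay length, sliding the comb's peaks and
        # valleys up and down the frequency spectrum.
        lfo = 0.5 * (1.0 + np.sin(2 * np.pi * lfo_hz * n / sr))
        d = 1 + int(lfo * (max_delay - 1))
        delayed = buf[(n - d) % len(buf)]
        buf[n % len(buf)] = x[n] + feedback * delayed  # feedback path
        y[n] = (1 - wet) * x[n] + wet * delayed        # dry/wet mix
    return y

# Example: flange a 2-second 440 Hz test tone
sr = 44100
tone = 0.5 * np.sin(2 * np.pi * 440 * np.arange(2 * sr) / sr)
swooshed = flanger(tone, sr)
```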

Here are some good examples of the flanging effect in use. Many big-name bands use this effect for a variety of reasons. One very popular use of a flanger is to get a jet-plane noise or that white-noise rush that’s ever so popular in Trance music.

Credit for image to soundonsound.com
This is a flanger circuit. The delay portion of the circuit applies equally to the entire signal it receives.

The Phaser

What can be said about the flanger in many circumstances can also be said about a phaser. Two of the key differences, however, are the all-pass filters and the type of comb filter it produces. It’s also worth noting that phasers typically exclude devices where the all-pass section is a delay line; those devices usually fall under the flanger category above.

The part of the signal that is fed into the all-pass filter has its amplitude preserved but its phase altered. The amount of change in phase depends on the frequency. Depending on the number of all-pass filters, you can affect several different frequency ranges at once. Once the processed signal gets mixed back in with the original signal, the frequencies that are out of phase cancel each other out. This creates a comb filter.

The comb filter of a phaser, unlike that of a flanger, tends to have unevenly spaced peaks and valleys. These teeth in the signal also tend to be smaller and less numerous throughout the mix, and their exact spacing varies depending on the manufacturer of the plugin or hardware.
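For the code-curious, here is the matching numpy sketch of the classic structure: a chain of first-order all-pass filters whose corner frequency is swept by an LFO, mixed back with the dry input so the phase-shifted frequencies cancel. Again, this is the textbook idea rather than any specific product, and the coefficient line is the standard first-order digital all-pass formula; the stage count, sweep range and LFO rate are made-up defaults.

```python
import numpy as np

def phaser(x, sr, stages=4, lfo_hz=0.5, f_min=200.0, f_max=2000.0, wet=0.5):
    """Textbook phaser: a chain of first-order all-pass filters whose corner
    frequency is swept by an LFO, mixed back with the dry input."""
    y = np.zeros_like(x)
    x_prev = np.zeros(stages)  # previous input of each all-pass stage
    y_prev = np.zeros(stages)  # previous output of each all-pass stage
    for n in range(len(x)):
        # The LFO sweeps the all-pass corner frequency between f_min and f_max.
        lfo = 0.5 * (1.0 + np.sin(2 * np.pi * lfo_hz * n / sr))
        fc = f_min + (f_max - f_min) * lfo
        t = np.tan(np.pi * fc / sr)
        a = (t - 1.0) / (t + 1.0)          # first-order all-pass coefficient
        s = x[n]
        for k in range(stages):            # run the sample through each stage
            out = a * s + x_prev[k] - a * y_prev[k]
            x_prev[k], y_prev[k] = s, out
            s = out
        # Where the shifted copy is out of phase with the dry signal,
        # frequencies cancel, carving the phaser's unevenly spaced notches.
        y[n] = (1 - wet) * x[n] + wet * s
    return y
```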

The phaser effect is popularly used with the electric guitar to create otherworldly sounds. You can find a lot of phaser usage in styles such as Reggae, Funk and even ’80s Big Rock.

Credit for image goes to soundonsound.com
This is a four-stage phaser. It utilizes four all-pass filters to delay different frequencies in the original signal by varying amounts.

Dry/Wet, feedback and the LFO

By utilizing the feedback function, you route part of the output signal back into the input. This produces a resonance effect that can further enhance the intensity of the peaks and valleys in the signal.

The dry/wet setting in your effects unit shifts the output from output = input at 100% dry to output = fully processed signal at 100% wet, and anywhere in between. For example, a 50/50 dry/wet mix would combine 50% of the original signal with 50% of the delayed or processed signal.
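In code terms, that mix is just a weighted sum. A toy illustration, not any particular unit’s parameter mapping:

```python
def dry_wet_mix(dry, processed, wet=0.5):
    """wet = 0.0 returns the input unchanged (100% dry);
    wet = 1.0 returns the fully processed signal (100% wet)."""
    return (1.0 - wet) * dry + wet * processed
```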

The LFO in both the flanger and the phaser works as a modulator. The flanger’s LFO modulates the delay time to create a sweeping sensation, while the phaser’s LFO sweeps those notches and peaks up and down the frequency range by tweaking the all-pass filters. This can create a spacey whoosh and swirl-like sound.

That’s all for today, kids. Thanks for reading!

Credit goes to Wikipedia and soundonsound.com for research and base material.

Ableton Live 9 – Adding and using a compressor on a track

Hello, I am Brandon Fallout from Seattle, WA in the USA. This lesson is for week 4 of Introduction To Music Production at Coursera.org. I will demonstrate the use of a compressor using a piece of my own music in the Live 9 DAW environment. If you do not have Ableton Live 9, you can download the demo here. If you missed last week’s post on how to make submixes in Ableton Live 9, you can find it here.

Adding a compressor to a track

In the video below, I’ll show you how to simply add in a generic compressor to your Ableton Live 9 track. It’s really pretty easy but I’ll show it just in case you’re not familiar with Live 9’s layout.

Since my working project for this class doesn’t actually use any live instrument recordings, just MIDI data, I have made a copy of my MIDI track and randomized the velocity data for each MIDI note. This is to emulate a live session where the velocity may differ per note on a real synthesizer. I have already recorded that data to an audio track that we will be putting the compressor on. Here is the video showing how to add a compressor to an audio track. I’ll also play the track so you can hear what it sounds like before I add the compressor.

Tweaking settings in the compressor to get the desired effect

As you can see and hear from the above video, some of the notes are a lot louder than the rest. I will set the threshold around the lower parts of the music to try and level out the sound of the track. The key is to use your ear and adjust to what you feel sounds best. I want the compressor to catch the signal as soon as it goes over the threshold, so I will be leaving my Attack at .01 and setting my Release to Auto, which adapts based on the incoming audio signal. I will set the Ratio to 4:1, as there is a pretty high variance in the sound level. In addition, I will set the Knee to help smooth out the transition and adjust the gain to an appropriate level. Last but not least, I will set the Dry/Wet to 100% (not shown in the video below), as I don’t want any of the original uncompressed signal coming through. Notice where it says GR in the video? That is the gain reduction meter, visual feedback showing the compressor in action.
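If you like numbers, here is a rough sketch of the static gain curve that threshold, ratio and knee describe. The threshold and knee values below are made up for illustration, and this is not Ableton’s exact algorithm, just the common soft-knee formula:

```python
def compressor_gain_db(level_db, threshold_db=-24.0, ratio=4.0, knee_db=6.0):
    """Output level in dB for a given input level in dB (static curve only;
    attack and release govern how quickly a real compressor reaches it)."""
    overshoot = level_db - threshold_db
    if overshoot <= -knee_db / 2:
        return level_db                            # below the knee: unchanged
    if overshoot >= knee_db / 2:
        return threshold_db + overshoot / ratio    # fully compressed by the ratio
    # inside the knee: quadratic blend for a smooth transition
    return level_db + (1 / ratio - 1) * (overshoot + knee_db / 2) ** 2 / (2 * knee_db)

for level in (-36, -24, -12, 0):
    out = compressor_gain_db(level)
    print(f"in {level:6.1f} dB -> out {out:6.1f} dB (GR {level - out:4.1f} dB)")
```

With these example settings, a note at -12 dB gets pulled down by 9 dB while anything below about -27 dB passes untouched, which is exactly the levelling effect described above.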

While this helped smooth out the level of the sound, it would have been a lot better if I had recorded it right in the first place. I know, I know, I messed up the original track intentionally, but you get my point. Without further ado, here is the video.

Hearing the difference in the mix

For my last trick, I will show you a video with it all mixed together while I add in more tracks and enable/disable the compressor.

I hope you have enjoyed this little adventure and hopefully have learned something. Thanks for reading and enjoy the video!

Brandon Fallout

Ableton Live 9 – How to make submixes

I am Brandon Fallout from Seattle, WA in the USA. This lesson is for week 3 of Introduction To Music Production at Coursera.org. I will be going over creating a submix in the Live 9 DAW environment. If you do not have Ableton Live 9, you can download the demo here. If you missed last week’s post on recording MIDI and quantization in Ableton Live 9, you can find it here.

Submixes, what’s the big idea?

Submixes are primarily used to simplify the task of mixing a selection of tracks. Additionally, if you have an effect such as a compressor, you can apply it to the submix as a whole instead of to each individual track. This saves a bit of computational resources when dealing with effects; if you have an aging computer or a lot of VSTi’s, effects, etc., it can really help smooth out your performance. It also gives the added benefit of controlling one fader for all tracks assigned to that submix!

Help, I missed the buss!

As it turns out, Ableton Live 9 doesn’t actually have submix busses in the traditional sense but rather allows you to group tracks together to achieve the desired effect. Another method of creating a submix is to create an empty audio track, rename it something such as submix or buss, and route other tracks to it. I will cover both methods in the videos below. While there are a few other methods for buss emulation, they tend to be a bit different from a submix and are outside the scope of this post.

Grouping the Tracks

This is considered the easiest method to set up.

  1. Select the first track that you want in the submix.
  2. Ctrl-click to select other tracks until you have all the ones you want in your selection.
    1. If all of your tracks are already adjacent to one another, you can select the first one, hold Shift, and select the last one. This will grab everything from the first track to the last track in the selection.
  3. Right-click and select Group Tracks to group them. A new group track will appear with the other tracks assigned to it.
  4. Make sure to rename the group track appropriately.

Here is a video outlining the steps above, plus a bit showing them in use.

Routing tracks to an empty audio track

You can also create a submix by creating a new audio track and routing other tracks to it. Something important to note: if you have the Live Lite edition of the software, using this method will eat into your total number of tracks. If you use the grouping method above, however, the submix won’t count against you.

  1. Create a new audio track and rename it to buss, submix or some such.
  2. Enable the I/O routing section.
  3. In the I/O section of each of the tracks to be submixed, change Audio To from ‘Master’ to the submix track created in step 1.
  4. In the I/O section of the submix track, set Audio From to No input and Monitor to In.

Here is a video outlining the steps above, plus a bit showing them in use.

Bonus material

A fun fact: the track I demoed in the videos was completely based on last week’s horrible MIDI performance. I used that recorded MIDI data to play everything you heard in the videos. You can check out last week’s MIDI recording train wreck here.

As an added bonus, I will show you how to add an effect or two to a submix and let you hear the end result.

Enjoy, and thanks for reading!

Brandon Fallout

Adding a software instrument, recording MIDI and quantization in Ableton Live 9

I am Brandon Fallout from Seattle, WA in the USA. This lesson is for week 2 of Introduction To Music Production at Coursera.org. I will be going over adding a software instrument to the Live 9 DAW environment, as well as recording MIDI and showing how to quantize it. If you do not have Ableton Live 9, you can download the demo here. If you missed last week’s post on sound frequencies, you can find it here.

Preparing the tracks

Here is what Ableton Live 9 looks like when you start a new project. As you can see, there are 2 MIDI and 2 Audio tracks. You normally start in the “Session” view, but you can easily change to the “Arrangement” view by pressing the Tab key on your keyboard or clicking on the grey circle with the three horizontal lines in the top right-hand corner of Live 9.

Once the “Session” view is applied, go ahead and remove all but one audio track. It should be noted that this is not a requirement but rather a simplification of the workspace. You could just as easily leave those tracks in if you felt you needed them for later.

Adding the instrument

Next up is adding in the VST. Make sure your “Browser” view is showing and go into the “Plug-ins” selection. If you have VSTs already installed on your machine, you should see a few listed in the right-hand column next to the “Browser” menu. For this lesson, I will be using the Analog Lab VST that came with my Arturia MiniLab MIDI controller. Simply grab the VST in question and drag it over to the “Mixer Drop Area” of Live 9. As you can see in the video, once I drop the VST under the first audio track, it adds a second track for my MIDI instrument.

Setting the click and the count-in

This part is pretty easy once you know where to look. In the video below, we’ll locate the click (or metronome) and learn how to enable it, as well as how to set the count-in. Take note: down in the left-hand corner of Live 9 lives the “Info View.” When you hover your mouse over different areas of the DAW, it will display information pertaining to that location or setting.

As you can see from the video above, simply click on the metronome button to enable it. You can either right-click or use the down arrow on the same button to access the count-in function. You can set it to an amount that you feel comfortable with, from 0 to 4 bars.

Recording the MIDI data

Now for the MIDI data recording. As you can see from the previous video, the track “Arm” button is already armed for the new instrument track. If yours is not, please make sure to arm it before moving ahead. I am also going to change the “Input Type” of the target track to, in my case, Arturia MiniLab, to isolate that track to that specific controller type. If you are using just a VST with no hardware component, you may wish to change this to the computer keyboard or use the “Config Input” option to sort out how you will be playing your virtual instrument.

It’s a good idea to click on the little wrench icon called “Plug-in Edit” for your VST. It’s down in the left-hand corner next to the “Info View” box. Once the VST GUI is loaded, you should be able to change parameters and pick the samples you are going to use. This is solely dependent on your VST and may not be included in the plugin of your choosing.

Now you’re ready to record that MIDI data into your MIDI track! Make sure you have your metronome turned on and your count-in set the way you want, and that you have also hit the square stop button twice so that you start off at the beginning of the track at bar 1.1.1.

Quantization

Now, time to quantize that horrible MIDI recording. I could have set this to auto quantize from the “Edit” menu under the “Record Quantization” option but opted to do this one by hand for the sake of this lesson.

Here I’m going to set the grid to 1/16th, fold the grid view to show only the notes I am using in the piano roll, edge-edit the track to a good starting and ending point, and then, last but not least, apply the quantization. I will be using 100% for my quantize settings, as this will mainly be for an electronic style of music. Generally speaking, the tighter the better for EDM.
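If you are curious what the quantize command is actually doing under the hood, the core idea fits in a few lines of Python. This is the general concept, not Live’s actual code; the strength argument is what the 100% setting controls:

```python
def quantize(note_starts, grid=0.25, strength=1.0):
    """Move MIDI note start times (in beats) toward the nearest grid line.
    grid=0.25 beats is a 1/16 note in 4/4; strength=1.0 means 100%."""
    quantized = []
    for t in note_starts:
        nearest = round(t / grid) * grid                # closest grid line
        quantized.append(t + (nearest - t) * strength)  # move toward it
    return quantized

# A sloppy performance, in beats:
print(quantize([0.02, 0.27, 0.46, 0.77]))  # -> roughly [0.0, 0.25, 0.5, 0.75]
```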

To pull up the panel in Live 9 to do just that, simply double click the MIDI track. Another option if you already have the track selected is to press ctrl + shift and it will take you to the same location.

I didn’t include all of the edits that I made to this track in the video due to file size and time limitations on YouTube. The above should cover enough to get you started and any more on editing is outside the scope of this lesson.

Here is that not-so-great performance before quantization.

Bonus: Recording MIDI data to an audio track

An important thing to remember is that the data in that track we just made is only MIDI data and not actual audio data. In order to turn it into a full-fledged audio track, we’ll need to record it to a new track. This is the reason we saved the first Audio 1 track in the set.

I’m going to go ahead and skip the full recording how-to and just show you in the video. It’s very similar to what we have already been over. Just arm the audio track, make sure your bar is at 1.1.1, and go ahead and turn off the metronome as it’s not really needed here. Set up your audio track to grab from your MIDI track and hit record.

And here it is after. Amazing what a bit of editing can do!

So long and thanks for all the fish!

Seriously though, if you have made it this far, thank you for reading my blog post. I hope you found it informative if nothing else. Please feel free to leave a comment at the bottom with your thoughts or any corrections you feel should be made.

Have fun and rock on,

Brandon Fallout

Audio frequency, what’s it all about?

Hello, I am Brandon Fallout from Seattle, WA in the USA. Welcome to my newly minted blog. This lesson is for week 1 of Introduction To Music Production at Coursera.org. I will be going over audio frequencies and how they apply to different hearing ranges.

Can you hear that?

An audio frequency is a periodic vibration that oscillates within the audible range of the average human. It is generally agreed that this range runs from 20 Hz up to 20 kHz, and frequency is also the property of sound that most determines pitch. It is interesting to note that the term audio frequency is based on the human hearing range and not that of other species. This could be useful to know if composing pieces for specific species such as elephants (a slightly lower range, at 16 Hz–12 kHz), birds (parakeet: 200 Hz–8.5 kHz) or whales (beluga: 1 kHz–123 kHz).

20 Hz to 20 kHz (Human Audio Spectrum)

Frequencies and descriptions

Audio frequency

Above is a diagram of frequencies with descriptions from Wikipedia, tailored to the human hearing range. The 20 Hz – 20 kHz figures can change based on age or possible hearing loss due to prolonged exposure to high-decibel noise. Such loss would result in higher frequencies not being heard by the listener.

Final thoughts

Putting this blog together was an interesting learning experience and seems like the perfect medium to get musical ideas across to you, the reader.

While doing some research about audio frequencies, I thought it interesting that the information provided by various sources was based solely on human hearing. I know it seems a bit odd to think outside of that parameter when working on a composition, but why not? I have heard of people writing music for specific species of animals and thought it might be neat to explore what effects music would have on other animals if it were written with their hearing range in mind instead of our own. Just thought I would point that out, as it’s an easily overlooked subject.

I have also embedded a YouTube link that goes through the 20 Hz – 20 kHz range. I was surprised to find out that my personal hearing range is around 100 Hz to a bit over 15 kHz. I’ll have to test this again in an isolated environment with some good headphones to see if those numbers change. I find it’s always interesting to learn something new about oneself.
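If you want to run the same test yourself without YouTube’s audio compression getting in the way, here is a small Python sketch, assuming you have numpy and scipy installed, that writes a one-minute 20 Hz to 20 kHz sweep to a WAV file. Keep your volume modest, especially toward the top of the range.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import chirp

sr = 44100                      # CD-quality sample rate
duration = 60.0                 # one-minute sweep
t = np.linspace(0, duration, int(sr * duration), endpoint=False)

# A logarithmic sweep spends equal time per octave, which matches how we
# perceive pitch much better than a linear sweep does.
sweep = 0.3 * chirp(t, f0=20.0, t1=duration, f1=20000.0, method="logarithmic")

wavfile.write("sweep_20hz_20khz.wav", sr, (sweep * 32767).astype(np.int16))
```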


Until next time

If you have made it this far then you have read my small blog in its entirety. Thank you for taking the time to do so. Please feel free to leave a comment if you feel compelled to do so, critiquing or otherwise.