Basics of Sound Engineering



Index

1. Frequency
2. Equalizers
3. The Microphone
4. Audio Reverberation
5. Basics of Audio Compression



1. Frequency

Frequency is the number of occurrences of a repeating event per unit of time. In audio, the term usually refers to the human audible range, which extends from 20 Hz to 20 kHz (20,000 Hz).

Shown above is an X-Y graph of the human audible range. As we can see, the range starts at 20 Hz and ends at 20 kHz, the limits of human hearing. Ideally this graph would be flat, representing the relationship between the audible frequency range and its sound pressure level, which is measured in decibels (dB). Sound pressure level (SPL) is the local pressure deviation from the ambient atmospheric pressure caused by a sound wave. The level arriving at a given point from a source can be estimated with the inverse square law (1/r²): each time the distance from a point source is doubled, the level drops by about 6 dB, and vice versa. For example, if a point source measures 120 dB SPL at 1 meter, which is roughly the threshold of pain, then at a distance of 40 meters the level works out to about 87.96 dB SPL. This calculation is part of the daily work of live sound reinforcement engineers, who have to make sure that the last person in the audience hears the sound with both good quality and sufficient volume, while the first rows do not suffer fatigue or hearing damage during the event.
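As a quick check of the numbers above, here is a minimal Python sketch of the inverse square law calculation. The 120 dB reference level and the 1 m reference distance come from the example; the function name is purely illustrative.

```python
import math

def spl_at_distance(spl_ref_db: float, ref_distance_m: float, distance_m: float) -> float:
    """Estimate SPL at a given distance from a point source using the inverse square law.

    Every doubling of distance reduces the level by about 6 dB
    (20 * log10(2) ~= 6.02 dB).
    """
    return spl_ref_db - 20.0 * math.log10(distance_m / ref_distance_m)

# Example from the text: 120 dB SPL measured at 1 m, listener at 40 m.
print(round(spl_at_distance(120.0, 1.0, 40.0), 2))  # -> 87.96 dB SPL
```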


Frequencies are also the main reason different instruments sound different. Doubling the frequency keeps the same musical note but raises it an octave in pitch, and the resonances of each instrument's body further shape its characteristic sound; this is why we hear each instrument with its own timbre rather than all sounding alike. To work as a professional sound engineer, whether in live events or in studio recording, mixing, and mastering, one should know the basic frequency ranges of the instruments one will face. Therefore, this article covers the frequency ranges of the instruments in a basic band: drums, four-string bass guitar, electric guitar, keyboard, and vocals. The drum kit includes many pieces, such as the bass drum, snare, hi tom, mid tom, low (floor) tom, and cymbals, which vary too much to give a single typical range, so each is treated separately below.



Bass drum: The bass drum is normally the lowest-pitched instrument in a band. Its frequency certainly varies with its size, the type of wood, and the type of head on it, but to get a good, boomy sound one should work roughly from 50 Hz up to 200 Hz at most, taking care that it does not overlap with the bass guitar.

Snare: Working with the snare's frequencies is a bit tricky, because we have to shape both the low and high ranges to bring out the two sounds the snare produces, the fat one and the crispy one. The body of the snare sits roughly between 300 Hz and 800 Hz, while the crispy attack can be worked around 5 kHz to 6 kHz.

Hi tom: The hi tom is the highest-pitched of the toms. It is mounted above the bass drum, on the left for a right-handed drummer. Its dominant range is around 300 Hz to 450 Hz.

Mid tom: The mid tom is the next highest. It sits next to the hi tom, above the bass drum, and its range falls between the hi tom and the floor tom, at roughly 200 Hz to 350 Hz.

Floor tom: The floor tom is the lowest of the toms. It stands on the right side of a right-handed drummer, with a range of about 100 Hz to 250 Hz.

The bass guitar is a stringed instrument that forms part of the rhythm section. A standard four-string bass with 24 frets covers a fundamental range of roughly 40 Hz to 400 Hz: the open low E string is 41.2 Hz, and the highest note, G at the 24th fret of the G string, is 391.995 Hz.
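The note frequencies quoted above follow from twelve-tone equal temperament, in which each semitone multiplies the frequency by 2^(1/12) and each octave doubles it (which is why doubling a frequency gives the same note an octave higher). A minimal sketch, assuming the standard A4 = 440 Hz tuning reference:

```python
A4_FREQ = 440.0          # tuning reference (assumption: A4 = 440 Hz)
A4_MIDI = 69             # MIDI note number of A4

def note_frequency(midi_note: int) -> float:
    """Frequency of a MIDI note in twelve-tone equal temperament."""
    return A4_FREQ * 2.0 ** ((midi_note - A4_MIDI) / 12.0)

# Low E of a four-string bass (E1, MIDI 28) and the G two octaves above the
# open G string (G4, MIDI 67) -- the top note of a 24-fret bass.
print(round(note_frequency(28), 1))   # -> 41.2 Hz
print(round(note_frequency(67), 3))   # -> 391.995 Hz
```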



The keyboard/piano stands apart from the rest: it has one of the widest frequency ranges of any instrument. A full-size piano, a little over seven octaves, spans roughly 27.5 Hz to 4.2 kHz (4186.01 Hz to be exact). The instrument itself was invented by Bartolomeo Cristofori around 1700, although its tuning is rooted in the frequency ratios first studied by Pythagoras.

The electric guitar is a six-string instrument pitched higher than the electric bass. Its strings are thinner, so they vibrate faster when plucked than bass strings do, producing higher frequencies. The electric guitar's fundamental range runs from about 80 Hz to 1.3 kHz (1320 Hz): the lowest E is around 82 Hz, and the highest E, at the top of the neck, is about 1320 Hz (1.32 kHz).



Last but not least is the vocal, which depends on the human vocal cords. The fundamentals of the human voice range between roughly 80 Hz and 1100 Hz (1.1 kHz).

All of the instrument frequencies above are fundamentals. Each note an instrument plays produces its fundamental frequency plus a series of even and odd harmonics, called overtones.
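To make the idea of fundamentals and overtones concrete, the sketch below builds a one-second tone from a 110 Hz fundamental plus its first few harmonics at decreasing amplitude; the specific frequency, harmonic count, and use of NumPy are illustrative choices, not anything prescribed by the text. Different harmonic balances are a large part of what gives each instrument its character.

```python
import numpy as np

SAMPLE_RATE = 44100          # samples per second
DURATION = 1.0               # seconds
FUNDAMENTAL = 110.0          # Hz (an A note), chosen only for illustration

t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE

# Fundamental plus the 2nd-5th harmonics, each quieter than the last.
tone = np.zeros_like(t)
for n in range(1, 6):
    tone += (1.0 / n) * np.sin(2.0 * np.pi * FUNDAMENTAL * n * t)

tone /= np.max(np.abs(tone))  # normalise so the result does not clip if played back
```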



2. Equalizers

An equalizer is a set of filters that lets the user change the frequency response of a signal (control its tone). It is commonly used to bring up or reduce certain frequencies, or to shape a new sound out of recorded material.

-The basic components: You can think of an EQ as a set of filters under your control, allowing some frequencies to pass while the rest do not. There are three main types of filters; a short code sketch of all three follows their descriptions below.

-Low pass filter: A filter that lets all low frequencies pass and attenuates the high frequencies above a certain cutoff point.

-Band pass filter: This can be thought of as a low pass and a high pass filter working at the same time, allowing only the region of frequencies between the two selected points to pass.



-High pass filter: A filter that allows only high frequencies to pass and attenuates the lower frequencies that fall below the cutoff point.
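Here is the promised sketch of the three basic filter types, using SciPy's standard Butterworth designs; the library choice, filter order, and cutoff frequencies are assumptions made purely for illustration.

```python
import numpy as np
from scipy.signal import butter, sosfilt

SAMPLE_RATE = 44100  # Hz

# Low-pass: keep everything below roughly 1 kHz.
lp = butter(4, 1000, btype="lowpass", fs=SAMPLE_RATE, output="sos")

# High-pass: keep everything above roughly 100 Hz.
hp = butter(4, 100, btype="highpass", fs=SAMPLE_RATE, output="sos")

# Band-pass: keep only the region between 200 Hz and 2 kHz.
bp = butter(4, [200, 2000], btype="bandpass", fs=SAMPLE_RATE, output="sos")

# Apply one of them to a test signal (here, one second of white noise).
noise = np.random.randn(SAMPLE_RATE)
filtered = sosfilt(lp, noise)
```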

-Shelving filters: Another type of filter, the shelving filter, cuts or boosts all frequencies equally above or below a chosen cutoff point. There are two kinds: high shelving and low shelving.

-High shelving: Cuts or boosts the frequencies in the high range.



-Low shelving: Cuts or boosts the frequencies in the low range.

Shelving filters are commonly used when the frequency content of two instruments overlaps.

-Here there is masking, or overlap, between the guitar and the bass: the guitar track carries more low-frequency energy than it should, and the bass track carries extra high-frequency energy. So we apply a high shelf to the bass to cut the extra highs, and a low shelf to the guitar to cut the extra lows on that track. After using the shelving filters, the frequency response graph looks like this:



-Peaking filter: Used to boost or cut a specific frequency area, for example to add more presence to a vocal.
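A peaking band like this is usually implemented as a biquad filter. The sketch below uses the widely known Robert Bristow-Johnson "Audio EQ Cookbook" coefficients, an outside reference rather than something described in this text, to boost or cut around a chosen centre frequency with a given Q.

```python
import math

def peaking_biquad(fs: float, f0: float, gain_db: float, q: float):
    """Return normalised biquad coefficients (b, a) for a peaking EQ band."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)

    b0 = 1.0 + alpha * A
    b1 = -2.0 * math.cos(w0)
    b2 = 1.0 - alpha * A
    a0 = 1.0 + alpha / A
    a1 = -2.0 * math.cos(w0)
    a2 = 1.0 - alpha / A

    # Normalise so a0 == 1, as most DSP routines expect.
    return [b0 / a0, b1 / a0, b2 / a0], [1.0, a1 / a0, a2 / a0]

# Example: +4 dB of "presence" around 3 kHz with a fairly narrow band.
b, a = peaking_biquad(fs=44100, f0=3000, gain_db=4.0, q=2.0)
```

The returned coefficients can be applied to audio with any standard IIR routine, for example scipy.signal.lfilter.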

Now that we know the types of filters, which are the building blocks of any equalizer, we can look at the three main types of equalizers: graphic, parametric, and sweepable (semi-parametric). I will discuss each of them separately.

-Graphic equalizer: A graphic EQ is a bank of peaking filters tuned to fixed frequencies, with the amount of positive or negative gain marked beside each control in decibels. In most graphic EQs, cutting or boosting one frequency also affects the neighbouring frequencies. Graphic EQs are commonly used in live sound because they are easy to use.



-Parametric EQ: Parametric EQs are multi-band variable equalizers that give you almost full control over the three primary parameters: amplitude, centre frequency, and bandwidth (Q), which can be narrowed or widened. They allow more precise adjustments to the sound, are commonly used in both recording and live performance, can be sold as standalone units, and use sweepable filters.

-Semi-parametric (sweepable) EQ: This lets the user control the amplitude and frequency, but the bandwidth around the centre frequency is preset; in some cases you can only choose between a wide and a narrow preset bandwidth.



In conclusion, there are many kinds of equalizers (including software plug-ins). The most important thing is knowing when to use them, so we do not make the mix sound strange. We should either use them in the right place or not use them at all until we know when and how. People tend to treat EQs and compressors as mandatory in any mixing process, but in fact you do not decide; the sound makes the decision. It is not math, it is music, with feeling and meaning: listen to it carefully, feel it, and only then decide whether you need EQ at all. Do not sit down at the console already intending to use certain plug-ins and effects; it is not us who decide, it is the project we are working on and its nature.



3. The Microphone

The microphone is the main input of any sound source into a sound system. The idea behind microphones is the same idea as the ear itself: a microphone is an acoustic-to-electric transducer that converts sound from acoustic energy into an electrical signal that can be carried over cables. One of the earliest modern microphones, the condenser microphone, was invented by E. C. Wente at Bell Labs in 1916. As mentioned, the microphone works much like the ear, which is composed of three parts. The outer ear's main role is to capture sound and send it down the external auditory canal, which ends at the eardrum. The eardrum is the key element of hearing: when the sound collected by the outer ear strikes it, it vibrates, and the inner ear converts these vibrations into signals that travel to the brain through nerve cells. Microphones work in much the same way. Every microphone has a main component called an element or capsule, whose role is to pick up the sound from its source and pass it to the converters inside the microphone, which play the part of the nerve cells in the ear. There are different types of microphones, and each type has its own characteristics and components that differ substantially from the others.



The Condenser Microphone:

The condenser microphone, as mentioned, was created at Bell Labs in 1916 and is the primary microphone in any recording studio. It is built around two thin plates that receive the sound waves, with a voltage held between them. The thinner of the two plates acts as the diaphragm: when struck by sound waves it vibrates, changing the distance between the plates and therefore the capacitance between them. This changing capacitance produces a small changing voltage, which is the signal that travels out of the microphone to whatever device it is connected to. Because of this, a condenser microphone always needs electricity to operate, which is why it requires phantom power when connected to a mixer. Phantom power is simply 48 volts supplied from the mixing console to the microphone over the microphone cable. Other condenser microphones carry batteries inside instead, most often AA batteries.

The Dynamic Microphone:

Unlike the condenser microphone, the dynamic microphone is the better choice on stage at a live concert, as it is tougher and can handle shocks. It is also less sensitive than the condenser microphone and captures less fine detail through its capsule. The dynamic microphone is based on a simple idea: a coil and a magnet. The diaphragm is attached to a coil that sits around a magnet. When sound waves vibrate the diaphragm, the coil moves back and forth through the magnet's field, and this electromagnetic induction generates a current that travels down the cable to the device attached to the microphone. A dynamic microphone can handle higher sound pressure levels than a condenser, and it needs no batteries or phantom power, since its current comes entirely from the coil moving through the magnetic field.

The two types above are the most commonly used microphones in studios and at live concerts, but there are others, such as piezoelectric, fiber-optic, and carbon microphones. All of these have different frequency responses, which means each microphone, even within the same type, captures certain frequencies better than others depending on what it is designed for. Some microphones are made for bass drums, for example, and need to capture low frequencies rather than high ones; such a microphone would not suit a violin, which calls for a strong high-frequency response. Moreover, every microphone has a polar pattern, which is defined as the directionality with which the microphone captures sound. There are several common patterns.

Shown above are the most common microphone polar patterns. The omnidirectional pattern lets the microphone capture sound from a full 360° around it; it is found in the lavalier (lapel) microphones used in talk shows and the body microphones used in on-location film recording. The subcardioid pattern sits between cardioid and omnidirectional: it is almost omnidirectional but picks up more from the front and sides than from the rear. The cardioid pattern, the main pattern of most microphones, captures sound from the front and sides while rejecting the rear; it is widely used in professional studio recording and live concerts because its main pickup point is the front. It is called cardioid because the pattern looks like an inverted heart, the name coming from "cardio", meaning heart. The supercardioid pattern is like the cardioid but with a small rear lobe, as shown in the picture, and the hypercardioid has the same character but picks up a little more from the rear. The bi-directional pattern, also called figure-8, is used for conversational studio recording: two vocalists can sit facing each other and be recorded equally from the front and rear of the microphone, with no pickup from the sides. Last but not least, the shotgun pattern captures minimal sound from the sides and rear and picks up mainly from the front within a very narrow angle. It is used in shotgun or boom microphones, which are mainly used in on-location film recording to capture dialogue, with the microphone pointed directly at the actor's mouth.
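The patterns described above have simple mathematical shapes: an omnidirectional microphone has constant sensitivity in every direction, a cardioid follows 0.5 · (1 + cos θ), and a figure-8 follows |cos θ|. A small sketch that plots them, assuming Matplotlib is available:

```python
import numpy as np
import matplotlib.pyplot as plt

theta = np.linspace(0, 2 * np.pi, 720)

patterns = {
    "Omnidirectional": np.ones_like(theta),
    "Cardioid": 0.5 * (1 + np.cos(theta)),
    "Figure-8": np.abs(np.cos(theta)),
}

fig, axes = plt.subplots(1, 3, subplot_kw={"projection": "polar"}, figsize=(9, 3))
for ax, (name, r) in zip(axes, patterns.items()):
    ax.plot(theta, r)
    ax.set_title(name)
    ax.set_yticklabels([])  # hide radial tick labels; only the shape matters here
plt.tight_layout()
plt.show()
```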



4. Audio Reverberation

Reverberation is the way sound waves reflect off many surfaces before reaching your ear: the reflected sound takes longer to arrive at the listener and gets quieter on the way. As shown in the picture, the listener hears the sound directly from the original source, followed by many echoes of it.

Artificial reverb was first used in music in 1947, when Bill Putnam used his studio's bathroom as the first reverb chamber.

Reverb chamber: A room built with non-parallel surfaces, coated with shellac to make them reflective. Usually more than one microphone is installed in the room to pick up its reflections.



Basic simulation of a room:

Early reflections: These are the first thing the listener hears after the direct signal; they take a longer path than the direct sound.

Pre-delay: The time between the arrival of the direct signal at the listener and the start of the reverb effect. This parameter appears in many digital reverb effects and is measured in milliseconds.

Reverb time: The time between switching a sound source off and the level of the reverb it produced dropping by 60 dB; it is also called RT60.

Common parameters:

o Early reflections: These let the human brain identify the size of the room, so they are a very important part of the reverb effect if simulating a room is the goal.
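These parameters map directly onto even the simplest digital reverb. The sketch below is not any particular commercial algorithm, just a pre-delay followed by a single feedback comb filter, with the feedback gain chosen so the tail decays by 60 dB over the requested RT60; all parameter values are illustrative.

```python
import numpy as np

def simple_reverb(dry: np.ndarray, fs: int, pre_delay_ms: float = 20.0,
                  rt60_s: float = 1.5, comb_delay_ms: float = 50.0) -> np.ndarray:
    """Very rough room simulation: a pre-delay feeding one feedback comb filter."""
    pre = int(fs * pre_delay_ms / 1000.0)       # pre-delay in samples
    d = int(fs * comb_delay_ms / 1000.0)        # comb (loop) delay in samples

    # Feedback gain chosen so the recirculating signal decays by 60 dB after rt60_s.
    g = 10.0 ** (-3.0 * (d / fs) / rt60_s)

    length = len(dry) + pre + int(fs * rt60_s)  # leave room for the tail to ring out
    out = np.zeros(length)
    out[:len(dry)] += dry                       # the direct signal

    wet = np.zeros(length)
    wet[pre:pre + len(dry)] = dry               # delayed copy entering the "room"
    for n in range(d, length):
        wet[n] += g * wet[n - d]                # each pass adds a quieter reflection

    return out + 0.3 * wet                      # mix some reverb under the dry signal

# Example: a single click (impulse) through the reverb.
fs = 44100
click = np.zeros(fs)
click[0] = 1.0
tail = simple_reverb(click, fs)
```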



o Plate reverb: The plate reverb was made by the German company EMT (Elektro-Mess-Technik) in 1957, and it was a great breakthrough at the time because it let anyone add reverb to their music without needing a dedicated chamber. A plate reverb uses an electromechanical transducer, somewhat like the driver in a loudspeaker, to set up vibrations in a large sheet of metal; a pickup then captures the vibrations travelling across the plate. Early plates had only one pickup (mono output); later models used two pickups for stereo. The reverb time can be adjusted with a damping pad made of framed acoustic tiles: the closer the pad is to the plate (though it never touches it), the shorter the reverb time. Newer versions added a remote control for this adjustment.

o Spring reverb: A device that creates reverb by inducing sound vibrations at one end of a wire coiled into a spring shape (to reduce the space it takes up), with another transducer at the far end to pick up the vibrations. As the waves reflect back and forth from one end of the spring to the other, the reverb effect is produced.

o Reverb plug-ins: These are commonly used inside DAW software and expose the same parameters as the hardware types above.


5. Basics of Audio Compression

What is the job of a compressor, and why do we use one?

-A compressor's job is to reduce the level of a signal once it exceeds a certain point (the threshold) and, together with make-up gain, to bring up the quieter material, which lets us control the dynamic range of the music. Used well, a compressor can make a track sound more controlled and natural without distortion; misused, it will drain the life out of the music.

-We use compressors to limit and control the dynamics, not to kill the dynamics of the track.

*Common compressor controls: Most compressors share a common set of controls and parameters, whether they are plug-ins or hardware units; a short code sketch combining them follows the list below.

-Threshold: The level you set for compression to begin; if the signal exceeds that point, it gets compressed. For example, with the threshold set at +2 dB, any signal that rises above that level will be compressed.

-Knee: Refers to the way the compressor transitions between the uncompressed and compressed signal. Most units offer one fixed knee, while some let you choose between a soft knee (the transition happens gradually and smoothly) and a hard knee.



-Attack time: The time the signal takes to become fully compressed after exceeding the threshold. Attack times from about 20 to 800 microseconds are considered fast, while 10 to 100 milliseconds is considered slow.

-Release time: The opposite of the attack: the time the signal takes to return to its uncompressed level. Release times are generally longer than attack times, ranging from roughly 40-60 ms up to 2-5 s depending on the unit. The release has to be chosen carefully to avoid the pumping effect, the audible rise and fall in level caused by the compression repeatedly engaging and disengaging.

-Compression ratio: The ratio describes how strongly the signal above the threshold is reduced, expressed in decibels of input per decibel of output. A ratio of 1:1 means no compression occurs; 2:1 means that for every 2 dB the input rises above the threshold, the output rises by only 1 dB. An infinite ratio means the output is held at the threshold, which is usually called limiting.



-Output (make-up) gain: A gain applied to the compressed signal to bring its level back up and compensate for the level lost to compression.
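Putting these controls together, a basic feed-forward compressor can be sketched as follows: the static curve applies the threshold and ratio (hard knee for simplicity), a one-pole smoother applies the attack and release times, and make-up gain is added at the end. This is a textbook digital outline under assumed parameter values, not a model of any specific unit.

```python
import numpy as np

def compress(x: np.ndarray, fs: int, threshold_db: float = -18.0, ratio: float = 4.0,
             attack_ms: float = 10.0, release_ms: float = 200.0,
             makeup_db: float = 6.0) -> np.ndarray:
    """Hard-knee feed-forward compressor for a mono float signal in [-1, 1]."""
    eps = 1e-12
    level_db = 20.0 * np.log10(np.abs(x) + eps)          # instantaneous level in dB

    # Static curve: above the threshold, the output rises 1 dB per `ratio` dB of input,
    # so the required gain reduction grows with the amount of overshoot.
    over = np.maximum(level_db - threshold_db, 0.0)
    target_reduction = over * (1.0 - 1.0 / ratio)        # desired reduction in dB

    # One-pole smoothing: react quickly (attack) when more reduction is needed,
    # and recover slowly (release) when the signal falls back below the threshold.
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    reduction = np.zeros(len(x))
    prev = 0.0
    for n in range(len(x)):
        coeff = a_att if target_reduction[n] > prev else a_rel
        prev = coeff * prev + (1.0 - coeff) * target_reduction[n]
        reduction[n] = prev

    gain_db = makeup_db - reduction                      # make-up gain minus reduction
    return x * 10.0 ** (gain_db / 20.0)
```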

-The big four common types of compressors:

1-Tube compressor: The oldest type of compressor. Tube units tend to have a slower response, with slower attack and release than the other types, and they lend the sound a vintage colouration. Because of that, they work best on violins, bass, soft vocals, or anything else with a slow attack.

2-Optical compressor: This type changes the dynamics of the audio signal using a light element and an optical cell.

3-FET compressors: FET stands for Field Effect Transistor. These compressors try to imitate the sound of tube compressors using transistor circuits. They are fast, clean, reliable, and certainly cheaper than tube compressors.

4-VCA compressor: VCA stands for Voltage Controlled Amplifier. These compressors use solid-state or integrated circuits, are considerably cheaper than tube or optical compressors, and colour the sound less than the other types. Choosing between them is much like the digital-versus-analog recording debate.


