Sunday 7 October 2012

Production Techniques

In post-production, sound designers take the raw footage from principal photography (the actual filming of individual scenes, without any special effects or musical background) and turn it into a finished motion picture, adding the sound effects and musical backgrounds that create an emotional effect, whether dramatic or comical.


In film and TV, the audio portion of a project is recorded separately from the video. Unlike your home video camera, the film or video cameras used in professional productions don't have built-in microphones. Instead, all dialogue is recorded with either a boom or a tiny, wireless lavalier mic that can be hidden in an actor's clothing. Most other audio, like ambient background noise and music, is added in post-production.

Post-production refers to all the editing, assembling and finalizing of a project once all the scenes have been shot. Audio post-production begins once the editors have assembled a locked cut of the project. A locked cut contains all of the visual elements (selected takes, special effects, transitions and graphics) that will appear in the film's final cut.

With the locked cut in hand, the audio post-production staff can start spotting the film for sound. Different members of the post production team look for different things:

  1. The dialogue editor examines every line of spoken dialogue, listening for badly recorded lines (too quiet, too loud or garbled, etc.) or moments when an actor's voice is out of sync with his lips.
  2. Sound effects designers look for places where they'll need to add ambient background noise (honking cars in a city, tweeting birds in the country), and "hard effects" like explosions, doors slamming and gun shots.
  3. Foley artists look for places to fill in details like footsteps across a wood floor, a faucet running, the sound of a plastic cup being placed on a marble counter top, etc.
  4. The music editor looks for inspiration to either commission original music or license existing songs.
  5. The composer, if he's already hired, looks for places where original music would add to the on-screen moment.
If the dialogue editor needs to replace or re-record unusable pieces of dialogue, he'll ask the actors to come in for an automated dialogue replacement (ADR) session. Here, the actors and editors synchronize the newly recorded dialogue with the lip movements on the screen and mix the audio smoothly into the existing recording.

Foley artists, named after the pioneering audio and effects man Jack Foley, use an eclectic bag of tricks to reproduce common sounds (a wooden chair for a creaky floor, cellophane for a crackling fire, a pile of audio tape for a field of grass, etc.).

Sound designers and effects editors spend much of their time collecting libraries of ambient natural sounds. They record the sound of Monday morning traffic and save it as a digital file for later use. They record washing machines running, children playing and crowds cheering. You can also buy ready-made libraries with all of these sounds. But some of the best sound designers like to create entirely original effects.

Film and TV editing is an entirely digital world. No one sits around splicing film stock anymore. Even if a project is shot on film, it'll be digitized for editing and laid back onto film for distribution. The same is true for audio post production. The nice thing about digital audio editing technology is that there's a product and system for every budget and skill level.

For the home studio, everything can be done on a single computer without fancy control panels or consoles. You can buy a basic version of Pro Tools, Adobe Audition or a similar digital audio workstation and do all your recording, editing, mixing and exporting using the software's built-in functionality. Pro Tools doubles as a MIDI (Musical Instrument Digital Interface) sequencer, so you can even record a soundtrack straight into the software using a MIDI controller or live instruments.

Professional audio post production studios add another level of control by using large digital editing consoles. All of the knobs and faders on the console control specific elements within a DAW like Pro Tools or Nuendo. For many editors, it's faster and easier to manipulate knobs and faders by hand than to constantly be reaching for the mouse and keyboard.

Here are some features of DAW software for audio post production work:

  1. It handles an unlimited number of separate tracks for the same project. This is especially advantageous when mixing a big project with different Foley recordings, sound effects, dialogue, background noise, music, etc. All of these sounds can be loaded into a single sequence.
  2. Tracks the audio to a built-in video feed. This is critical for timing the placement of effects and music.
  3. It allows for tons of different automated pre-sets. Each separate audio recording session requires different levels on each track to create a balanced recording. DAW software makes it so you only have to set those levels once. Once they're saved as pre-sets, you can just click a button and return to the desired settings (there's a tiny sketch of this idea after the list). This works with the large consoles as well: click a button and all of the knobs and faders will return to where they were two Wednesdays ago.
  4. Cleans up bad recordings. Maybe a plane flew overhead when your hero was saying his big line, or the air conditioning unit in the grocery store was buzzing too loud. DAW software includes special filters and tools for cleaning up clicks, pops, hums, buzzes and all other undesirable background noise.
  5. Endless plug-in options. Plug-ins are small software add-ons that allow for additional tools and functionality. They can be special effects plug-ins, virtual instruments for scoring a movie, or emulators that reproduce the sound of classic analog instruments and equipment.
  6. Graphic interfaces for placing sound recordings in the 5.1 surround sound spectrum. By moving a cursor back and to the right, you can make it sound like a train is approaching from behind the audience.
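As a tiny illustration of the preset idea from point 3 above, here is a rough sketch in plain Python; the track names and dB values are made up for the example:

```python
# Hypothetical automation presets: each preset stores the level for every track,
# so one call restores the whole balance of a past session.
presets = {}

def save_preset(name, track_levels):
    """Store the fader level (in dB) for each track under a preset name."""
    presets[name] = dict(track_levels)

def recall_preset(name):
    """Return the saved levels so every fader can snap back to where it was."""
    return presets[name]

save_preset("last_dialogue_mix", {"dialogue": -6.0, "ambience": -18.0, "music": -12.0})
levels = recall_preset("last_dialogue_mix")   # 'two Wednesdays ago' in a single click
```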
Sequencing is putting all of the parts of a piece together. For sound designers working in post-production, sequencing means placing all of the sound effects, Foley effects, background noise and music on the timeline and timing them with what is going on in the footage. By sequencing these sounds you can move them around within the footage and arrange them the way you want them to come across.
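As a rough sketch of what a DAW is doing when you sequence sounds against picture, here is a minimal example assuming NumPy; the clips are random stand-ins for real recordings and the timecodes are invented:

```python
import numpy as np

SAMPLE_RATE = 48000  # a common film/TV audio rate

def place_clip(mix, clip, start_seconds):
    """Mix a sound clip into the master timeline at a chosen timecode."""
    start = int(start_seconds * SAMPLE_RATE)
    end = min(start + len(clip), len(mix))
    mix[start:end] += clip[:end - start]
    return mix

# A 60-second empty mono timeline.
timeline = np.zeros(60 * SAMPLE_RATE)

# Stand-ins for real recordings (a door slam and footsteps) loaded from disk.
door_slam = np.random.randn(SAMPLE_RATE) * 0.1
footsteps = np.random.randn(2 * SAMPLE_RATE) * 0.05

timeline = place_clip(timeline, door_slam, 12.5)   # the door slams at 00:12.5
timeline = place_clip(timeline, footsteps, 14.0)   # footsteps follow at 00:14.0
```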

Synthesis and Sampling:

First, I will begin with the synthesizer. Synths come as either a hardware device or a software plug-in in your DAW; these days people usually go for the software, as it does not take up any space and is easily accessible. Synths are used in music to create and manipulate sounds, and they can also be used to create sound effects for TV and film, because you can create almost any type of sound you want with a synthesizer if you know how to work it.

A synthesizer consists of three main sections:
Oscillator - this is what creates the sound; here you choose which waveform to use.
Filter - this is where you can alter the frequencies of the sound and experiment with creating weird effects.
Amplifier - this controls the volume of the sound.

There is also the envelope (ADSR) section, which you can use to shape the sound further, as well as the effects section, where you can add a chorus or delay effect to create the sound you want.
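To make the oscillator, filter and amplifier chain concrete, here is a minimal sketch of a software synth voice, assuming NumPy and SciPy; the waveform, cutoff and envelope times are just example values:

```python
import numpy as np
from scipy.signal import butter, lfilter

SR = 44100  # sample rate in Hz

def oscillator(freq, seconds, wave="saw"):
    """Oscillator: creates the raw sound wave."""
    t = np.arange(int(SR * seconds)) / SR
    if wave == "saw":
        return 2.0 * (t * freq % 1.0) - 1.0
    return np.sin(2 * np.pi * freq * t)  # otherwise a plain sine

def low_pass(signal, cutoff_hz):
    """Filter: removes frequencies above the cutoff so you can shape the tone."""
    b, a = butter(2, cutoff_hz / (SR / 2), btype="low")
    return lfilter(b, a, signal)

def adsr(n, attack=0.01, decay=0.1, sustain=0.7, release=0.3):
    """Amplifier/envelope: shapes the volume over time (attack, decay, sustain, release)."""
    a, d, r = int(attack * SR), int(decay * SR), int(release * SR)
    s = max(n - a - d - r, 0)
    env = np.concatenate([
        np.linspace(0, 1, a),        # attack
        np.linspace(1, sustain, d),  # decay
        np.full(s, sustain),         # sustain
        np.linspace(sustain, 0, r),  # release
    ])
    return np.pad(env, (0, max(0, n - len(env))))[:n]

note = oscillator(110.0, 1.0, "saw")   # oscillator: choose a waveform and pitch
note = low_pass(note, 1200.0)          # filter: tame the harsh upper frequencies
note = note * adsr(len(note))          # amplifier: apply the volume envelope
```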


Sampling, the art of triggering a sound clip to a backing beat or tempo, can be implemented in many ways during the songwriting process. It can be used to create drum kits or digital instruments, to insert pieces of another recording into your song, or to mangle a clip altogether for the sake of creating an original noise.


Synthesizers are almost always used in Sci-Fi and horror films because they can produce otherworldly sounds. But for straightforward emotion, horns are used too. These are associated with pageantry, the military, and the hunt, so they are used to suggest heroism. Movies featuring death-defying heroes such as Star Wars and RoboCop use a lot of horns.




Sound sampling is a way of converting real sounds into a form that a computer can store and replay. Natural sound is in analogue form. Analogue means that something is continually changing, or to put it another way, that it has no definite value. Sound waves are a subject of their own, but you should know that sound has a frequency, and this frequency dictates the pitch of the sound we hear. Frequency is measured in Hertz (Hz): if the sound oscillates 50 times a second, then its frequency is 50 Hz, and so on. The higher the frequency, the higher the pitch of the sound.
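Here is a small sketch of that idea, assuming NumPy: the continuous tone is measured at a fixed sample rate and stored as numbers, and raising the frequency raises the pitch:

```python
import numpy as np

sample_rate = 44100   # how many measurements (samples) are taken per second
frequency = 50.0      # a 50 Hz tone, as in the example above
seconds = 1.0

# Sampling: measure the continuous analogue wave at regular intervals
# and store the results as a list of numbers the computer can replay.
t = np.arange(int(sample_rate * seconds)) / sample_rate
samples = np.sin(2 * np.pi * frequency * t)

# Doubling the frequency produces a higher-pitched sound.
higher_pitch = np.sin(2 * np.pi * (frequency * 2) * t)
```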

A software sampler is a piece of software which allows a computer to emulate the functionality of a sampler.
In the same way that a sampler has much in common with a synthesizer, software samplers are in many ways similar to software synthesizers, and there is a great deal of overlap between the two. But whereas a software synthesizer generates sounds algorithmically from mathematically described tones or short waveforms, a software sampler always reproduces recorded samples, often much longer than a second, as the first step of its algorithm.
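A toy illustration of the difference, assuming NumPy: instead of calculating a waveform from scratch, a sampler-style voice starts from a stored recording and repitches it (here by simple resampling; real samplers are far more sophisticated):

```python
import numpy as np

SR = 44100

def play_at_pitch(sample, semitones):
    """Reproduce a stored sample, repitched by simple resampling.

    Shifting up shortens the playback (like speeding up a tape);
    shifting down lengthens it.
    """
    ratio = 2 ** (semitones / 12.0)          # pitch ratio for the given interval
    old_idx = np.arange(len(sample))
    new_idx = np.arange(0, len(sample), ratio)
    return np.interp(new_idx, old_idx, sample)

# Stand-in for a stored recording: one second of a 220 Hz tone.
recording = np.sin(2 * np.pi * 220.0 * np.arange(SR) / SR)

note_up_a_fifth = play_at_pitch(recording, 7)    # playback always starts from the sample
```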


You can combine synths and samples to create ambient backdrops which can be used in films:
http://audio.tutsplus.com/tutorials/production/how-to-combine-synths-and-samples-to-create-ambient-backdrops/

Equalizing:


An electronic device or piece of software that alters sound waves is known as a signal processor. One very common signal processor is an audio equalizer. An audio equalizer raises and lowers the strength of a sound wave. The goal of equalization (EQ) is to help achieve a good mix of sound that allows all instruments and vocals to sound good together.

Equalization can target part of a sound based on frequency, raising or lowering the amplitude, or height, of the sound wave in that range. For example, if the bass drum is drowning out the cymbals in an audio mix, an audio equalizer can make the cymbals sound louder. In this case, the sound engineer will choose to raise the strength, or gain, of the high frequencies that make up the cymbals' sound. The engineer may also choose to decrease the gain of the very low frequencies in the bass drum track.

Removing sound is another equalization goal. A bass drum microphone may also pick up and record sounds from the cymbals. The problem of recording unwanted sounds is known as bleeding or leakage. To get a cleaner bass drum track, an engineer can use an audio equalizer to lower the high frequencies on the bass drum track. This effectively removes the cymbal leakage.
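Here is a rough sketch of that clean-up, assuming SciPy and a stand-in bass drum track; the cutoff value is only an illustration:

```python
import numpy as np
from scipy.signal import butter, filtfilt

SR = 48000

def remove_high_frequency_bleed(kick_track, cutoff_hz=250.0):
    """Low-pass the bass drum track so cymbal bleed above the cutoff is cut out."""
    b, a = butter(4, cutoff_hz / (SR / 2), btype="low")
    return filtfilt(b, a, kick_track)

kick_track = np.random.randn(SR)   # stand-in for the recorded bass drum track
cleaned_kick = remove_high_frequency_bleed(kick_track)
```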

An audio equalizer can be part of an audio mixer, a stand-alone piece of electronic hardware, or a software application. Audio equalizers inside a mixer usually have controls for three bands of frequencies including high, mid-range, and low. These equalizers make it easy to use EQ during the recording process.


Several varieties of stand-alone audio equalizers can be used that target sounds based on different characteristics. A sound is generally made up of a range of frequencies known as the bandwidth. The centre frequency is in the middle of the bandwidth. A peaking, or parametric, equalizer includes controls that can affect a sound wave’s gain, bandwidth, and centre frequency.
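Here is a minimal sketch of a single parametric (peaking) band with centre frequency, bandwidth (Q) and gain controls, using the widely published "Audio EQ Cookbook" coefficients and assuming NumPy/SciPy; the tracks and settings are invented for the example:

```python
import numpy as np
from scipy.signal import lfilter

SR = 48000

def peaking_eq(signal, centre_hz, gain_db, q=1.0):
    """One parametric EQ band: boost or cut gain_db around centre_hz.

    Coefficients follow the standard 'Audio EQ Cookbook' peaking filter;
    q controls the bandwidth (higher q = narrower band).
    """
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * centre_hz / SR
    alpha = np.sin(w0) / (2 * q)
    b = [1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A]
    return lfilter(b, a, signal)   # lfilter normalises by a[0] internally

track = np.random.randn(SR)                               # stand-in for a recorded track
brighter = peaking_eq(track, centre_hz=8000, gain_db=4)   # lift the highs
less_boomy = peaking_eq(brighter, centre_hz=200, gain_db=-3, q=1.5)  # tame boominess
```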

A graphic equalizer usually includes several controls, or sliders, to manipulate several frequency ranges. These equalizers also illustrate sound levels with a row of lights for each frequency range. These lights make it easier for an engineer to see which frequencies need to be adjusted to get a good sound mix.
Specialty software applications, often called plug-ins, that perform EQ are also widely available. Usually, the EQ software works with, or plugs in to, a larger sound recording application. The engineer can use an audio equalizer plug-in on a certain track, part of track, or all of the tracks in a recorded song.

All signal processing adds noise, or unwanted sound, to an audio track. For this reason, engineers may want to limit the amount of equalization needed during the mixing process. In place of EQ, an engineer can try to achieve a better mix of sound during the recording process by using different microphones, moving microphones, or recording various instruments during separate recording sessions.



In film sound, the sound designer matches sound to the look of the film. A sad movie has mood lighting, and the sound will be designed to match it in emotional tone. Its dialogue is EQ'd less crisply, with a lower-frequency boost.

In a happy comedy, lower frequencies are rolled off, and it's EQ'd and mixed to be "brighter."
Film sound is "sweetened" by manipulating room tone, premixing audio levels, and carefully considering dialog, music, and effects for their proper audio EQ.

Film sound is expected to go through post-production sweetening, which is what makes film audio sound so different from audio for video. Video sound can be sweetened too, but indie productions often use it pretty much as it was recorded.

EQ can also alter the frequencies of the human voice to make it sound like it is coming down a phone line, which is useful in a scene where a character is on the phone and you hear the voice of the person on the other end. (There is an example of this in a previous post, in a scene from The Matrix.)
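A quick sketch of that effect, assuming SciPy and a stand-in dialogue track; 300 - 3400 Hz is the traditional telephone bandwidth:

```python
import numpy as np
from scipy.signal import butter, lfilter

SR = 48000

def telephone_effect(voice, low_hz=300.0, high_hz=3400.0):
    """Band-pass the voice down to the narrow range a phone line carries."""
    b, a = butter(4, [low_hz / (SR / 2), high_hz / (SR / 2)], btype="band")
    return lfilter(b, a, voice)

voice_track = np.random.randn(2 * SR)          # stand-in for the recorded dialogue
phone_voice = telephone_effect(voice_track)    # thin, narrow 'down the line' sound
```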



Use EQ to replace missing bass or treble (using the high and low shelving controls), reduce excessive bass or treble, boost room ambience (high-frequency shelf), improve tone quality (using all the controls), and help a track stand out in the mix (using the parametric controls). An instrument's sound is made up of a fundamental frequency (the musical note) and harmonics, even when playing only a single note, and it is these harmonics that give the note its unique character. If you use EQ to boost the fundamental frequency, you simply make the instrument louder and don't bring it out in the mix. It should be noted that a particular frequency on the EQ (say 440 Hz) corresponds directly to a musical note on the scale (in the case of 440 Hz, the A above middle C, hence the expression A-440 tuning reference). Boosting the harmonic frequencies, on the other hand, boosts the instrument's tone qualities and can therefore give it its own space in the mix. Below are listed useful frequencies for several instruments:

  1. Voice: presence (5 kHz), sibilance (7.5 - 10 kHz), boominess (200 - 240 Hz), fullness (120 Hz)
  2. Electric Guitar: fullness (240 Hz), bite (2.5 kHz), air / sizzle (8 kHz)
  3. Bass Guitar: bottom (60 - 80 Hz), attack (700 - 1000 Hz), string noise (2.5 kHz)
  4. Snare Drum: fatness (240 Hz), crispness (5 kHz)
  5. Kick Drum: bottom (60 - 80 Hz), slap (4 kHz)
  6. Hi Hat & Cymbals: sizzle (7.5 - 10 kHz), clank (200 Hz)
  7. Toms: attack (5 kHz), fullness (120 - 240 Hz)
  8. Acoustic Guitar: harshness / bite (2 kHz), boominess (120 - 200 Hz), cut (7 - 10 kHz)




The thing to remember about EQ is not to get carried away: be specific and use it only when you need it, where you need it. If you get the mic placement right and use good pre-amps on a good-sounding instrument, you shouldn't need much.

The key to mixing audio is to make it sound exactly how you want it to sound and to make the recording even better. You do this by adding tools like a compressor, which reduces the dynamic range so that nothing is too loud or too quiet. For a sound effect, though, you might want something to start at a low volume and then increase; in that case you would not use a compressor but would automate a fader or filter instead. It all depends on what kind of sound you are going for. Another useful tool is the noise gate, which eliminates any background noise below a certain threshold.
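To make those two tools concrete, here is a very simplified sketch in NumPy; real compressors and gates use attack and release envelopes rather than working sample by sample like this, and the thresholds here are just example values:

```python
import numpy as np

def compress(signal, threshold=0.5, ratio=4.0):
    """Very simple compressor: anything louder than the threshold is reduced by the ratio."""
    out = signal.copy()
    loud = np.abs(out) > threshold
    out[loud] = np.sign(out[loud]) * (threshold + (np.abs(out[loud]) - threshold) / ratio)
    return out

def noise_gate(signal, threshold=0.02):
    """Very simple noise gate: anything quieter than the threshold is silenced."""
    out = signal.copy()
    out[np.abs(out) < threshold] = 0.0
    return out

raw_effect = np.random.randn(48000) * 0.3      # stand-in for a recorded sound effect
mixed = noise_gate(compress(raw_effect))       # tame the peaks, then remove low-level hiss
```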
