We have been meeting with a lot of candidates lately, both for our internship program and to bulk up our freelance roster. In addition to sitting down for a chat or looking over resumes, Kate and I are reviewing a lot of work. Whether editors are aware of it or not, the work in these sessions speaks volumes about their experience level. I've written previously about how to properly present your work with the mixing endgame in mind. However, I haven't yet touched on a topic that time and again seems to need further discussion: how to properly cut backgrounds. Not so much on a technical level (when it comes to how we like to see backgrounds cut, Jessey Drake has already created a great practical guide right here on this blog). It's more an issue of what constitutes a background, an ambience, or simply another sound effect. It seems like such a simple thing, but being able to distinguish these from one another, and thus lay out these sounds properly, is the dividing line between the experienced and the novice. Here are some tips on how to be sure your backgrounds are an asset rather than a liability.
Viewing entries in: Learn About Sound
Elastic Audio: the myth, the hidden tool, and the treasure.
As you might have read in our previous blog post How Do Ears Work?, our brains use our ears to derive sounds from detected frequencies. These frequencies are naturally occurring vibrations that enter our ears, where they are then processed into what we perceive as sounds. But what exactly are these frequencies? And how do they work?
A few weeks ago, we wrote a blog post about how the human ear works, and that inspired me to dive deeper into the section about the brain; specifically, psychoacoustics. The study of psychoacoustics, as defined by Merriam-Webster, is “a branch of science dealing with the perception of sound, and the sensations produced by sounds.” Essentially, psychoacoustics is how your brain perceives sound, and if used correctly, it can be an incredibly powerful tool in a sound designer’s arsenal.
Sound is an essential part of all of our lives. It allows us to communicate with others via speech, it helps us to sense imminent danger, and it affords us the enjoyment and entertainment of music. But how does sound make its way from vibrations in the air to an auditory perception we can easily identify and interpret? Our bodies are miracles of science, and the answer to that question is fascinating.
At Boom Box Post, we are always doing our best to meet new content creators who are just beginning their professional journey. Not only are their projects incredibly fun and inventive, but we often get to walk them through the process of post-production sound for the first time. For even the most seasoned artists, writers, or producers, this can be daunting territory the first time around.
The following is a primer designed to introduce new content creators to post-production sound. It's an incredibly fun process and the final step in creative storytelling before your content reaches viewers.
The study of the interaction between how our ears and brain respond to sound is called psychoacoustics or sound perception. As audience members, we can perceive a sound as being a pleasing experience or not and anywhere in between. But, this perception isn't formed merely by using our ears. The connections between our ears, brain, and nervous system let us feel the effects of sound with our entire body. This concept of physically hearing and psychologically perceiving sound helps to connect us to the television show, movie, or video game we might be enjoying.
We were so excited to give a talk at this year's Creative Talent Network Animation Expo in Burbank. The talk started with a brief history of sound for animation (a lot of which you can find expertly boiled down here) followed by an overview of the post sound process from beginning to end. We finished up with some video demos of the different layers of sound in our work as well as some of the fun instruments and props we have recorded over the years.
We hoped the panel would prove interesting to content creators looking for information on how to approach the sound process for their own work. To our pleasant surprise (this was our first time doing this, after all), the turnout was incredible! The room was filled to capacity, and we were bombarded with fantastic questions from a very energetic crowd.
In the 1920s and 1930s, recording equipment was extremely large and heavy, rendering it impossible to take outside of the studio. Unable to record sound effects in the real world, the studios were forced to invent new approaches to creating sound for their animated content. Thus, two different approaches to sound effects were quickly developed.
With the recently released Star Wars: The Force Awakens trailer smashing existing viewing records, and crashing sites like Fandango due to a rush for pre-sale tickets, it is no secret that the hype is strong with this one. On December 18th of this year, hordes of people will be heading to the theaters to witness the newest addition to the Star Wars universe.
Diehard fans know there is a lot to look forward to, but there is a new addition to the Star Wars universe that is easily overlooked: Dolby Atmos. Most theaters still show films in 5.1, but with Atmos becoming increasingly popular as part of a premium film experience, it is worth noting how far technology has come since the first Star Wars film in 1977. Therefore, I would like to focus this week’s blog post on the evolution of mixing formats and how they impact the audience experience.