Mastering process question
Geez, it’s been quiet around here lately. I’ll throw out a question that plagued me last night. I’m mastering a song for my son’s band and ran across an issue I had to figure out.
We have a 48 kHz, 24-bit WAV file at -8 LUFS with a -1 dB true-peak value. When I opened the mastered file in RX11, I could see a bunch of energy between 10 and 20 Hz that I couldn’t hear in headphones. That’s wasted energy even if you’ve got subs. I figured if I attenuated that energy I might squeeze out some more loudness. The intended result of applying multiple passes of EQ fading out below 20 Hz was successful in lowering the energy in the basement. It looked better.
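(For anyone who wants to poke at this themselves, here’s a rough Python sketch of the kind of check I was doing. The filename, the 10–20 Hz band, and the 4th-order Butterworth slope are all placeholder choices, not what my actual chain used.)

```python
# Rough sketch: how much energy lives in the 10-20 Hz basement, before
# and after a high-pass. "master.wav" and the filter order are placeholders.
import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfilt

data, rate = sf.read("master.wav")   # floats, shape (n,) or (n, channels)

def band_share_db(x, rate, lo=10.0, hi=20.0):
    """Fraction of total signal power in the lo-hi Hz band, in dB, via an FFT."""
    mono = x.mean(axis=1) if x.ndim == 2 else x
    spec = np.abs(np.fft.rfft(mono)) ** 2
    freqs = np.fft.rfftfreq(len(mono), 1.0 / rate)
    band = (freqs >= lo) & (freqs < hi)
    return 10.0 * np.log10(spec[band].sum() / spec.sum() + 1e-20)

# Minimum-phase high-pass at 20 Hz, like a typical plugin EQ band
sos = butter(4, 20.0, btype="highpass", fs=rate, output="sos")
filtered = sosfilt(sos, data, axis=0)

print("10-20 Hz share before:", band_share_db(data, rate), "dB")
print("10-20 Hz share after: ", band_share_db(filtered, rate), "dB")
print("sample peak before/after:",
      np.max(np.abs(data)), np.max(np.abs(filtered)))
```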
The unintended result was surprising; they usually are. Like attempting to force lower oil prices by invading and nationalizing an oil-rich country’s resources, only to cause a global realignment of power, backward-thinking leaders, chaos, and, before long, World War 3.
Fortunately, the unintended consequence I experienced was less fatal. At some point in the multiple passes of EQing out that low, low end, my LUFS reading changed. How can it measure louder if I’m consciously taking level away? The Google machine’s answer was that my low-end EQ created a phase-response change, leading to an asymmetric waveform that reads higher on the meters.
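That explanation is easy to demonstrate in isolation, away from my actual mix. A minimal sketch, using a synthetic 25 Hz square wave and a generic 4th-order Butterworth high-pass (both stand-ins, nothing from my chain): the filter’s phase rotation near the cutoff tilts the waveform and grows the sample peak, even though almost no magnitude is actually being removed.

```python
# Sketch: a minimum-phase high-pass at 20 Hz can *raise* the waveform
# peak of material just above the cutoff, largely through phase rotation,
# while the signal power moves only slightly.
import numpy as np
from scipy.signal import butter, sosfilt, square

rate = 48000
t = np.arange(2 * rate) / rate
x = 0.5 * square(2 * np.pi * 25 * t)        # 25 Hz square-wave test signal

sos = butter(4, 20.0, btype="highpass", fs=rate, output="sos")
y = sosfilt(sos, x)

settle = rate // 2                          # skip the filter's start-up transient
peak = lambda s: np.max(np.abs(s[settle:]))
rms = lambda s: np.sqrt(np.mean(s[settle:] ** 2))
print(f"peak: {peak(x):.3f} -> {peak(y):.3f}")   # peak grows noticeably
print(f"rms:  {rms(x):.3f} -> {rms(y):.3f}")     # power barely moves
```

A square wave already has the lowest possible crest factor for its harmonic content, so any relative phase rotation between the fundamental and its harmonics can only push the peak up.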
If I were to refine my question down to its core, it becomes this: would using a linear-phase brick-wall EQ below 20 Hz let me accomplish my goal of minimizing that unusable energy without inducing a phase change or an asymmetric waveform, and therefore avoid being penalized by the LUFS police?
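For reference, this is roughly what I mean, sketched as a long linear-phase FIR in Python. The tap count, window, and filenames are placeholder choices, not a recommendation; a steep cutoff this low needs a very long filter, and linear phase trades the asymmetry problem for pre-ringing and latency.

```python
# Sketch of a linear-phase (constant group delay, no phase distortion)
# brick-wall-ish high-pass at 20 Hz. "master.wav" is a placeholder name.
import numpy as np
import soundfile as sf
from scipy.signal import firwin, fftconvolve

data, rate = sf.read("master.wav")

# A steep FIR needs many taps at such a low cutoff; one second of taps
# gives a transition band of only a few Hz. Odd length keeps it Type I,
# which firwin requires for a high-pass.
numtaps = rate + 1
taps = firwin(numtaps, 20.0, pass_zero=False, fs=rate, window="blackman")

# mode="same" trims the (numtaps - 1) / 2 samples of constant group delay
# so the output stays time-aligned with the input; the phase is linear,
# so no asymmetry is introduced, only the rejected band changes.
if data.ndim == 1:
    out = fftconvolve(data, taps, mode="same")
else:
    out = np.column_stack([fftconvolve(data[:, c], taps, mode="same")
                           for c in range(data.shape[1])])

sf.write("master_hp20_linphase.wav", out, rate, subtype="PCM_24")
```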
New year, new questions.
-PT
