A few nights ago, while prepping for a presentation on loudness metering I was giving at a game development studio the next day, a question struck me that had never occurred to me before…
Would bit depth or sample rate reduction affect the loudness measurement of sounds metered using ITU-R BS.1770?
Both are practices common to game audio. If there actually is a potential difference, it would be important for people to be aware of it. Never blindly trust your tools. We use metering systems because it is unwise to rely only on our ears. Likewise, trusting a metering system in a situation it may not have been designed for is equally foolish. With everyone constantly pushing for higher quality and higher resolution audio, I doubt there was an abundance of concern during the development of ITU-R BS.1770 for possible applications at lower resolutions.
I did some quick tests with the forklift recordings from the digital mics review. I took one file output at 24-bit/48 kHz and converted it to a number of lower resolution configurations. I used Audacity (a great tool for conversion to custom configurations) to generate .wav files with the following bit depths and sample rates: 24/32, 24/16, 16/48, 16/32, 16/16, 8/48, 8/32, and 8/16. All of the files were brought back into a 24/48 session of Pro Tools and fed to a Dolby LM100 for metering. As far as the meter was concerned, changes in loudness were minimal. Each clip, regardless of bit depth and sample rate, measured at -23 LUFS (per clip “integrated,” with no anchor). I pulled up the more precise log and found that measurements remained within +/- 0.3 LU of the 24/48 measurement. That’s an arguably inconsequential difference. However, I don’t feel the measurement fully captured the perceptual differences between the files.
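For anyone curious what the meter is actually doing under the hood, the core of a BS.1770 measurement can be sketched in a few lines. This is a deliberately simplified mono version with no gating stage (a real “integrated” measurement gates out quiet passages), and note that the K-weighting coefficients published in the standard are only specified for 48 kHz, which is itself a hint at why other sample rates deserve scrutiny:

```python
import math

# K-weighting filter coefficients from ITU-R BS.1770 (valid only at 48 kHz).
SHELF_B = (1.53512485958697, -2.69169618940638, 1.19839281085285)
SHELF_A = (-1.69065929318241, 0.73248077421585)  # a1, a2 (a0 = 1)
HIPASS_B = (1.0, -2.0, 1.0)
HIPASS_A = (-1.99004745483398, 0.99007225036621)

def biquad(x, b, a):
    """Direct form I biquad filter; a = (a1, a2), a0 assumed to be 1."""
    y = []
    x1 = x2 = y1 = y2 = 0.0
    for s in x:
        out = b[0]*s + b[1]*x1 + b[2]*x2 - a[0]*y1 - a[1]*y2
        x2, x1 = x1, s
        y2, y1 = y1, out
        y.append(out)
    return y

def loudness_ungated(x):
    """Ungated loudness in LUFS of a mono 48 kHz signal (no gating stage)."""
    # Stage 1: high shelf (head-related boost); stage 2: RLB high-pass.
    y = biquad(biquad(x, SHELF_B, SHELF_A), HIPASS_B, HIPASS_A)
    mean_square = sum(s*s for s in y) / len(y)
    return -0.691 + 10.0 * math.log10(mean_square)

# Calibration check: a full-scale 997 Hz sine should read roughly -3 LUFS.
fs = 48000
sine = [math.sin(2 * math.pi * 997 * n / fs) for n in range(fs)]
print(round(loudness_ungated(sine), 2))
```

The -0.691 constant offsets the small gain the two filters introduce at 997 Hz, the standard’s calibration frequency.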
Remember that, while it is good, the equalization curve of BS.1770 is not perfect. To be perfect, it would have to be far more complex than the simple two-stage filter…not to mention it would also have to be customizable for each individual listener. It’s designed to apply well over a broad statistical sample base.
Overall, I found bit depth reduction to have little effect on the perceptual loudness of the sound (aside from things getting a bit “noisier” at lower depths):
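That matches what the math predicts: requantizing only adds low-level noise, which barely moves the mean-square level a loudness meter is built on. A quick sketch, using a hypothetical 440 Hz test tone and a crude rounding quantizer (no dither):

```python
import math

def quantize(x, bits):
    """Round samples in [-1, 1] onto a uniform grid of 2**bits levels."""
    q = 2 ** (bits - 1)
    return [max(-1.0, min(1.0, round(s * q) / q)) for s in x]

def rms_db(x):
    """Mean-square level in dB, the quantity loudness metering starts from."""
    return 10.0 * math.log10(sum(s*s for s in x) / len(x))

fs = 48000
tone = [0.5 * math.sin(2 * math.pi * 440 * n / fs) for n in range(fs)]

for bits in (16, 8):
    delta = rms_db(quantize(tone, bits)) - rms_db(tone)
    print(f"{bits}-bit level change: {delta:+.4f} dB")
```

Even at 8 bits the measured level shifts by only a tiny fraction of a dB; the quantization noise is audible, but energetically it’s negligible next to the signal.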
Things change a bit with spectral content. If you listened to the normalization examples that were posted as part of the loudness webinar we hosted earlier this month, you may have noticed that the BS.1770 normalized files were not perfectly matched. They were perceptually closer than those normalized using other metering methods, but not perfect. The same applies here with sample rate reduction. The files came out close, but those with more content across the spectrum do seem a bit louder. It becomes most noticeable once you start chopping off everything above 10 kHz. Have a listen:
The 48 and 32 kHz examples sound nearly equivalent. It’s a different story once we drop down to a 16 kHz sample rate. Overall volume is similar, but there’s a noticeable difference without the higher spectrum content. This could have an effect on your overall mix. So to those of you in game audio, keep this in mind…particularly if you plan to loudness normalize assets prior to implementation. It would probably only be a minor surprise, but it’s an annoyance that can be eliminated with some quick testing.
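To put rough numbers on how much measured energy band-limiting can remove, here’s a sketch that approximates sample rate reduction as a brickwall low-pass at the new Nyquist frequency. It uses broadband white noise as a worst case; a real-world source like the forklift concentrates its energy lower in the spectrum, which is why the meter barely moved in my tests even though the ear notices the missing top end:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 48000
noise = rng.standard_normal(fs)  # 1 second of white noise (flat spectrum)

def brickwall(x, fs, cutoff):
    """Zero all spectral content above `cutoff` Hz (crude anti-alias model)."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    X[freqs > cutoff] = 0.0
    return np.fft.irfft(X, len(x))

def db(x):
    """Mean-square level in dB."""
    return 10.0 * np.log10(np.mean(x ** 2))

ref = db(noise)
for target_fs in (32000, 16000):
    lp = brickwall(noise, fs, target_fs / 2)  # new Nyquist = fs/2
    print(f"{target_fs} Hz: {db(lp) - ref:+.2f} dB")
```

For flat-spectrum noise, dropping to a 16 kHz sample rate discards two thirds of the energy (about -4.8 dB), while 32 kHz costs under 2 dB. The less high-frequency content a sound has to begin with, the smaller both the measured and the perceived change.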
It’s always better to have an informed workflow, right?