Research Data Suggests Higher Music Fidelity Is Little More Than Snake Oil

The audio industry has a tendency to make utterly ridiculous claims surrounding its products, as evidenced by the existence of $550 RCA cables and an $80 iPod cable guaranteed to leave your rig "exploding with sound." Despite this, the idea of using higher-quality sampling to produce better audio files seems like a no-brainer. If 16-bit/44.1kHz is good (that's standard CD quality), surely 24-bit/192kHz is better, right?


Morrow Audio guarantees to make Monster Cable look cheap and sensible by comparison

As it turns out, no. The full explanation of why is rather lengthy, but surprisingly easy to follow for such a complex topic. The simplified explanation for why 24-bit/192kHz is a bad idea is twofold. First, we literally can't hear it. The human ear's range stretches from 20Hz to 20kHz, while a 192kHz sampling rate captures frequencies up to 96kHz (half the sampling rate, per the Nyquist theorem) -- nearly five times higher than the maximum frequency anyone can hear. To put that in perspective, bats -- famous for inspiring men to run around in tights and for their echolocation -- top out around 110kHz. Porpoises manage 150kHz, though it's possible they just get bored playing stupid human games and wander off in search of fish.
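If you want to see the numbers for yourself, here's a quick back-of-the-envelope sketch (in Python, purely for illustration) of the Nyquist relationship between a sampling rate and the highest frequency a recording can actually capture:

```python
# Minimal sketch: the Nyquist theorem says a sample rate of f_s can
# represent frequencies only up to f_s / 2. This compares that limit
# against the roughly 20kHz ceiling of human hearing.
HEARING_LIMIT_HZ = 20_000

for sample_rate in (44_100, 96_000, 192_000):
    nyquist = sample_rate / 2
    headroom = nyquist / HEARING_LIMIT_HZ
    print(f"{sample_rate:>7} Hz sampling -> captures up to {nyquist:>8.0f} Hz "
          f"({headroom:.1f}x the limit of human hearing)")
```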

The other problem with 24-bit/192kHz is that ultrasonic signals can muck with the audible stream by introducing intermodulation distortion. The CD standard of 16-bit/44.1kHz accurately covers the range of human hearing and, as the author notes, "will always be enough."
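For the curious, here's a rough, illustrative sketch of how that can happen. The 30kHz and 33kHz tones and the 5% nonlinearity below are arbitrary numbers chosen for demonstration, not measurements of any real gear, but the mechanism is the standard one: inaudible tones mix in a slightly nonlinear playback chain and the mixing products land squarely in the audible band.

```python
import numpy as np

# Two ultrasonic tones (30 kHz and 33 kHz, both inaudible) pass through a
# mildly nonlinear playback chain. The squared term mixes them and produces
# a 3 kHz difference tone well inside the audible range.
fs = 192_000                      # sample rate, Hz
t = np.arange(fs) / fs            # one second of audio
signal = np.sin(2 * np.pi * 30_000 * t) + np.sin(2 * np.pi * 33_000 * t)

# Model a slightly nonlinear amplifier/speaker: output = x + 0.05 * x^2
distorted = signal + 0.05 * signal ** 2

# Inspect the spectrum below 20 kHz -- the part we can actually hear.
spectrum = np.abs(np.fft.rfft(distorted)) / len(distorted)
freqs = np.fft.rfftfreq(len(distorted), d=1 / fs)
audible = (freqs > 100) & (freqs < 20_000)
peak = freqs[audible][np.argmax(spectrum[audible])]
print(f"Strongest audible artifact: {peak:.0f} Hz")   # ~3000 Hz
```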

This is the part where audiophiles tend to get really cranky.

Aural Accuracy

One major double-blind study asked listeners to distinguish between an ultra-high-quality DVD-A/SACD recording and the same content played back at 16-bit/44.1kHz (standard CD quality). The researchers used multiple high-end equipment setups in noise-isolated listening environments and recruited both professional listeners and amateurs to participate. In 554 trials, listeners chose correctly 49.8% of the time -- precisely what you'd expect if they'd been guessing. The professional listeners fared only slightly better, at 52.7%.
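To get a feel for just how unimpressive those numbers are, here's a quick statistical sketch. The raw count of correct answers is back-calculated from the reported percentage, so treat it as approximate:

```python
from scipy.stats import binomtest

# Is a ~49.8% hit rate over 554 trials any better than flipping a coin?
trials = 554
correct = round(0.498 * trials)          # roughly 276 correct answers
result = binomtest(correct, n=trials, p=0.5, alternative='two-sided')
print(f"{correct}/{trials} correct, p-value = {result.pvalue:.2f}")
# A p-value this close to 1.0 means the results give no evidence that
# listeners were doing anything other than guessing.
```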


So why do so many audiophiles swear by high-end equipment and/or claim that there's a difference between 24-bit/192kHz and 16-bit/44.1kHz? Part of the answer boils down to confirmation bias. Humans aren't objective, as much as we like to think we are; the brain subconsciously favors the outcomes it wants and weighs evidence more heavily when it confirms what we already believe. This tendency explains part of why die-hard conspiracy theorists are so resistant to reality -- for truthers, birthers, and climate change denialists, any scrap of information is "evidence," no matter how much they have to torture it to fit the model. Go out and drop $5,000 on an audio system and your brain wants it to be better -- it's the only way to validate the purchase.

Objective factors play a role as well: the use of different master copies, subtle differences in body position and posture that change exactly what each ear hears, the version and type of audio codec used to create the file, and tiny variations in playback volume. Research has shown that humans overwhelmingly rate louder music as sounding "better," which is why virtually all modern CDs are mastered with heavy dynamic-range compression to push the average loudness as high as possible. The human ear can pick up on level differences as small as 0.2dB -- far smaller than the notches on a volume knob. Other factors -- like the distortion introduced by ultrasonic frequencies sampled at 192kHz -- can be audible, even if the ultrasonics themselves never are.
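Since 0.2dB is hard to picture, here's the quick arithmetic relating a decibel difference to an amplitude ratio (the extra reference values are just for comparison):

```python
# Decibels relate two amplitudes A1, A2 as dB = 20 * log10(A2 / A1),
# so the amplitude ratio for a given dB difference is 10 ** (dB / 20).
for db in (0.2, 1.0, 3.0, 6.0):
    ratio = 10 ** (db / 20)
    print(f"{db:>4.1f} dB difference -> amplitude ratio {ratio:.3f} "
          f"(amplitude {(ratio - 1) * 100:.1f}% higher)")
```

A 0.2dB difference works out to an amplitude change of only about 2%, which is exactly the kind of gap that can quietly tilt a "which sounds better?" comparison.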

The upshot of all this is that 24-bit/192kHz audio sampling is useful when it comes to mixing and mastering music, but worthless for everyday listening. The most cost-effective way to improve your audio experience is to invest in a good pair of headphones, even if you stick with garden-variety 128kbps MP3s. For more details on the science behind this discussion, we highly recommend reading the original article.