Tone Deaf Audiophiles Defend Crazy-Expensive Ethernet Cables, And They're Still Wrong
The audio industry is filled to the brim with magical devices capable of reshaping the reality of those who dare tread within its realm. Outsiders are often utterly bewildered by the expressions its devotees use to converse on the subject. What is the 'color' of sound - 'clarity', 'warmth', 'coarseness'? These are also the same people who will take out a mortgage to buy a sound system, and even spend $10,000 on a single cable - yes, we're back to this chestnut again. (Update, 3/30/15: Pricing has been falling dramatically for these cables since the original article was posted. They are still listed, however, at thousands of dollars for 5 - 10 ft lengths.) (Update, 3/30/15 - 10:33PM: We've been informed that pricing has in fact not changed on these cables, and you can find longer lengths of up to 12m for $7,995, though as noted, shorter lengths can be had for less.)
It's come to our attention that some people still don't understand why such a cable is incapable of making a distinguishable difference (or any difference, for that matter) over a standard $2 Ethernet cable. So, we're here to inject a little dose of reality into the situation. Join us as we not only debunk the claim, but also explain why it's wrong.
First of all, we need to bring to light a certain article that attempts to justify the use of an audiophile-grade Ethernet cable using anecdotal evidence and measurements taken without regard to other factors. The article is a classic case of misunderstanding. It starts off by complaining about switch-mode power supplies causing problems with RCA cables - well, yes, that can happen. Why? RCA cables are analog and provide a direct path into the audio pipeline (DACs and amps). They carry AC signals in the audible range, not digital data.
$10,000 Ethernet 'audio' cable (Now Only $2,194.75 for 10 Feet)
But hold on a second; how can a switch-mode power supply introduce audible noise? Doesn't it switch at a high frequency, well above the audible range? Yes, but modern electronics introduced us to the diode and the transistor, which make very capable detectors for certain radio frequencies. Ever listen to a crystal radio as a kid? All that's in the circuit is an earpiece and a diode. A radio signal passing through the diode is rectified into an audible signal; pass that through an amplifier and suddenly your speakers jump into life when a mobile phone is about to ring.
Radio emissions can come from anywhere, at all kinds of frequencies, so we try to isolate them as much as possible, in as many places as possible, especially when it comes to audio (damn those pesky diodes). That raises the question: how can transferring digital data over an Ethernet cable, something completely separate from the audio pipeline, introduce audible noise? To answer that, we need to do some explaining.
What Are Ethernet Cables, Exactly?
Ethernet cables are electrical conduits for differential signals. In each twisted pair, the two wires are referenced to each other (rather than to a fixed ground), so when an external source induces a voltage spike across the line, it affects both conductors equally. Both wires see the same increased voltage, but because the receiver only looks at the difference between them, the induced spike cancels out.
No spike: line 1 = 0 V above ground, line 2 = 2 V above ground; difference is 2 V
With spike: line 1 = 5 V above ground, line 2 = 7 V above ground; difference is still 2 V
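The arithmetic above can be sketched in a few lines of Python - a toy model of an ideal differential receiver, not how a real Ethernet PHY is implemented:

```python
# Sketch of why common-mode noise cancels in differential signaling.
# An ideal differential receiver only sees the *difference* between
# the two wires, so anything added equally to both drops out.

def differential_receive(line1_v: float, line2_v: float) -> float:
    """Return the voltage an ideal differential receiver sees."""
    return line2_v - line1_v

# No spike: lines sit at 0 V and 2 V above ground
print(differential_receive(0.0, 2.0))  # 2.0

# A 5 V spike induced equally on both conductors
spike = 5.0
print(differential_receive(0.0 + spike, 2.0 + spike))  # still 2.0
```

The spike shifts both conductors by the same amount, so the subtraction nulls it exactly, matching the worked numbers above.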
The wires in each pair are twisted together to reduce crosstalk (a signal induced on an adjacent line). When there are multiple pairs within the same cable, each pair is twisted at a different rate, further reducing crosstalk between pairs.
Both the twisted pair and differential signaling protect the Ethernet cable from mild EMI (Electromagnetic Interference) emissions, such as that from AC power cables, small transformers, motors, etc. But what about radio waves?
Ethernet cables carry binary signals at very high speed. That in effect creates a square-wave, AC-like signal across the pair, with frequencies in the MHz range - squarely within the radio frequency spectrum - so it's possible for external radio sources to interfere. Filtering out these unwanted signals (noise) is a bit tricky, since no single method removes noise across a broad frequency range. This is why we use multiple filtering techniques, from foil shielding to ferrite rings, chokes, snubbers, and so forth. At some point, something will get through and disrupt the signal, and under those circumstances data corruption is possible. That's why we have error correction.
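The first line of defense is detection: every Ethernet frame ends with a CRC-32 frame check sequence, so a corrupted frame is recognized and dropped rather than passed up the stack. A minimal sketch using Python's `zlib.crc32` (which uses the same CRC-32 polynomial as Ethernet), with a hypothetical payload standing in for a frame:

```python
import zlib

# Ethernet frames carry a CRC-32 frame check sequence (FCS).
# If interference flips even one bit in transit, the CRC no longer
# matches and the receiver discards the frame entirely.

payload = bytearray(b"pristine audio samples")  # stand-in for frame data
fcs = zlib.crc32(payload)                       # checksum computed by sender

corrupted = bytearray(payload)
corrupted[3] ^= 0x01                            # a single bit flipped in transit

print(zlib.crc32(payload) == fcs)    # True  - frame accepted
print(zlib.crc32(corrupted) == fcs)  # False - frame discarded
```

The key point: a damaged frame never reaches the application as subtly-wrong audio data - it is thrown away, and higher layers decide what to do next.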
All of this means that digital data sent over an Ethernet cable is susceptible to external interference, but we have ways of countering it. So what happens to the signal when the countermeasures kick in? With TCP, the fault is detected and a retransmit is requested. If things get really bad, retransmits flood the link, effectively cutting bandwidth down to a crawl - which, as we know from the Internet, results in buffering. The worst that will happen to audio in this case is that it'll stop playing while it buffers. But for that to happen, you would need massive amounts of interference to bring a 1Gb home Ethernet connection below the ~320 Kbps required to stream compressed audio - sorry, 4.4 Mbps (can't forget about 24-bit/96kHz uncompressed audio!)
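The bitrate claim is easy to verify with back-of-the-envelope math. Note the 4.4 Mbps figure corresponds to binary megabit prefixes; in decimal megabits it comes out to about 4.6:

```python
# Bitrate of uncompressed stereo PCM audio at 24-bit/96kHz.
bit_depth = 24          # bits per sample
sample_rate = 96_000    # samples per second
channels = 2            # stereo

bitrate_bps = bit_depth * sample_rate * channels
print(bitrate_bps)            # 4608000 bits per second
print(bitrate_bps / 1e6)      # 4.608 decimal Mbps
print(bitrate_bps / 2**20)    # ~4.39 binary Mbps (the article's ~4.4)

# Fraction of a 1 Gbps link consumed:
print(100 * bitrate_bps / 1e9)  # ~0.46%
```

Either way you count, hi-res stereo audio uses well under 1% of a gigabit link's capacity.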
The most ridiculous of high-end audio (24-bit/192kHz) still taps just 1% of 1Gbps bandwidth
However, this all assumes TCP. What if the network is streaming rather than transferring - using UDP? What happens to the signal then? Well, things go missing. Ever had a corrupt MP3 that pops and squeals at random? That's the result of missing bits, which can happen over UDP, as there is no guaranteed delivery. With movie streams, the codecs are pretty robust, and it takes a large chunk of lost data to cause visible corruption, such as macro-blocking and multicolored artifacts. Streaming over UDP is not common on a home network, but it can cause very audible artifacts.
Under those circumstances, though, the Ethernet cable is the least of your problems. Why would anyone stream time-critical data like audio (especially audiophile audio) over UDP? Switch to a decent software stack that uses TCP and the problem is solved.
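The UDP failure mode can be illustrated with a toy simulation - hypothetical numbered audio chunks over an imaginary lossy link, not real sockets:

```python
import random

# Toy model of UDP's fire-and-forget delivery: numbered audio chunks
# crossing a link with ~10% random loss. The sender never learns about
# the drops, so missing chunks simply never arrive.
random.seed(0)  # deterministic for illustration

sent = list(range(20))                                   # 20 audio chunks
received = [n for n in sent if random.random() > 0.10]   # lossy link

missing = sorted(set(sent) - set(received))
print(f"received {len(received)}/{len(sent)} chunks; lost: {missing}")
```

With TCP, the receiver would notice the gaps in sequence numbers and trigger retransmits; with UDP, the lost chunks become the pops and dropouts described above.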
Now, the article linked at the start does concede that digital data transferred over an Ethernet cable won't be affected, as just explained, due to error correction. However, it then says that EMI/RFI can "hitch a ride on unshielded Ethernet cables and inject sonically harmful noise directly into the audio path".
As far as I am aware, Network Interface Cards (NICs) do not process sound. In fact, they have nothing to do with sound. Once data reaches the NIC, it is passed via an interconnect to the CPU, to memory, back to the CPU, processed, then passed over a PCI, PCIe, or USB interconnect until it finally reaches your soundcard (still in digital form!), where a DAC converts it into an analog signal, which is then amplified to be heard through headphones or speakers. There is a hell of a lot of 'stuff' between an Ethernet cable and a speaker. So what on earth could cause audible interference from a network?
Some audiophile gear holds real value, and some preys on your naivety.
If the Ethernet cables are transferring analog signals instead, then sure, interference can be a problem. But who would use Ethernet cables as analog conductors for audio? Wait, never mind... audiophiles. But if we're still talking digital here, then surely something must be at fault, right?
This is so implausible, it's barely worth thinking about. In a traditional audio file server, NAS, or PC, there are so many direct and indirect layers of filtering going on that it's almost impossible. Ethernet uses transformer coupling to eliminate problems with differing ground levels, so ground-based interference is out. There are multiple digital signal processors involved in the transition of digital audio to speakers, each with some kind of common-mode noise rejection on both signal and power. Even the final stage, the op-amp, has common-mode rejection on its power rails, so interference over power is out. And in order to pass EMI/RFI regulations, a device cannot emit interference above certain levels - if it does, it's not allowed to be sold. As such, there are likely hundreds of ferrite beads, cores, and other suppression elements in the signal path to eliminate any stray EMI.
Even 'audiophile'-grade soundcards have so much isolation - thick aluminium plates, Faraday cages, filtering, audio-grade capacitors, and so on - that surely, somewhere, something will kill that last little bit of EMI.
This basically means there are two possibilities for Ethernet-induced noise affecting audio: something is broken/faulty, or something has been badly designed (be it the product or the installation).
Of course, there is a third possibility. Can you say, "Placebo?"
Story contribution by guest editor, Jamie Fletcher.