What would you buy to get a better sound? And, more importantly, how would you know it had any effect?
Thirty years ago, audio engineer David Clark stood up in front of his peers and described a method he thought would finally put paid to the mythology that permeated their industry. Today, although his method is still the one most specialists trust, audio mythology remains as strong as it ever was. Thanks to the Internet, some of the myths seem to have gained new believers, not least a vocal group who reckon Clark's methodology can't apply to something as subjective as audio appreciation.
Confirmation bias plagues the audio business. The brain is only too ready to accept that making a small change to an audio system – such as plugging in a new cable – results in a perceptible difference in performance.
James Johnston, consultant and former chief scientist at DTS, explained the problem at an Audio Myths panel at the Audio Engineering Society conference two years ago. He described an experiment he once ran: 'I put a big switch inside the box, but all it did was make a loud noise. It would just go 'clack-clack'. I labelled one side 'tube' and the other 'transistor'.'
As an additional prop for the experiment, Johnston rescued a valve amplifier from a bin: it worked just enough to make the valve glow. But that was all it had to do as it would never again be connected to any audio. 'When I ran the test, audiophiles almost unanimously liked the tube amp. Double-Es [electrical engineers] liked the transistor. One, who didn't have a preference, went up to the switch box, listened for a while then turned to me and said 'smartass'.'
The A/B/X approach developed by Clark was designed to correct the problem of the results meeting expectations when they should not. The concept is not very different from Johnston's setup. The subject sits in front of the machine and plays with three buttons – 'A' selects the first sound source; 'B' the second; and 'X' picks one at random. The first two buttons let the user become familiar with the sources. The 'X' applies the test: the subject has to work out which is which.
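The logic of the protocol is easy to capture in code. A minimal simulation (hypothetical and in Python – Clark's console was hardware) hides A or B behind 'X' at random on each trial, scores the listener's guesses, and asks how likely that score would be by pure chance:

```python
import random
from math import comb

def run_abx_trials(n_trials=16, p_correct=0.5, seed=1):
    """Simulate an A/B/X session: each trial hides A or B behind 'X'
    and the listener guesses. p_correct models real discrimination
    ability (0.5 = pure guessing, 1.0 = always hears the difference)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        x = rng.choice('AB')                 # hidden random assignment
        heard_it = rng.random() < p_correct  # did the listener really tell?
        guess = x if heard_it else ('B' if x == 'A' else 'A')
        hits += (guess == x)
    return hits

def binomial_p_value(hits, n):
    """Probability of scoring at least `hits` out of n by pure guessing."""
    return sum(comb(n, k) for k in range(hits, n + 1)) / 2 ** n
```

With 16 trials, a listener needs around 12 correct answers before chance becomes an unlikely explanation (p below 0.05) – which is why a handful of lucky guesses proves nothing.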
Ten years after Clark made his proposal, Stanley Lipshitz told an Audio Engineering Society convention: 'It is my experience that, in the spotlight of blind comparative listening, many subtle audible differences are heard for what they are – rather subtle – whereas without such controls they often become exaggerated.
'Specialist audio cables, for example, have become big business based on good advertising, good reviews and poor or non-existent science,' Lipshitz added.
Simply using an A/B/X console does not guarantee a fully blind test, however. In some experiments, the clunk that selected one source was louder: subjects suddenly became very good at telling the sources apart.
Although A/B/X tests need not have a time limit, one criticism levelled at them is that listeners are not necessarily equal. Familiarity counts for a lot.
Karlheinz Brandenburg of the Fraunhofer Institute pointed out at this year's Audio Engineering Society Convention in London that when, more than a century ago, audiences were first played a gramophone recording of a violinist from behind a curtain, they could not tell it apart from a live performer.
'We adapt to what we hear,' Brandenburg said, pointing to the problems encountered by researchers working on binaural audio – used to simulate the sound of rooms and chambers on headphones. Binaural processing attempts to model the effect of the head and ears on the incoming sound using a head-related transfer function (HRTF).
'People who worked on HRTF found it was getting better not because their work was better but their brain was improving.'
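Binaural rendering itself is conceptually simple: the mono source is filtered with an impulse response measured at each ear. A toy sketch in Python – the two 'HRIRs' here are made up, standing in for real measured responses:

```python
def convolve(signal, impulse):
    """Direct-form FIR convolution, written out for clarity."""
    out = [0.0] * (len(signal) + len(impulse) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse):
            out[i + j] += s * h
    return out

def binaural_render(mono, hrir_left, hrir_right):
    """Render a mono source for headphones by filtering it with the
    head-related impulse response (HRIR) measured at each ear."""
    return convolve(mono, hrir_left), convolve(mono, hrir_right)

# Made-up toy HRIRs: the right ear hears the source two samples later
# and attenuated, crudely mimicking a source off to the listener's left.
hrir_l = [1.0]
hrir_r = [0.0, 0.0, 0.6]
left, right = binaural_render([1.0, 0.5], hrir_l, hrir_r)
```

Real HRIRs run to hundreds of taps and differ from head to head – which is exactly why the brain has to adapt to any generic set.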
Brandenburg says science is still incomplete in terms of its understanding of how the brain processes sound. 'It's basic research that's needed for the next decade.'
Myth or high fidelity?
The audio world is full of myths and many of them have persisted for decades. Some have a grain of truth at their hearts; others seem to be utterly bizarre when considered from the perspective of what we know about physics. Can you tell which is which?
1 Biwiring improves the sound

By providing separate wiring posts for the woofer and the tweeter, many speaker manufacturers have helped make biwiring look more effective at improving audio quality than the evidence from listening tests suggests. In the 1990s, audio equipment reviewer Tom Nousaine ran A/B/X tests of biwired systems against conventional setups and found that listeners could not reliably tell the two apart.
The problem with the idea that biwiring can have a significant effect is that all it does is add some extra cabling: two low-resistance connections running in parallel. There is no electrical isolation introduced by biwiring – and electrons do not readily separate into those participating in high- and low-frequency signals simply because there is a choice of paths for them.
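Some rough numbers show the scale of the effect. Treating cable and speaker as a simple voltage divider – with assumed values of 0.05Ω per cable run and an 8Ω speaker – halving the cable resistance by running two cables in parallel changes the level at the speaker by only hundredths of a decibel:

```python
from math import log10

R_SPEAKER = 8.0   # nominal speaker impedance, ohms (assumed)
R_CABLE = 0.05    # resistance of one cable run, ohms (assumed)

def level_change_db(r_cable):
    """Level at the speaker relative to an ideal zero-ohm cable,
    treating cable and speaker as a voltage divider."""
    return 20 * log10(R_SPEAKER / (R_SPEAKER + r_cable))

single = level_change_db(R_CABLE)       # one cable run
biwired = level_change_db(R_CABLE / 2)  # two identical runs in parallel
improvement = biwired - single          # gain from biwiring, in dB
```

The improvement works out at roughly 0.03dB – an order of magnitude below the level difference listeners can reliably detect.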
In the tests they ran in the mid-1990s, Jonathan Scott and Glenn Leembruggen of Australia-based audio company Electroustics concluded that any effect – most likely only a subtle change in impedance – is so marginal it should be masked by other problems in the overall system, especially the speakers themselves, which have long been known to be the least faithful components in any audio chain.
Although biwiring has not been comprehensively dismissed – if only because there has been little additional work on it since the late-1990s – there is very little independent evidence for it making any perceptible difference.
2 Digital is digital is digital – or is it?
One myth that appeared within ten years of the CD's introduction was the idea that manufacturing imperfections could result in differences in sound. The theory was that small changes in the lengths of the pits that encode data on a CD would ripple through to the output in the form of digital jitter.
Jitter is a known problem for digital electronics designers – it's the subtle shift in the timing of clock edges from the ideal. Every circuit suffers from it not least because the source of the clock signal, usually a quartz crystal, suffers from jitter itself.
However, as Ian Dennis and Julian Dunn of Prism Sound pointed out in the late 1990s, when they co-authored a paper with Doug Carson of DCA, the design of the CD player means that any notional jitter from the disc's pressing variances should not make it to the output.
CD players buffer the data in memory before passing bits to the DAC, not least because the players perform error correction – the data on the disc is not raw audio samples but is redundantly coded to overcome errors introduced by dust and tiny scratches. The disc itself has no effect on jitter: the clock source is the crystal used for the output circuitry.
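That decoupling is easy to model: however unevenly samples arrive from the disc, the buffer releases them on the crystal's ticks. A toy sketch in Python, with an arbitrary illustrative jitter figure:

```python
import random

PERIOD = 1 / 44100   # one sample period at CD rate, in seconds

rng = random.Random(42)
# Samples leave the disc with up to +/-2 microseconds of timing error
# (an arbitrary figure chosen purely for illustration).
arrivals = [i * PERIOD + rng.uniform(-2e-6, 2e-6) for i in range(10)]

def reclock(arrival_times, period):
    """Toy model of the player's buffer: once the FIFO is primed,
    samples are emitted on the crystal's ticks, so disc-side timing
    errors never reach the DAC."""
    start = max(arrival_times)   # wait until the buffer is fully primed
    return [start + i * period for i in range(len(arrival_times))]

out = reclock(arrivals, PERIOD)
# The output intervals collapse to a single, jitter-free value.
out_intervals = {round(b - a, 12) for a, b in zip(out, out[1:])}
```

The disc's timing variation ends up as a small change in buffer occupancy, not as timing error at the converter.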
The team from DCA and Prism performed listening tests using custom-made CDs but found no measurable difference between the discs, suggesting that if jitter at the output has a cause at all, it lies in poor isolation between the drive and the output circuitry rather than in the disc itself.
In 1998, Eric Benjamin and Benjamin Gannon of Dolby Laboratories performed tests on jitter and found that it was far less problematic in practice than expected although cumulative jitter could, in principle, cause problems. That is why it is a concern in studios where lots of digital processors are chained in series. However, 'jitter' is still commonly blamed for all manner of hi-fi problems even if it is unlikely to be the cause.
3 Equipment must be burned in

There is some foundation for the idea that audio subsystems benefit from burn-in: leaving them to run for a while before actually using them. Parts with a mechanical function gradually loosen up over time and may be too stiff for optimum performance when brand new. Tests have indeed demonstrated that the reproduction of speakers subtly changes after a period of use.
Where things begin to get odd is the idea that burn-in is needed for electronic equipment, or that burning-in headphones and speakers demands the use of special test tones rather than music or radio programmes.
One possible source for the burn-in myth for electronics is that the components themselves will have gone through burn-in tests after manufacture. But these are not to settle parametric performance so much as to weed out components that are likely to fail. Electronic devices tend to fail either very early in their life or towards the end.
When it comes to mechanical subsystems, the myth that a dedicated burn-in process is needed is tough to disprove – there is very little work on how speakers react early in their life to regular programme material versus test tones.
Conversely, there is no evidence that simply listening to speakers from new harms their performance in any way. In any case, your brain will probably adjust to the sound of the speakers faster than any mechanical change occurs through use.
4 All comparisons are good
Perhaps the über-myth of audio is that controlled tests don't matter. When you compare systems the differences will be so obvious to the sonic connoisseur that performing some kind of controlled blind test will be fruitless. The trouble is that the human brain is very susceptible to changes in volume: louder usually sounds better.
Experiments have shown that people can hear the difference between sources where one is just 0.2dB louder than the other – and a trained mixing engineer can probably do better than that. The effect is one reason why the A&R staff at record labels insist on making masters as loud as possible – a track that sounds louder on the radio will often fare better with audiences.
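That 0.2dB figure corresponds to a surprisingly small change in signal amplitude, as a quick conversion shows:

```python
from math import log10

def db_to_amplitude_ratio(db):
    """Convert a level difference in decibels to a voltage/amplitude
    ratio, using the standard 20*log10 convention for amplitudes."""
    return 10 ** (db / 20)

ratio = db_to_amplitude_ratio(0.2)   # roughly a 2.3% amplitude change
```

An amplitude change of just over two per cent is enough to tilt a 'blind' comparison – without the listener ever registering it as a change in volume.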
Less experienced engineers are often tripped up when they make small EQ changes to an instrument. The Fletcher-Munson effect means that the subjective impression of different frequency ranges changes with loudness – it's the reason why 'loudness' buttons on home hi-fis simply turn up the bass and treble. The danger for anyone modifying their home audio system is that they don't carefully balance the levels of the music they use to test the system. It's not good enough to set volume levels by eye as different subsystems can be calibrated differently. The output level used for testing has to be set by ear to the same level for any test to work reliably.
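One objective way to balance sources before a comparison – an alternative to matching purely by ear – is to equalise their measured RMS levels. A sketch in Python, with made-up sample values:

```python
from math import sqrt, log10

def rms(samples):
    """Root-mean-square level of a block of samples."""
    return sqrt(sum(s * s for s in samples) / len(samples))

def gain_to_match(reference, test):
    """Multiplier that brings `test` to the same RMS level as `reference`."""
    return rms(reference) / rms(test)

ref = [0.5, -0.5, 0.5, -0.5]    # hypothetical reference playback
loud = [1.0, -1.0, 1.0, -1.0]   # the same signal, 6dB hotter
g = gain_to_match(ref, loud)    # attenuate before comparing
mismatch_db = 20 * log10(rms([s * g for s in loud]) / rms(ref))
```

After applying the gain, the residual level mismatch is zero – removing the 'louder sounds better' bias before any listening starts.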
5 It all went downhill since the analogue days
Long before anyone complained about the quality of MP3s, people claimed the playback of CD-standard sources – 16-bit samples at a rate of 44.1kHz – was responsible for a harsh, shallow, sterile sound.
Blind comparisons with master tapes did not reveal any appreciable difference. To engineer Arny Krueger, who helped develop one of the first off-the-shelf A/B/X testing machines, there is a good reason for this: tape has a signal-to-noise ratio equivalent to only about 13 bits. For vinyl, the figure drops to roughly 11 bits – and that's for a high-quality pressing.
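Those bit-equivalents follow from the standard formula for an ideal quantiser, SNR ≈ 6.02n + 1.76dB; Krueger's figures correspond roughly to these noise floors:

```python
def snr_db_from_bits(bits):
    """Theoretical SNR of an ideal n-bit quantiser: 6.02*n + 1.76 dB."""
    return 6.02 * bits + 1.76

cd = snr_db_from_bits(16)     # about 98 dB for CD audio
tape = snr_db_from_bits(13)   # about 80 dB: analogue master tape
vinyl = snr_db_from_bits(11)  # about 68 dB: a good vinyl pressing
```

On these numbers, the 16-bit CD has headroom to spare over either analogue medium it was accused of degrading.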
Claims persist that 44.1kHz is not enough, but it is hard to justify using 96kHz or 192kHz audio when you consider that most adults cannot hear as far as 16kHz. One possibility is that the auditory system is not quite like an A/D converter. Even though the brain cannot hear ultrasonic frequencies, it has evolved to locate sounds in space very well using subtle temporal cues, such as the relative delay of a sound as it hits each ear in turn.
In the 1970s, Karolinska Institute-based auditory-system researcher Jan Nordmark claimed differences down to the microsecond level in arrival time provided clues to help localise the sources of sound in space.
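The scale of those cues is easy to estimate. Assuming an ear-to-ear acoustic path of about 18cm, the largest interaural delay is around half a millisecond, while one sample period at 44.1kHz is about 23 microseconds:

```python
SPEED_OF_SOUND = 343.0   # metres per second, at room temperature
HEAD_WIDTH = 0.18        # assumed ear-to-ear acoustic path, in metres

# Worst-case interaural time difference, in microseconds.
max_itd_us = HEAD_WIDTH / SPEED_OF_SOUND * 1e6

# One sample period at the CD rate, in microseconds.
sample_period_us = 1 / 44100 * 1e6
```

This is not to say 44.1kHz caps timing resolution at 23µs – a band-limited system can represent delays far smaller than one sample period – but it shows why the debate centres on temporal cues rather than ultrasonic bandwidth.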
A group from the Boston Audio Society focused on the overall impression of high sample-rate music by interposing a CD-quality loop on one side of an A/B/X test. Subjects failed to consistently find a difference.
But comparisons made by Amandine Pras and Catherine Guastavino at McGill University found that some listeners could distinguish recordings of instruments made at 44.1kHz from those made at 88.2kHz – though only for certain orchestral recordings.
The Boston Audio Society members did notice during their study that SACD and DVD-A versions of recordings often sounded subjectively better than their CD equivalents. Talking to some of the engineers who took part in making the discs, they found that, because SACD buyers were more likely to value audio fidelity, those discs were spared the dynamic-range compression routinely applied to standard CDs.