
Speaker Cable Thickness Does Not Matter

Go to any store, ask any bunch of posters in a forum, read any hifi article off the web and they’ll tell you that the thicker the speaker cable, the better, because it has “lower resistance”.

In the 60s and 70s, when hifi cables were commonly made of wire too thin to be considered speaker cable, there was certainly some truth to this.

These days, even the bare minimum cable is thicker than 20 AWG, and believe it or not, speaker cable thickness makes almost NO difference to sound quantity (loudness), and absolutely ZERO difference to sound quality.

Why?

1. Wattage does not matter

Let's take a 20 AWG cable, a microbe in speaker cable terms these days. A typical copper 20 AWG cable has a resistance of approximately 1 ohm per 100 foot length (a fairly long run in domestic hifi).

[Figure: Typical resistance per 1000 feet]

A 1 ohm cable resistance in an 8 ohm system wastes about 11% of the power in the cable. In a 4 ohm system, it wastes about 20%.

Sounds like heaps? Well, considering that this is close to the worst case scenario, no. A 100 foot run of 20 AWG cable driving 4 ohm speakers will give you just 20% power loss. What's 20% power loss in real terms? Well, of course, you already know that wattage does not matter, right? And how little a percentage change in power affects sound levels? Even halving the power reduces the sound level by a mere 3 dB, or about 3% of the travel on a typical 1-to-100 log-scaled volume control.

And we're really talking worst case scenario. If you're using 17 AWG cable on a 15 foot run driving 8 ohm speakers (a more typical application), you'll end up with so little loss that it's not even worth the decimal places.
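
Here is a rough Python sketch of the arithmetic in point 1. It assumes the simple series model (cable resistance in front of a purely resistive nominal speaker load) and counts "loss" as the fraction of total power dissipated in the cable; the 1 ohm figure is the 20 AWG value quoted above.

```python
import math

def cable_loss(cable_ohms, speaker_ohms):
    """Fraction of power lost in the cable, plus the resulting drop in dB at the speaker."""
    lost_fraction = cable_ohms / (cable_ohms + speaker_ohms)
    loss_db = 10 * math.log10(1 - lost_fraction)
    return lost_fraction, loss_db

for load in (8, 4):
    frac, db = cable_loss(1.0, load)
    print(f"1 ohm cable into {load} ohm speakers: {frac:.0%} power lost ({db:.2f} dB)")

# 1 ohm into 8 ohms -> ~11% lost (about -0.5 dB)
# 1 ohm into 4 ohms -> ~20% lost (about -1.0 dB)
```

Even the "worst case" 20% loss is worth only about 1 dB at the speaker, which is the point of the paragraph above.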

2. Resistance causes zero loss in sound quality

This is far more important than the first point. Power and power delivery are easy to control. You can always turn up the volume, buy a more powerful amplifier, or buy more sensitive speakers. But what's the important factor that "beginner" enthusiasts miss?

The fact that sound quality is not affected by resistance.

This is the reason one 100W amplifier can cost $200 and another $200,000. Why aren't all high end amplifiers 1000W? Because they're made for accuracy, not power.

Similarly, a good speaker cable has qualities which colour the sound much less than a poor one.

Everyone who's taken high school science knows a little about resistance. V=IR, P=IV, etc. What you may not realise is that in the real world there is a whole other quantity, one that affects sound quality more dramatically than anything else: reactance.

Reactance, and the behaviour of reactance, is the real dark force in what determines a quality speaker cable. A cable with low resistance can still have high reactance and, as a result, not only be detrimental to the sound but also add to the impedance of the cable. What's impedance? We'll get to that later. Only reactance colours the sound, and reactance only appears above DC (the 0 Hz at which resistance is measured). Resistance only affects power: it simply makes everything a smaller version of itself. Reactance, on the other hand, attenuates and delays different frequencies by different amounts, changing the character of the signal rather than just its level. Its behaviour is also relatively unpredictable once you take into account the internal behaviour of the connected equipment.

3. Impedance is much more important than resistance

If you look carefully, the calculations in point 1 never even considered reactance (much like the rest of the world). But reactance has just as much effect on power delivery as resistance does. A speaker cable with near zero resistance could, in theory, be substantially detrimental to power output because of its reactance.

Impedance is the combination of the cable's reactance and resistance. Impedance is the "real" resistance seen when an AC signal passes through the cable, and because measurements for speaker cables are only ever quoted at DC, you never know what the impedance might be.
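
A minimal sketch of that point, assuming the cable behaves as a resistance in series with an inductance (the resistance and inductance values below are invented for illustration, not measurements of any real cable):

```python
import math

def impedance_magnitude(resistance_ohms, inductance_henries, freq_hz):
    """|Z| = sqrt(R^2 + X^2), with X = 2*pi*f*L for a (mostly inductive) cable."""
    reactance = 2 * math.pi * freq_hz * inductance_henries
    return math.hypot(resistance_ohms, reactance)

# Hypothetical cable: 0.05 ohm of resistance, 5 microhenries of inductance.
for f in (0, 1_000, 20_000):
    print(f"{f:>6} Hz: |Z| = {impedance_magnitude(0.05, 5e-6, f):.3f} ohm")
# At DC only the 0.05 ohm shows up; by 20 kHz the reactance dominates,
# which is exactly what a DC resistance figure hides.
```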

A speaker cable with tiny resistance can, if designed improperly or used incorrectly, be more detrimental to both power AND sound quality than one with much higher resistance. And don't think that any cables are immune to this. I've seen even "high end" cables passed off as quality cables when they've clearly not been made with impedance reduction in mind.

Conclusion

I admit, the industry is full of people selling ghosts as gold and chalk as cheese. But don't think that there's no truth to "quality speaker cable". A speaker cable isn't better simply because it's fatter. There are thousands of reasons why certain designs can, and do, make a better cable. And this is just the tip of the iceberg in the weird and real world of analogue electronics.


Do Interconnect and Speaker Cables Make a Difference?

Many people ask me this:

Are X cables better than Y cables? Or worse, what do you think of X branded cables compared with Y brand?

These are completely pointless questions.

It's a little deceptive to compare cables, even specific models of cables. Cables tend to be hugely reactive to the equipment to which they're connected. The reason cables impart changes to the sound is their impedance characteristics, but much more importantly, how their impedance interacts with the input/output impedances of the equipment they're connected to.

Ever heard of "impedance matching"? It's a term used more in high frequency industries (radio, comms, etc.) than in audio. When the input/output impedances of connected equipment are mismatched with that of the cable, funny things can happen. Reflections can occur, to the point where they easily create enough distortion to render the signal useless, even over very short lengths. This is why coax cables come in 75 ohm flavours. Now, you don't quite need to impedance match audio cables, but this is the sort of behaviour that can occur, albeit to a tiny degree, and it may affect the way and extent to which cables influence the sound.
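
To make the matching idea concrete, here is a hedged illustration using the standard transmission-line reflection coefficient. The load values are made-up examples, not measurements of any particular piece of equipment:

```python
def reflection_coefficient(z_load, z_line):
    """Gamma = (ZL - Z0) / (ZL + Z0); 0 means matched, 1 means total reflection."""
    return (z_load - z_line) / (z_load + z_line)

z_line = 75.0  # characteristic impedance of common video coax
for z_load in (75.0, 50.0, 300.0):
    gamma = reflection_coefficient(z_load, z_line)
    print(f"{z_load:>5.0f} ohm load on {z_line:.0f} ohm coax: |Gamma| = {abs(gamma):.2f}")
# A matched 75 ohm load reflects nothing; the further the mismatch,
# the larger the fraction of the signal bounced back down the cable.
```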

Comparing cables in this way is like comparing the babies of two equally intelligent and healthy sets of parents. Which will be better? Who knows. It depends on how they grow up and the environment in which they do so.

Finally, cable technology hasn't seen any significant improvements in the last 70 or so years. Cables are some of the simplest and most fundamental components in an electronic system, and no cable manufacturer can plausibly put their cable's superiority down to design alone. Material and build-quality differences are perfectly valid, but I find it hard to accept ripoff companies charging thousands of dollars for what is essentially a fancier looking piece of copper in some more colourful plastic.

Don’t fall for it.


Do quality HDMI cables make a difference?

After Ed’s post on whether digital is perfect, we received an impressive response from readers who voiced their own opinions about the subject. As requested by many readers, I’ve decided to put this theory into practice and explore the technology behind HDMI and the quality of HDMI cables.

DISCLAIMER: As any real engineer will understand, I cannot possibly fit the complete theory of digital data transmission into a blog article. Some parts are oversimplified so that a layperson can grasp the technical concepts easily.


Can errors exist in HDMI?

The first question we need to answer in this quest for truth is this: can errors exist in HDMI data transmission? And if so, what is the likelihood of this occurring? To demonstrate this, we'll need to take a step back and look at basic digital transmission theory. Much of this is explained in more detail in the digital lies post, but we'll go over the essentials again.

Digital Transmission Errors

This is how most people think digital transmission works:

Signal transmitted:

1, 0, 1

Signal received is either:

1, 0, 1

or

A random set of numbers (or nothing at all).

In reality, this is not true.

Firstly, signals aren’t just transmitted as 1s and 0s. 1s and 0s are the symbols we’re transmitting. 1s and 0s don’t travel down copper wire. However, electrical current does, and hence we represent the 1s and 0s with, say, voltage.

This is what a transmitted signal (and the corresponding bit it represents) looks like:

[Figure: Digital Signal]

Unfortunately, even the best cable in the world isn’t perfect. Chances are, there will be noise to some extent, and a good received signal may look like this:

[Figure: A Good Received Signal]

The received signal is in red, as compared with the original in grey. Notice how noise will have an effect on the signal received. (Interesting fact: we also note at this point that fundamentally everything is transmitted and received in analogue, because in the real world there is no such thing as digital. Digital is just a syntax of communication. Analogue is the medium.)

[Figure: Signal Decision]

In this case it's fairly easy to decode the original signal. We simply take a sample of the received signal in the middle of every bit period and check whether it's higher or lower than the mid-level threshold. We take a "peek" at the signal level at each of those vertical black lines, compare it with the threshold, and decide whether it is above or below. As we can see above, there would be no errors in this case.
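
A toy version of that sampling/decision scheme, with a made-up set of mid-bit samples standing in for the received waveform:

```python
def decide_bits(received_levels, threshold=0.5):
    """Return 1 where the mid-bit sample is above the threshold, else 0."""
    return [1 if level > threshold else 0 for level in received_levels]

# Mid-bit samples of a noisy received signal (nominally 1, 0, 1):
samples = [0.93, 0.12, 0.81]
print(decide_bits(samples))  # -> [1, 0, 1], recovered without error
```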

Unfortunately, cables are much less than perfect. As technology improves (HDTV, DVD, Blu-Ray, etc.), more data is required per time period. I.e. higher bandwidth. This means that we need to send bits faster.

Notice how the transmitted signal jumps instantly from 0 to 1, but the received signal takes a moment to make the same jump. This is because cables don't have infinite bandwidth. Now imagine the same bits being sent much faster (each bit lasting, say, only 1/1000th as long), through a cable slightly worse than the one above. What would the received bits look like?

[Figure: Poor Signal]

This may be something similar to what's received. The signal exhibits a lot of noise and distortion, and normally (if this were analogue) it would be nearly unusable. But because of the digital scheme of transmission, we can still decipher the signal without error, using the sampling and decision method above.

But what happens if a stray bit of electromagnetic radiation hits the cable at the 3rd bit? Whoops, we just got an error.

[Figure: Signal With Error]

The received bits would now be deciphered as 1, 0, 1, 1, 0, 1, 1.

EM radiation, power supplies, adjacent cables, even the earth's magnetic field all produce noise. And if a cable is poorly built, we can see even more errors at the receiver.

So now that we've established how errors can exist in digital transmission, let's see what happens when they occur in HDMI.

Digital Errors in HDMI

Fortunately, there are mechanisms outside of the transmission itself that reduce the impact of errors. In fact, when a bit error does occur, you will most likely not notice it happening. The effects of bit errors can be significantly reduced through digital coding mechanisms, as well as parity bits and error checking. I won't repeat how this works in detail here (see the digital lies post), but essentially the mechanism detects when a bit error occurs and substitutes a value that is a (usually close) guess at what the original probably was. These processes are called channel coding and interpolation.
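
As a minimal sketch of one of those ideas, here is a single even-parity bit in Python. Real HDMI and CD coding schemes are far more elaborate than this; the point is simply that one extra bit already lets the receiver notice that something went wrong:

```python
def add_parity(bits):
    """Append an even-parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def parity_ok(bits_with_parity):
    """True if the received word still has an even number of 1s."""
    return sum(bits_with_parity) % 2 == 0

word = add_parity([1, 0, 1, 1])   # -> [1, 0, 1, 1, 1]
print(parity_ok(word))            # True: no error detected

corrupted = word.copy()
corrupted[2] ^= 1                 # flip one bit in transit
print(parity_ok(corrupted))       # False: single-bit error detected
```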

“It’s digital, it’s either perfect or nothing at all”

This is, therefore, a blatant lie from those who do not understand the fundamentals of digital transmission.

Do HDMI cables exhibit errors?

So now that we've established that it could happen, let's have a look at whether it really would in real life. Remember from the above: a few bit errors in the millions of bits transmitted per second would not be obvious to the observer (by obvious, I mean "mosaic" style errors in the picture, clicks and pops in the sound, etc.).

To achieve obvious errors in HDMI, the signal must be distorted to such an extent that multiple bits per word are received in error (rendering parity bits useless). I.e., it needs a significant bit error rate for an extended period of time. Does this happen in real life?

Cable length

Cable length restrictions are a strong argument that HDMI can, and probably does, exhibit bit errors in real life. Most 2m cables perform satisfactorily. However, extend these cables to 5m+ and things start to go pear shaped: obvious errors start occurring. Yes, these are the multi-bit-per-word errors that can cause clicks and pops in the sound, and little squares in the picture. Now I'm not suggesting this is proof, but from my experience in digital transmission, if an increase from 2m to 5m can introduce obvious errors, it's a strong argument that smaller errors occur quite frequently.

Hell, even the HDMI parent organisation works to a standard of 10^-9 BER (bit error rate), i.e. one error per billion bits. At HDMI's frequency, that's one bit error every 6 seconds.
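
Here is the back-of-envelope version of that claim. The BER figure is the 10^-9 quoted above; the bit rates are assumptions for illustration, since actual HDMI rates depend on resolution, refresh rate and colour depth:

```python
def seconds_per_error(bit_rate_bps, ber):
    """Average time between bit errors for a given rate and BER."""
    return 1.0 / (bit_rate_bps * ber)

ber = 1e-9  # one error per billion bits
for rate in (165e6, 1.65e9, 4.95e9):
    print(f"{rate/1e6:>7.0f} Mbit/s at BER {ber:g}: "
          f"one error every {seconds_per_error(rate, ber):.1f} s")
# Depending on the assumed rate, that's anywhere from one error every few
# seconds to several per second -- hardly "perfect or nothing".
```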

HDMI is a one way protocol

“If HDMI cables can make a difference, why is there no high end ethernet cable?”

Because HDMI is a one-way protocol. The data travels in one direction and there is no response from the receiver, so the source has no way of resending data if it is corrupted. TCP/IP works by breaking data into packets and resending any packet that arrives corrupted. But to resend data, the sender must know that the original was received corrupted, and for that to happen, the receiver must be able to talk back to the sender. This can't happen in HDMI.

Manufacturing standards & material efficiency

The art of engineering is not to achieve perfection. It is, rather, to achieve efficiency. Approaching perfection requires an exponential increase in resources; a good engineer simply approaches it as closely as possible with what's available.

A cable manufacturer's goal is to sell as many cables as possible. To do this, they must offer the lowest price possible. To get the lowest price, they will save on as much material as they can. How much material can they save? What controls are in place to keep them honest? Well, nothing, actually. There is no enforced worldwide standard of HDMI cable manufacture. Although HDMI is licensed, there is no real control mechanism over the standard of manufacture. Cable makers are fairly free to do whatever they like.

As a result, it's possible that there are cables out there that don't even meet the standard. There can be cables which introduce hundreds of errors every second, yet the consumer is none the wiser (this is partly an advantage of HDMI). However, by no means is HDMI perfect.

Practical Considerations

The point of this article isn't to say that a $40 cable is better than a $10 cable. It's not even to say that people would care about, or notice, the difference when small errors occur. It might very well be impossible to notice them in a side by side comparison.

The point I'm simply trying to make is that it is not impossible for there to be a difference, and that people shouldn't believe whatever rubbish gets posted all over forums. Every day, people all over the internet talk about digital signals as though they're experienced cable designers. I'm not a cable designer. I'm not even a real expert in HDMI technology. Hell, what I've just said might be complete rubbish as well. But why take it as gospel without at least doing some research? Google a few terms that I've mentioned. Look them up on Wikipedia for 5 minutes.

Digital is far from perfect. Errors happen all the time, in every cable, in almost every instance. Should you spend $200 on a high end HDMI cable? Probably not. Once a digital cable reaches a certain quality (negligible error rates), it is nearly impossible to improve on it. However, I wouldn’t be at all surprised if a $5 cable made in China isn’t made to specification. And although it “works”, it doesn’t mean it’s error free.

Conclusion

There are people who may be happy with the cheapest cable on the net, much like there are people who are happy to listen to mp3. For me, personally, I’d fork out the extra $20 to buy a reasonably good set of cables to hook up my TV, knowing that my $4000 TV is getting its full use. Sure, the $5 version might be just as good. But for the extra $20, I’ll think of it as insurance.


Follow up

The following is taken from the FAQ section of HDMI.org:

“… It is not only the cable that factors into how long a cable can successfully carry an HDMI signal, the receiver chip inside the TV or projector also plays a major factor. Receiver chips that include a feature called “cable equalization” are able to compensate for weaker signals thereby extending the potential length of any cable that is used with that device.

With any long run of an HDMI cable, quality manufactured cables can play a significant role in successfully running HDMI over such longer distances.

As you can see, the performance of HDMI cables goes far beyond the simple “if it works, it works” statement.

… there may be instances where cables bearing the HDMI logo are available but have not been properly tested. … We recommend that consumers buy their cables from a reputable source and a company that is trusted.

I wouldn't be surprised if the majority of the cheapies on eBay aren't certified, or have dodgy certification. Hey, 99% of them might work great. But there's also a chance that plenty of them don't.


“High End” Hi-Fi Jargon Explained

DISCLAIMER:

We are by no means offering an opinion on whether any of these actually work. Some definitely do, some definitely don't, and some are untested, so we hold no opinion either way. However, the theory behind each "technology" is explained.

This is certainly not an exhaustive list. It’s not even an extensive list. We’ll be continuing to add to this post as we think of more.

Please feel free to contact us if you can think of more to add to the list!


Bi-Wiring

Bi-wiring speakers is the practice of running a separate set of cables to each of the speaker's driver sections (tweeter and woofer), as shown in the diagrams below.

[Figure: Conventional Wiring]

[Figure: Bi-Wiring]

As shown above, conventional wiring uses a single run of speaker cable to both drivers. Most speakers come with multiple wiring posts to allow for bi-wiring/bi-amping; however, they are usually shipped with a bridging bracket which connects the posts together for conventional wiring. This is removed for bi-wiring.

The argument for bi-wiring is, firstly, that the cable is effectively doubled up, and additionally that the bass signal will interfere less with the treble signal because the cable runs are separate (the cables can also be made differently to "suit" different frequency ranges).

The argument against bi-wiring is that it actually creates problems: the reactive properties of the two runs will differ further, causing increased misalignment in phase.

Bi-Amping

The bi-amped configuration uses two amplifiers per speaker, as shown below, following on from the diagrams above.

[Figure: Bi-Amped]

This means that the power available to the speaker is increased, and each amplifier has less work to do than if a single amplifier were used. The benefit here is not in the cables, but in the additional amplification power.

Single Crystal Copper (SCC)

As copper cools, it does not form one continuous block. It will usually form a block of conjoined crystals at the microscopic level. These crystals have varying properties and thus may conduct current differently. Additionally, any gaps between the crystals may exhibit unwanted electrical properties.

Single crystal copper is made in such a way that a piece of copper may have only a few crystals per metre. This makes it more homogeneous and thus a purer conductor.

SCC is not manufactured widely and hence tends to be rather expensive.

Smooth Surface Copper (SSC)

The smooth surface copper argument runs like this: although a strand of copper wire may have a reasonably similar diameter along a long length, the surface of standard drawn copper is quite rough, so the effective diameter of the conductor changes along its length at the microscopic level. A changing diameter means changing properties, and hence adds unwanted characteristics to the cable. Additionally, since the majority of the current travels near the surface of the conductor, the detrimental effects on conductivity are amplified.

Smooth surface copper is made in a way that ensures the surface is smooth throughout the length, reducing differences in behaviour from one part of the cable to the next.

Ultra High Purity Copper (UHPC or variant)

This one comes under a number of different names, all essentially meaning that the copper is purer than regular oxygen free copper (OFC), which tends to be around 99.95% pure.

Teflon Insulation

Teflon is not just slippery; it is an excellent insulator and can be used as the insulation in a cable. Its favourable dielectric properties (see below) reduce cable reactance.

Dielectrics

A dielectric is a material that does NOT conduct electricity but readily supports an electric field. Dielectrics are necessary not only to keep wires from shorting (as insulation); their electrical properties also shape how the cable behaves. As any good physics student knows, a current through a conductor produces a magnetic field around it, and a voltage between conductors produces an electric field in the insulation between them. Depending on the insulation material, these fields behave differently, and because they affect the cable's overall behaviour, a better dielectric reduces the cable's reactance.

Battery Power (Cables)

Some cable companies argue that dielectrics (and sometimes the conductors as well) have a memory and require "burn in". Some cables come with a battery which permanently runs a current through an auxiliary conductor built into the cable, to keep the "burn in" from fading when the cable is not in use.

Battery Power (Equipment)

Mains power is AC and needs rectification before use in almost all hi-fi applications. Rectification turns AC into DC, but achieving optimum noise performance in this conversion can be difficult, and noise on the DC rails will usually cause additional noise in the system.

Batteries have very little noise. Mains conversion is usually the most practical option for power hungry devices such as power amplifiers, but battery power can be used for less demanding units like pre-amps.

Cryogenic Treatment

Cryogenic treatment involves cooling a subject (usually steel) to extremely low temperatures. This improves many properties, including the strength and durability of the metal. Its use on copper is not proven and the theory of how it would help is thin, but some audiophiles believe it improves the sound.

Demagnetisation (Demagnetization)

Some audiophiles believe that as signals pass through a conductor, the conductor becomes "magnetised" and exhibits detrimental effects as a result. Demagnetisation CDs play a particular signal which supposedly "demagnetises" the conductors.

Gold / Silver / Rhodium Plating

Gold plating is commonly used on connector contacts because of its ability to resist corrosion. This means there are fewer artifacts on the surface of the conductor, leading to improved contact.

Silver is more conductive than copper or gold, and the argument is that a silver plated contact will also conduct better; however, silver tends to tarnish rather easily. When used to plate a copper strand, it is claimed to enhance conductivity, since much of the current in a cable tends to travel close to the surface.

Rhodium is a very rare metal, very hard and inert. Plating with it preserves the contact surface, resisting wear and tear throughout its life.

Skin Effect

The skin effect is a phenomenon seen in conductors carrying very high frequency signals: the majority of the current travels near the surface of the conductor rather than uniformly through its cross-section. There are many designs purported to alleviate this problem, including ribbon conductors, conductors of varying diameters, etc.
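
For the curious, the usual figure of merit here is the "skin depth", the depth at which current density falls to roughly 37% of its surface value. A minimal sketch for copper, using textbook values for resistivity and permeability (the frequencies are just examples):

```python
import math

def skin_depth_m(freq_hz, resistivity=1.68e-8, mu=4e-7 * math.pi):
    """Skin depth in metres: sqrt(rho / (pi * f * mu)), for copper by default."""
    return math.sqrt(resistivity / (math.pi * freq_hz * mu))

for f in (1_000, 20_000, 1_000_000):
    print(f"{f:>9} Hz: skin depth ~ {skin_depth_m(f)*1000:.2f} mm")
# Around 0.46 mm at 20 kHz in copper -- comparable to the radius of thin
# strands, which is why some designs use many fine or ribbon conductors.
```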

EMI (Electromagnetic Interference)

Interference occurs when electromagnetic radiation impinges on a cable, inducing variations in the current. Interference in cables is reduced by designs using shielding or crossed/twisted conductors.

RFI (Radio Frequency Interference)

EMI in the radio frequency spectrum is called RFI. RFI is common because of the amount of radio frequency energy in our airwaves.


This is an active list. We’ve started with mostly cable terms (mostly due to their wide availability) but we will continue to update this page as we think of more. If you can think of some, forward them to us!


The Real Issue Behind The iPhone 4 Antenna

When the iPhone 4 was first announced, I was pretty amazed at the design. I especially thought it was quite neat that the antenna was built into the frame of the phone.

Then they said that the frame was exposed, and that’s when my admiration stopped.

Seeing the launch, and the fact that the frame was uninsulated metal (hence requiring black bands on the sides to insulate the two antennas from each other), was a bit of a "hmmmm..." moment.

I didn't think much at the time about the ramifications. My area of work specialises in amplification and digital signal processing, not analogue transmission. The two thoughts that immediately came to mind were: one, it was going to have zero reception if the iPhone was in your pocket with keys touching the insulation band; and two, surely Apple would have extensively tested something as critical to performance as phone reception.

Alas, only my first thought was correct. At the time, I didn't consider the fact that your hands are fairly good conductors of electricity at these signal levels, and if you touch any antenna, even just one, you're likely to change its parameters slightly.

Moreover, you'd completely alter them if you shorted across a sliver of insulation between two metal strips which both act as antennas.

An antenna works on a number of key parameters, including gain, resonant frequency, impedance and bandwidth. The antennas on the iPhone have a certain set of these characteristics, attributable to their length, thickness, shape and material. Changing any of these physical features alters the electromagnetic characteristics of the antenna. (At the risk of showing my age: if you've ever used an indoor TV aerial, you'll have noticed that touching the metal bits changes the reception.) If you connect the two antennas together, nearly all of the above parameters change, not to mention the possibility of interference between the two signals.

But that's exactly what happens when you touch the black strip. Your fingers are reasonably good conductors, and since the strip is so thin, there is likely to be a significant electrical connection between the two metal parts, especially if your fingers are sweaty. Moisture on our skin acts as a great contact material due to its salt content. Plus, most of us who are alive tend to be full of water and salt, which makes us good conductors (this is why we can be electrocuted, and it's also how lie detectors work: they measure the resistance of your skin to gauge how much you're perspiring).

Users have reported that their reception drops when holding the phone in a particular way. Many online "experts" have falsely attributed this to a software problem, a bug, or a host of other issues, clearly spoken by those more at home on Facebook than in a lab. Let me tell you: it's a design flaw. It's a major, fundamental, "newbie" design flaw, and I can't believe a company like Apple could have failed to test it properly.

I've always been an admirer of Apple. I'm not a fan of their business model, but the way they have designed and tested their products hitherto has been the key to their success. Their triumph of form supporting function is what has set them apart from their competitors. Is this the start of their downfall? Probably not for now, but given their ridiculously intensive product release timelines, it's not surprising that these basic design problems are creeping in.

In any case, it brings me to my main point: Apple completely botched it. And it just goes to show that even big companies make totally trivial mistakes. Apple would have had the design in the works for a number of years, and probably tested it for months before release. Yet such a fundamental problem was spotted by a lousy electronics engineer who watched a 5 minute video of the product launch.


The Most Common Mistake In Hi-Fi: Gladwelling

A friend recently introduced me to a writer who has achieved worldwide fame and respect for a couple of his books, most notably The Tipping Point and Blink. I became quite interested in what he had to say after viewing a short clip of his views on the GFC, so I got myself a copy of Blink to read while commuting to and from work.

[Figure: Malcolm Gladwell - his thoughts are as outrageous as his hair.]

Malcolm Gladwell is a Canadian writer who lives in New York. His works are mostly short, "research" and "data" filled commentaries on many subjects, most commonly the social sciences. In Blink: The Power of Thinking Without Thinking, he explores the theory that we very rapidly make subconscious judgements and interpretations and use them to make decisions. His arguments are largely based on studies and tests done by others in fields such as marital interaction, social studies, human intuition and prejudice. If you are interested in this sort of thing, I'd suggest not purchasing his work. It's a silly and glorified case of absurdity, and most people should be smart enough to see through the author's naivety.

Gladwell's theories are formed from studies and evidence he has gathered. One example is particularly notable.

A group of so-called "scientists" did a study on gamblers. They placed four decks of cards, two blue and two red, on a table and asked subjects to turn cards over one at a time. Each card either won or lost the gambler some amount of money. The decks were rigged so that the blue cards were, on the whole, optimal for winnings, producing steady gains and modest penalties, while the red cards were a minefield: high rewards but higher losses. The team then measured how quickly each subject noticed what was going on, by monitoring the sweat glands in their skin. They found that the gamblers generally began to work the game out within about 50 cards. They also noticed that the gamblers started generating stress responses to the red cards by about the 10th card.

His conclusion was that the gamblers “figured the game out before they realised they had figured the game out…long before they were consciously aware of what (was occuring)”.

This is a classic example of over-simplification and over-assumption, something I'd like to term "Gladwelling". A completely plausible and much simpler explanation for these results is that the gamblers correctly sensed that the red cards carried more risk, that is, more variance. Higher risk produces more shock, nervousness and emotion, which explains the readings. The subjects' reactions may well have been a simple aversion to risk, rather than a recognition of a lower overall return.

What does this have to do with hi-fi?

Because this sort of thing happens all the time. People over-simplify the world despite the fact that there are clear reasons why we can't just compare two figures to determine the properties of two real world things. Why does a Japanese sports car cost a fifth of the price of an Italian thoroughbred, even though on paper they have nearly the same performance specs? Which idiot would buy a Leica camera if its specs were only as good as a Canon's or a Nikon's? People need to read between the lines.

The fact that people say "X branded amplifier is more conservatively rated than Y brand" is, in itself, proof that comparing specs is a silly exercise. If specs can be made more or less conservative, then their integrity is lost and, scientifically speaking, they are completely pointless. To me, a specification should be a standardised, strictly controlled way of measuring an extremely simple quantity. RMS power, for example, should be true RMS and never "short term". The whole idea that RMS could be "short term" is silly, since root mean square requires the signal to be evaluated over the longest possible period. Anything less is a compromise and completely eliminates any science behind the figure.

Another thing that bugs me is the way people draw conclusions from "evidence". I often find articles on the net, drawn from studies, claiming a whole range of ludicrous "relationships". For example, the other day I read a study which claimed that obesity causes depression. Sure, there are tests which indicate that obesity is linked to depression. But how does one conclude that it is obesity which causes depression and not the other way round? How does one rule out a third factor which may be causing both, for example a gene or a mental health issue?

Anyone who has a basic understanding of statistics and sampling would understand the concept of "data mining", and would know that even if a link is proven mathematically, there can be no straightforward cause-and-effect conclusion without extensive further investigation.

In hi-fi, this is all too common. We draw conclusions based on the tiniest, most meaningless figures. The world is too complex to be represented by a set of numbers on a page. Don't be a fool: question everything. Don't go Gladwelling your view of the world.


The Greatest Lie of the Digital Age

How often have you heard this?

“It’s digital, so it’s 1s and 0s. That means there can’t be any errors or distortion. You either get the signal perfectly or you don’t.”

For someone who knows little about electronic systems, this sounds perfectly logical and true. In reality it's a rather naïve way of thinking, and it completely oversimplifies the complexity of the world (and of physics). To show you why this statement is a lie, we need to go a little into electromagnetic physics. Don't worry, not too much, just enough to see how digital transmission, and in particular something called "channel coding" (and digital coding in general), works.

The Digital Signal

The digital signal is often described as an array of 1s and 0s. This is true in the logical sense: we represent the data as 1s and 0s. However, they're merely symbols; we could just as easily represent them as Xs and Ys, or apples and oranges. In the real world, the digital signal is encoded and transmitted as a sequence of alternating levels, usually a "high" and a "low". The signal is encoded to minimise the probability of error for a given channel at its signal power limit. With all signals, there is a chance of noise (because, as you would of course know, everything in the real world is essentially analogue, not digital).

Bandwidth

When we picture the transmission of a digital signal, we usually think of it like this:

[Figure: Theoretical Digital Signal]

This is true in theory, and for higher level applications such as computer programs, controllers, etc. this is enough. In real life, the same digital signal (especially during transmission) looks more like this:

[Figure: Real Life Digital Signal]

Why is this? Because unfortunately the real world isn't as simple as on and off, or high and low. Almost every communication channel (e.g. a cable, an optical system, a radio link) has a finite bandwidth. What is bandwidth? It's the maximum speed at which we can transmit data, i.e. there's a limit to how fast we can switch from 0 to 1 or 1 to 0. When we put in a signal with more bandwidth than the channel can handle, the signal will not come out the same shape. How much the shape changes depends on the nature of the system and the signal, but take it from me: when you put in a perfectly square digital signal like the first diagram, you're most likely to get a real life signal like the second.

The Bandwidth vs Cost Tradeoff

Engineering is often more about compromise than anything else. While we could build every network, cable and radio transmission mast with a huge amount of power to increase its bandwidth, it would not be efficient. Generally there's an acceptable number of errors we can tolerate for a particular signal. For example, most people would be satisfied with a TV signal that works 97% of the time for 97% of the population. Achieving the extra 3% of coverage may, by nature, cost double the investment in resources. Hence, it is generally not worth spending so much money to chase perfection.

A digital signal, therefore, is usually designed to be optimised in terms of resources used. A good designer of a product would attempt to use the least amount of resources to achieve a satisfactory outcome. So if we talk about bandwidth, it would mean that we would use the minimum bandwidth to still achieve signal recognition to a satisfactory level.

The Eye Diagram

Here's a concept that may interest those with a deeper understanding of probability and physics. When we pass a digital signal through a bandlimited channel, the signal is distorted, and the receiver must "guess" what the transmitted signal was. Let's use a numerical scale where -1 represents "0" and 1 represents "1". How does the receiver guess? Simple: if the received level is above 0, we assume the transmitted symbol was "1"; if it's any negative number, we assume the transmitted level was -1 and the symbol was "0".

So in this scheme, as long as no signal is distorted to the point where it crosses 0, we still get an error free transmission.

The eye diagram is a diagram in which many bit periods of the signal are overlaid on one another. We can then see the variation in signal level that occurs for each bit, and where the bits vary. The "eye" is the gap between the 1 and the 0, and it needs to stay open for the receiver to be able to tell whether a bit is a 1 or a 0.

[Figure: Eye Diagram]

Here we have an eye diagram. The most probable bit trajectories appear as the darkest areas. The eye is the pair of gaps formed between the overlaid bit signals. As the signal becomes more distorted, the spread of the traces increases (the probability of error increases) and the eye starts to "close". The point at which the eye is completely closed is the point where the digital signal has been distorted into pure noise: no useful information can be extracted.

Most systems are designed with eye diagrams similar to the one above. There's little chance of error, yet errors still occur. We can see a few traces getting rather close to the centre of the eye; these are the bits most likely to be received in error.

What Does All This Mean?

This means that any commercial product (i.e. a product designed to make money) will be designed to optimise cost as well as performance. If we talk about, say, an HDMI cable, we should expect any cable beyond a very short length to have a more-than-negligible probability of error. Generally speaking, with most digital signals there's always a chance of error. When an error occurs, it doesn't mean you get no signal at all; it just means there's an error. These could be single bit errors which remain visible and uncorrected, or they may be corrected, depending on the mechanism. Either way, errors exist, even though you may not know about them. Digital is far from perfect.

Going back to the HDMI cable, let's assume the cable is now quite long. This reduces its bandwidth, performance drops, and more errors can occur. A badly made HDMI cable therefore shows many more significant errors. (People say that digital signals are perfect, yet it's no secret that long HDMI cables can have problems. Why doesn't anyone question this?)

Channel Coding

Here's more proof that digital is not perfect. Channel coding is the study of the logic behind digital transmission: how do we best encode digital signals so that the transmission has the best probability of success while using the least resources?

Channel coding exists everywhere. Let's talk about one common place you might see it in action: your CDs. Most people think of CDs as the aforementioned "perfect" and "distortion free" medium. But think about it: tiny specks of dust, scratches and imperfections exist on every CD. The bits recorded on a CD are also tiny, and a laser has no chance of telling a speck of dust from data. This means that no CD will ever play without distortion at the bit level. I bet most people don't know that.

However, there's a good reason why you may not have known this. The people who designed the CD, Blu-ray, HDMI, etc. were pretty smart. They used a family of digital transmission methods called "channel coding". On a CD, this consists of interleaving bits of information, optimising the type of transmitted information, and using checking bits so that when errors do occur, there is a high chance that they are corrected and/or a close guess is substituted.

How does this work? There are a whole heap of methods and I won't even scratch the surface, but let me demonstrate one, very basically. Take a number we need to transmit over a channel, say 17. In binary, 17 is 10001. Now assume a one bit error occurs. Because there are 5 digits, there are 5 possible corrupted values: 10000, 10011, 10101, 11001, 00001. Notice what these equal: 16, 19, 21, 25, 1. A single bit error could result in a slight distortion (16 instead of 17) or a huge one (1 instead of 17). In fact, when transmitting numbers this way, the possible magnitude of the distortion grows with the number of digits transmitted; with a 256 bit number, one incorrect bit could produce a massive error.

The engineers behind this technology realised this and decided it was better to encode nearby values with nearby bit patterns (read about Hamming distance). They formulated bit sequences to represent the transmitted data such that the sequences differ in the same way that the data differs. For example, let's arbitrarily represent 16 as 1011, 17 as 1001 and 18 as 0001. Notice that the difference between 17 and either 16 or 18 is one bit, and the difference between 16 and 18 is two bits: the same differences as between the numbers themselves.
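
A quick check of both examples above in Python: flipping each single bit of 17 (10001) to see where the corrupted value lands, then measuring the Hamming distances of the illustrative re-encoding (16 as 1011, 17 as 1001, 18 as 0001):

```python
def single_bit_flips(value, width):
    """All values reachable from `value` by flipping exactly one bit."""
    return [value ^ (1 << i) for i in range(width)]

print(sorted(single_bit_flips(17, 5)))   # [1, 16, 19, 21, 25]

def hamming(a, b):
    """Number of bit positions in which two code words differ."""
    return bin(a ^ b).count("1")

codes = {16: 0b1011, 17: 0b1001, 18: 0b0001}
print(hamming(codes[16], codes[17]))     # 1
print(hamming(codes[17], codes[18]))     # 1
print(hamming(codes[16], codes[18]))     # 2 -- distances mirror 16/17/18
```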

What does this mean?

This means that when a speck of dust corrupts a bit on a CD, the CD might read 25356 instead of 25355. During a track you will probably not be able to hear this, but the distortion occurs much as analogue distortion does. Digital simply means there is less chance of it happening.

Digital is undoubtedly much better than analogue in many ways, and without it we would not live in the world we do today. However, the technology behind digital is much more analogue than you think. Digital is far from perfect; don't assume that just because something is digital, it is distortion free.


Loudspeaker Specifications Explained

DISCLAIMER

Yes, some of these terms are a little simplified, but this is probably the right middle ground between being too detailed and being practical.


Introduction

Loudspeaker specs come in all forms and mean a whole range of things. This is a list of the most important and most common ones you should know. Remember that no spec by itself defines a loudspeaker, and no spec has the property of "the bigger/smaller the better" (this applies to more than just loudspeakers and hi-fi). Most of these details are simply properties of the design.

Anyway. Ever wondered what those fancy terms mean? Well, here’s a basic explanation.

Impedance

Impedance is an electrical attribute of a loudspeaker. In layman's terms, it's how much the loudspeaker resists the current output from the amplifier. It is not simply resistance: it also incorporates a frequency-dependent component called reactance.

Speakers typically come in 4, 6 or 8 ohms. The quoted impedance really means the minimum impedance, which occurs where the loudspeaker demands the most power. All other factors being equal, the lower the impedance, the more power the speaker demands. You must match the capabilities of your amp to the requirements of the loudspeaker. Modern amplifiers can usually handle speakers down to 4 ohms, but some are only rated to 6 or 8 ohms. You can always use a speaker of higher impedance than the amp's minimum rated impedance, but not vice versa: if you purchase 4 ohm speakers, you shouldn't use an amp rated for a minimum of 8 ohms.

Like many attributes, impedance is a factor of the design, and the way a loudspeaker's impedance is designed affects its sensitivity. The lower the impedance, the more current the speaker draws from the amplifier for a given output voltage, and therefore the more sound per unit of amplifier output. This may sound like an advantage, but it isn't necessarily a good thing. A higher impedance means less sensitivity, but if the impedance is too low, the amp has to work extra hard to cope with the loudspeaker's demands, which can be detrimental to sound quality.

Remember, the impedance of the speaker is by no means a measure of quality. Both speakers and amps of all price ranges come in various impedance configurations. However you do need to make sure that the amp is suitable for your loudspeaker.

Power

For most beginner hi-fi consumers, the power rating of a loudspeaker is probably the most looked-at, yet least relevant, specification. It is a largely pointless figure which realistically doesn't mean much for the average consumer. "Power" is just the maximum power a loudspeaker can theoretically handle without damage. In reality, loudspeakers rarely fail due to an oversupply of power. You can read more about why power doesn't matter here.

Distortion

Distortion typically means THD, which stands for Total Harmonic Distortion. Harmonic distortion is distortion that occurs at the harmonics of the input signal, and it is typically the most significant contributor to total distortion; it arises because the energy conversion is not perfectly linear. At first glance, THD sounds like the only factor to consider for sound accuracy: a simple expression of the percentage of the sound which is distorted. However, THD can be measured in many different ways (power or amplitude, band-limited or "white"). Adding to this, the distortion can occur in very different forms, particularly in the order of the harmonics and the range of frequencies where it falls. Research has shown that some orders of distortion are much more audible than others, and distortion at frequencies away from those our ears are most sensitive to (~1 kHz) is far less evident to the listener. This means that 1% THD could "sound" much less distorted than 0.01% if the distortion occurred "favourably". This is generally accepted as most evident in tube amps, whose measured THD is typically more than 100 times greater than that of a solid state equivalent.
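
As a hedged sketch of what a basic THD figure actually measures (the RMS sum of the harmonic amplitudes relative to the fundamental), here is a tiny example. The amplitudes are invented; as the paragraph above notes, real measurements also depend on bandwidth and weighting, and the single number says nothing about where the distortion falls:

```python
import math

def thd_percent(fundamental, harmonics):
    """THD = sqrt(sum of squared harmonic amplitudes) / fundamental * 100."""
    return 100 * math.sqrt(sum(h * h for h in harmonics)) / fundamental

# Hypothetical 1 kHz test tone: fundamental amplitude 1.0,
# 2nd/3rd/4th harmonic amplitudes as listed.
print(f"{thd_percent(1.0, [0.008, 0.004, 0.002]):.2f}% THD")  # ~0.92%
```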

Sensitivity

People who want "high wattage" speakers and want their speakers "loud" should look at this as the most important factor. Sensitivity is a measure of how efficiently a loudspeaker converts electrical energy into sound, typically expressed in dB/m/W. Because it's expressed in dB, every 3 dB of extra sensitivity means the speaker is twice as efficient. Given that most speakers range between 86 and 92 dB (and some a few dB beyond), there can easily be a four-fold or greater difference in sensitivity. This is a factor that could make a 10W amp sound like an 80W one, or vice versa. Ironically, it's usually overlooked, or simply misunderstood, by those wanting "loud" speakers.
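
A small sketch of that arithmetic: how much amplifier power speakers of different sensitivity need to reach the same loudness. The speaker figures below are hypothetical examples:

```python
def power_for_spl(target_spl_db, sensitivity_db_w_m):
    """Watts needed to hit a target SPL at 1 m, given sensitivity in dB per watt at 1 m."""
    return 10 ** ((target_spl_db - sensitivity_db_w_m) / 10)

for sens in (86, 89, 92):
    print(f"{sens} dB speaker: {power_for_spl(98, sens):.1f} W for 98 dB SPL")
# 86 dB -> ~16 W, 89 dB -> ~8 W, 92 dB -> ~4 W: each 3 dB of sensitivity
# halves the power needed, so a 6 dB spread is already a 4-fold difference.
```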

Frequency response

In engineering, frequency response is generally defined as the range of frequencies over which a signal's magnitude remains constant. For loudspeakers, it's the range of frequencies over which the sound remains equally loud. Even in theory this definition isn't applied stringently, as "constant" allows a 3 dB droop at the edges, and in real life no two adjacent frequencies will ever have exactly the same response.

This is another spec which could be made quite useful, but it is often not a tested fact, nor does it follow any standard strict enough to be very helpful. A proper and honest test would quote frequency response as the bandwidth (the range of frequencies) within which the output does not fall more than 3 dB below the peak. By that definition, a poorly made speaker may not even have an acceptable frequency response, because of large fluctuations across its core operating frequencies. I've seen many a frequency response plot from reputable brands with 3+ dB fluctuations, yet the response is quoted as flat out to the lower and upper roll-off frequencies. (Have a look at an example of a frequency response from a "reputable" manufacturer below. Believe it or not, this is the frequency response of their $10k flagship speaker. You can clearly see more than 3 dB of variance in more than one place.)

[Figure: Frequency Response of a High End Loudspeaker From a Major Brand]

The main danger, however, in comparing frequency response figures is the magnitude of error that even a slight "fudging" can generate. Extending the cut-off point by an extra few dB can completely change the quoted figures, and without the actual plot at hand you would never know what the number means. These factors render the figure fairly useless.

Even if we assume the frequency response figure is accurate, it ignores the entire other half of reproduction fidelity: phase. Frequency response deals only with amplitude. Our ears treat phase with as much or more importance than amplitude when it comes to sound "realism". In a live music venue, amplitude is easily and significantly affected by things like seating position, reflections off walls and interference from objects, yet we rarely notice this as detrimental to the experience. Phase errors, on the other hand, corrupt our perception of direction, and that is what makes a live experience sound like a recorded one. While it is nearly impossible for a loudspeaker manufacturer to provide meaningful phase figures, it needs to be noted that frequency response is only a small fraction of the complete story.

Baffles

"Baffle" refers to what sits either side of the diaphragm (the cone of the loudspeaker). When the loudspeaker was first invented, it stood alone and effectively had an infinite mass of air on either side (this is called an "open" or "infinite" baffle). As the technology improved, people realised that higher gain (increased sensitivity), as well as better frequency response, could be achieved by enclosing one or more sides of the diaphragm in a fixed volume of air, that is, in a box.

Additionally, two or more diaphragms could be put in the same box to work together to improve the response. And adding even more complexity is the use of pipes or “ports” to make ported loudspeakers, which have two resonant frequencies. This helps to extend response range, typically in the lower frequencies.

There are hundreds of ways of designing the basic structure of a loudspeaker. It would take a book or two to explain any of them in detail, but I will briefly describe the most common terms.

Sealed Enclosures

Sealed enclosures are the simplest and among the most common: just a diaphragm and a box. The air is fully enclosed and cannot escape, forming a spring for the diaphragm to "bounce" off. Simple doesn't mean cheap, though; sealed enclosures are used in everything from the most basic to some of the most expensive models.

Ported Enclosures

A ported loudspeaker is one with a hole somewhere in the enclosure. The hole is not just an opening, but usually a circular pipe. The pipe introduces another resonance which can help extend the frequency range, so a ported box has the properties of a sealed enclosure plus some extra range, though sometimes at the expense of clarity. Ported speakers are the most commonly made.

2 Way Speakers

2 way means that there are two groups of drivers working together to reproduce the full sound. Typically, this means 1 tweeter and 1 woofer, or 1 tweeter and 2 woofers.

3 Way Speakers

Same as above, but one extra group, usually 1 x tweeter, 1 x mid-woofer and 1 x woofer.

Crossovers

How the drivers work together is determined by the speaker's internal electronics. A sound signal contains many frequencies, but it is nearly impossible to design a single driver which handles all frequencies well, so each driver is given a range of frequencies, as described above: a woofer, for example, handles the mid to low range, and a tweeter the mid to high range. The electronics which divide up the signal and control the transition across the frequency boundaries are called the crossover.
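
As a minimal sketch of the idea, here are the cut-off frequencies of a simple first-order (6 dB/octave) crossover, assuming each driver behaves like its nominal impedance. That is a simplification (real crossover design works with the driver's actual impedance curve), and the component values are hypothetical:

```python
import math

def highpass_fc(r_ohms, c_farads):
    """Cut-off of a series capacitor feeding a tweeter: f = 1 / (2*pi*R*C)."""
    return 1 / (2 * math.pi * r_ohms * c_farads)

def lowpass_fc(r_ohms, l_henries):
    """Cut-off of a series inductor feeding a woofer: f = R / (2*pi*L)."""
    return r_ohms / (2 * math.pi * l_henries)

# Hypothetical 8 ohm drivers:
print(f"Tweeter high-pass: {highpass_fc(8, 6.8e-6):.0f} Hz")   # ~2.9 kHz
print(f"Woofer low-pass:   {lowpass_fc(8, 0.45e-3):.0f} Hz")   # ~2.8 kHz
```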

And there you have it! A brief overview on the most common loudspeaker specifications. We’ll keep adding to this page as more questions come about. To ask us a question, click to contact us.


Why Wattage Does Not Matter

DISCLAIMER

Yes, I know, some of the technical issues here may be overly simplified. But that's not the point. I'm just trying to teach beginners and the common bloke how to avoid being lured in by marketing hype.


Some of the most commonly asked questions from consumer hi-fi purchasers go like this:

“Is X watts enough?”

“I have Y watt speakers, how thick of a cable do I need?”

"This amp is 150W, so I'm going to be getting more watts from this 170W one, right?"

Wattage, or the rated maximum power of a hifi component is probably the last thing you’d want to consider when purchasing equipment. Only newbies ask about wattage, much like only newbies compare megapixels in camera gear. Hi-fi doesn’t work the same way as power tools, and even power tools don’t exactly go by a watt-per-dollar comparison. There are good reasons why 40W amps sell for thousands of dollars and 200W amps for $50. Hint: it’s not the wattage.

Speakers

Without going into the really technical bits, let's assume a simple scenario: you've just bought an amplifier and you want to buy some speakers to go with it. Most of us hifi consumers, being males, believe that bigger is better. We often get conned by slick marketing into thinking that a 200W speaker must surely be better than a 100W one, and that if you've just purchased a 150W amp, then the speakers had better be rated 150W or more, otherwise they won't be able to handle the power.

The rated power of a speaker is simply the maximum power it is claimed to be able to handle without damage. So even by layman’s logic, you could hook up a 200W amp to a 100W pair of speakers, have the volume up to half way, and all will be fine. (In actual fact, this is even less of a concern, but I will explain later.)

The Log (Exponential) Scale

Before we continue, I'm going to explain the single most important concept in the watt myth: the log scale. The log scale is a mathematical scale based on the logarithm; its inverse is the exponential, or the power (not power in watts, but a power as in 10^2 = 100). For simplicity, and for the purposes of this article, I'm going to talk about these interchangeably, as they are just the two "directions" for getting from one number to another. We are normally accustomed to the linear scale: 1, 2, 3 ... 10, etc., where the difference in magnitude as we progress upwards is a constant number. The exponential scale differs in that each step is a multiple of the number before (and the opposite is the log, where we count in the nth root of a number). In other words, as we count upwards we multiply the previous number by something, for example: 10, 100, 1000, or 2, 4, 8, 16. As you can see, the magnitude of the difference is "increasingly increased".

A rough graphical indication of this relationship is as follows:

Exponential vs Logarithm

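To make that concrete, here is a tiny numerical sketch (just an illustration of the scales themselves, nothing hi-fi specific): an exponential sequence grows by a constant multiple, and taking its logarithm turns it back into the familiar, evenly spaced linear sequence.

    from math import log10

    linear      = [1, 2, 3, 4, 5]
    exponential = [10 ** n for n in linear]        # 10, 100, 1000, 10000, 100000
    back_again  = [log10(x) for x in exponential]  # 1.0, 2.0, 3.0, 4.0, 5.0

    print(exponential)   # the gaps grow ever larger
    print(back_again)    # taking the log restores the evenly spaced scale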

The Decibel

The decibel (dB) is a unit of measurement based on the log scale above. It is used across many fields of engineering because quantities in nature often behave logarithmically rather than linearly, and human perception of sound (and vision, for that matter) is one of them. Most of us are born with ears that can detect anything from a tiny fraction of a watt right up to hundreds or even thousands of watts (though the latter is not recommended). Because of that huge range, our ears respond not to the absolute amount of energy reaching them, but to the ratio. That is, 2 watts sounds a little louder than 1 watt, and 4 watts sounds louder than 2 watts, but only by the same amount that 2 watts sounded louder than 1 watt. So to measure what we hear, rather than the absolute value, it makes sense to use a log scale. As a guide, every 3dB is approximately double the power.
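As a quick sanity check on the 3dB rule, here is the standard decibel formula for power ratios as a back-of-the-envelope sketch:

    from math import log10

    def db(power_ratio):
        # decibels for a ratio of two powers: 10 * log10(P2 / P1)
        return 10 * log10(power_ratio)

    print(db(2))     # doubling the power  -> about 3.01 dB
    print(db(4))     # quadrupling it      -> about 6.02 dB
    print(db(10))    # ten times the power -> 10 dB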

By now, you can probably see where I am going.

Frequency Response


Here’s a typical frequency response graph. Notice that the SPL axis looks linear while the frequency axis is on a log scale. But SPL in dB is itself a log measure of power, so every equal step up that vertical axis represents a multiplication of power. Imagine the vertical axis relabelled in watts and think about the implication: extra power doesn’t make much audible difference unless it is increasing exponentially.

Sensitivity

The purpose of a loudspeaker is to convert electrical power into sound. Ironically, most consumers don’t even look at one of the most important factors here: sensitivity. Sensitivity is a measure of how efficiently the loudspeaker does that job. It is measured in dB per watt of input at a set distance from the front of the speaker, usually 1m (hence dB/W/m). In other words, it is how loud the speaker is, one metre in front of it, for every watt you put in. So if you’re comparing two speakers of exactly the same type and one is rated 3dB more sensitive, that speaker is twice as efficient as the other, which means it needs only half the power to reach the same loudness. Most commercial hi-fi speakers sit between about 86 and 92dB/W/m in sensitivity, and some fall a few dB outside that range. Now the difference becomes apparent: some speakers can be 8 or more times more efficient than others!
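To see what that spread means in amplifier terms, here is a rough sketch. The sensitivities and target level are chosen purely for illustration, and the calculation ignores listening distance, room gain and the second speaker:

    def watts_needed(target_spl_db, sensitivity_db_w_m):
        # power in watts to reach a target SPL at 1m, given sensitivity in dB/W/m
        return 10 ** ((target_spl_db - sensitivity_db_w_m) / 10)

    # two hypothetical speakers, both aiming for a fairly loud 95 dB at 1m
    print(watts_needed(95, 86))   # about 7.9 W for an 86 dB/W/m speaker
    print(watts_needed(95, 92))   # about 2.0 W for a 92 dB/W/m speaker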

Volume

Most people don’t realise this, but for all practical purposes, even if speaker power ratings were relevant, you’d never use their maximum power. Again, the dB scale applies here. The volume control on most amplifiers is logarithmic. Assume the volume ranges from 0 to 100% and you have a 100W amp. Because the scale is logarithmic, each additional increment at low volumes adds only a tiny bit of extra power. If we assume that at maximum power you get 100dB of sound, then every +1 step of volume gives you roughly a 1dB increase. So in theory, at 97% volume you’ll only be getting 50W of power; at 94%, just 25W; and so on. In theory, at 50% you’ll be using only 1mW of power (a thousandth of a watt). In real life it’s probably less extreme, because manufacturers skew the volume taper and limit how much extra power the top end of the dial can add, to help prevent equipment damage. Nevertheless, at 50% volume (the “real” maximum for everyday use, for reasons discussed later), you’ll be using a small fraction of your amp’s maximum rated power.
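Here is the same arithmetic as a sketch, under this article’s simplifying assumption of roughly 1dB per 1% volume step on a 100W amp (real volume controls and amps vary a lot):

    def power_at_volume(volume_percent, max_watts=100.0, db_per_step=1.0):
        # assumes a log-taper control losing ~1 dB per 1% step below maximum
        steps_below_max = 100 - volume_percent
        return max_watts * 10 ** (-db_per_step * steps_below_max / 10)

    for v in (100, 97, 94, 50):
        print(v, "% ->", round(power_at_volume(v), 4), "W")
    # 100% -> 100 W, 97% -> ~50 W, 94% -> ~25 W, 50% -> 0.001 W (1 mW)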

But I want loud loudspeakers!

For those of us who are happy to spend a modest amount of money on hi-fi, you’ll have a choice: go for quality or quantity. For a couple of thousand dollars you can buy speakers and an amp that sound accurate and are comfortably classed as “audiophile grade”, but they won’t produce the thunderous (read: muddy) bass or thrilling theatre sound that, although loud, is a poor representation of good sound. I much prefer accuracy over quantity, but hey, each to their own. If you want quantity, make sure the amp is good and the speakers can take a beating.

Use your ears

As I say every time I get asked, this is the factor that really matters. In the end, it’s all about what you hear, and the only way to know is to use your ears. Go to a store and listen to a few speakers to get a feel for which brand or type you like the sound of. Make sure you do it under controlled conditions: compare speakers using the same amp, source, room, volume, cables and so on.

While comparing power ratings directly between two amplifiers is a little futile, comparing them between two amps of the same brand and model “series” can give you an indication of their relative output (although you could probably judge this by the model ranking anyway). There are other factors the more “audiophile” or “expert” consumers would consider benefits in the power department: amplifier class, transformer size and design purpose, to name a few. Funnily enough, these “specs” are seldom included (or honest) in spec sheets.

Set up the right equipment for your room

As a general guide, what is infinitely more important than most of these factors is getting the right equipment for your room. There are many room shapes and surface materials that change how sound behaves, but matching your equipment to the room’s size is one of the most important decisions you’ll make. As a room gets bigger, its volume grows much faster than its dimensions. Stretch a 4x4m room by 1m on each side (a 25% increase in length and width) and the floor area, and hence the volume at the same ceiling height, goes up by more than 50% (25m² versus 16m²), because the extra metre applies in both directions. Sound has to fill the whole space, so the larger the volume, the more output you need to reach the same sound level. Buy an amp with enough grunt for your room. Buy speakers to suit the amp.
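A quick back-of-the-envelope check on that arithmetic, assuming a square floor plan and a fixed ceiling height:

    small_side, big_side = 4.0, 5.0   # room side lengths, in metres
    small_area = small_side ** 2      # 16 m^2
    big_area   = big_side ** 2        # 25 m^2

    print(big_area / small_area)      # 1.5625 -> over 56% more floor area
                                      # (and volume, at the same ceiling height)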

Aim for more amp power

“I have a 200W amp, I’ll need a 200W pair of speakers.”

Speaker power is a completely useless figure. Speakers don’t fry because the amp gives them too much power; they fry when the amp clips. Amplifiers clip when they are pushed beyond what they can output. A good amp shouldn’t clip into its rated load at full power; a poor amp will clip when the speaker load demands more than it can give. Speaker load, by the way, has absolutely nothing to do with power handling. Buy a good amp, and don’t worry if your speakers are rated at 70W while your amp is 200W. They’ll never fry unless you push far beyond ordinary limits.
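To picture what clipping does, here is a tiny sketch (pure illustration, not a model of any particular amp): a sine wave asked to swing beyond the amp’s limit gets its peaks sheared flat, and that distorted, squared-off signal is what does the damage.

    import math

    rail  = 1.0    # the most the amp can swing (arbitrary units)
    drive = 1.5    # the signal being asked of it - 50% beyond its limit

    signal  = [drive * math.sin(2 * math.pi * t / 20) for t in range(20)]
    clipped = [max(-rail, min(rail, s)) for s in signal]   # peaks sheared flat

    print(round(max(signal), 2), round(max(clipped), 2))   # 1.5 vs 1.0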

If you want party speakers…buy party speakers

Some people are so obsessed with having a loud hi-fi that they overdrive it, and inevitably either damage their equipment or their hearing, or both. This is not hi-fi. Hi-fi stands for high fidelity, meaning a high level of faithfulness to the original sound. Think about it: if you want everything overly loud, that can’t happen. If you want speakers to shake your neighbour’s knickers, buy party speakers. Not only are they cheaper, they’re nearly impossible to damage no matter how hard you drive them.

Manufacturers are liars

All of what I’ve said so far barely even matters in the real world, because of one simple fact: manufacturers are liars. Manufacturers have little holding them to what they put on spec sheets. “Power” could mean PMPO, short-term RMS, long-term RMS or anything in between; they can print a number and call it whatever they want. There is no universally enforced standard for how power must be measured, and power can be measured in a thousand different ways, each with a completely different outcome. A well-trained technician can make “10,000W” appear out of a 10W amp, and a guy in marketing can easily “mistake” PMPO for RMS. What incentive does any manufacturer have to be truthful on a spec sheet that is pure marketing? Add to this the fact that some reputable manufacturers are honest while others are not, and comparing power numbers becomes like siding with whoever shouts the loudest.

Power doesn’t matter

It’s pretty simple: power doesn’t matter. Look at power if you want to feel like you’re buying something powerful, or if you want to brag about the label to your friends, but it’s no indication of the quality of the product you’re buying. If you want good sound, or even truly powerful sound, use your ears. This sort of spec fudging doesn’t just happen in hi-fi; it happens in just about every industry that targets the (usually male) ego. They want you to feel like you’ve bought a million watts of fearsome power. Whether it amounts to anything, or is of any use, is a completely different matter.
