The Greatest Lie of the Digital Age


How often have you heard this?

“It’s digital, so it’s 1s and 0s. That means there can’t be any errors or distortion. You either get the signal perfectly or you don’t.”

To someone who knows little about electronic systems, this sounds perfectly logical and true. In reality it's a rather naïve way of thinking, and it completely oversimplifies the complexity of the world (and of physics). To show you why this statement is a lie, we need to dip slightly into electromagnetic physics. Don't worry, not too much, just enough to see how digital transmission works, and in particular a thing called "channel coding" (and digital coding in general).

The Digital Signal

The digital signal is often described as an array of 1s and 0s. This is true in the logical sense: we represent it with 1s and 0s. However, these are merely symbols. We could just as easily represent them as Xs and Ys, or apples and oranges. In the real world, the digital signal is encoded and transmitted as a sequence of alternating physical levels, usually a "high" and a "low". The signal is encoded to minimise the probability of error for a given channel at its signal power limit. And with any signal there is a chance of noise (because, as you would of course know, everything in the real world is essentially analogue, not digital).
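
To make the "just symbols" point concrete, here is a minimal sketch (plain Python, with illustrative names of my own) that maps logical bits onto bipolar high/low levels, similar in spirit to a simple NRZ line code:

```python
# Bits are only symbols; on the wire they become physical levels.
# Illustrative mapping (my own choice): bit 1 -> +1.0 ("high"),
# bit 0 -> -1.0 ("low").
def encode_bits(bits):
    """Map logical bits to transmitted signal levels."""
    return [1.0 if b == 1 else -1.0 for b in bits]

print(encode_bits([1, 0, 1, 1, 0]))  # [1.0, -1.0, 1.0, 1.0, -1.0]
```

The receiver only needs to tell the two levels apart; the choice of symbols is entirely arbitrary.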

Bandwidth

When we picture the transmission of a digital signal, we usually imagine something like the following:

Theoretical Digital Signal

This is true in theory, and for higher-level applications such as computer programs, controllers and so on, it is enough. In real life, the same digital signal (especially during transmission) looks more like this:

Real Life Digital Signal

Why is this? Because unfortunately the real world isn't as simple as on and off, or high and low. Almost every communication channel (e.g. a cable, an optical system, a radio transmission) has a finite bandwidth. What is bandwidth? It's the maximum speed at which we can transmit data; in other words, there's a limit to how fast we can switch from 0 to 1 or from 1 to 0. When we put in a signal that demands more bandwidth than the channel can handle, the signal will not come out the same shape. How much the shape changes depends on the nature of the system and the signal. But take it from me: when you put in a perfectly square digital signal like that of the first diagram, you're most likely to get a real-life signal like the second.
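
One way to see why a band-limited channel rounds off those square edges: a perfect square wave is the sum of ever-higher odd harmonics, and a channel with finite bandwidth passes only the first few of them. Here is a minimal sketch (plain Python; the harmonic counts are arbitrary) that rebuilds a square wave from a limited number of harmonics:

```python
import math

def bandlimited_square(t, n_harmonics):
    """A square wave rebuilt from its first n_harmonics odd harmonics
    (truncated Fourier series). The ideal wave swings between -1 and +1."""
    return (4 / math.pi) * sum(
        math.sin((2 * k - 1) * t) / (2 * k - 1)
        for k in range(1, n_harmonics + 1)
    )

t = math.pi / 2  # middle of the "high" half-cycle; the ideal level is 1.0
for n in (1, 2, 10, 100):
    print(f"{n:3d} harmonics: level = {bandlimited_square(t, n):.4f}")
# The fewer harmonics the channel passes, the further the received level
# strays from the ideal square shape.
```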

The Bandwidth vs Cost Tradeoff

Engineering is often more about compromise than anything else. While we could build every network, cable and radio transmission mast with a huge amount of power to increase their bandwidth, it would not be efficient. Generally there's an acceptable number of errors we can tolerate for a particular signal. For example, most people would be satisfied with a TV signal that works 97% of the time for 97% of the population. Achieving the extra 3% of coverage may, by the nature of the problem, cost double the investment in resources. Hence it is generally not worth spending so much money to chase perfection.

A digital signal, therefore, is usually designed to be optimised in terms of the resources used. A good designer will attempt to use the least resources to achieve a satisfactory outcome. In terms of bandwidth, that means using the minimum bandwidth that still allows the signal to be recognised to a satisfactory level.

The Eye Diagram

Here's a concept that may interest those with a deeper understanding of probability and physics. When we pass a digital signal through a bandlimited channel, the signal is distorted. The receiving system must then "guess" what the transmitted signal was. Let's use a numerical scale where -1 represents "0" and +1 represents "1". How would the system guess? Simple: if the received level is greater than 0, we assume the transmitted bit was "1". If it's negative, we assume the original transmitted level was -1, and the transmitted bit was "0".

So in this scheme, as long as no bit is distorted to the point where its level crosses 0, we still get an error-free transmission.
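
Here is a minimal simulation of that decision rule (plain Python; the noise level and bit count are illustrative assumptions, not figures from any real system):

```python
import random

random.seed(1)  # repeatable run

# Transmit 100,000 random bits as -1/+1 levels with additive Gaussian noise.
bits = [random.randint(0, 1) for _ in range(100_000)]
tx = [1.0 if b == 1 else -1.0 for b in bits]
rx = [level + random.gauss(0.0, 0.3) for level in tx]  # illustrative noise

# The receiver's "guess": positive sample -> 1, negative sample -> 0.
decoded = [1 if sample > 0 else 0 for sample in rx]
errors = sum(d != b for d, b in zip(decoded, bits))
print(f"{errors} bit errors in {len(bits)} bits")
```

With this much noise almost every bit survives, but a handful of samples drift all the way across 0 and come out wrong: errors that are rare, but never zero.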

An eye diagram is a plot in which many successive bit intervals of the signal are laid on top of one another. We can then see how much the signal level varies from bit to bit, and where. The "eye" is the gap between the 1s and the 0s. That gap must remain open for the receiver to be able to tell whether a given bit is a 1 or a 0.

Eye Diagram

Here we have an eye diagram. The most probable bit trajectories appear as the darkest areas. The eye is the pair of gaps formed between the two signal levels shown. As the signal becomes more distorted, the spread of the traces increases (the probability of error increases) and the eye starts to "close". The point where the eye is completely closed is the point where the digital signal has been distorted into pure noise: no useful information can be extracted.

Most systems are designed with eye diagrams similar to the one above, where there's little chance of error. However, errors still occur. We can see a few traces getting rather close to the centre of the eye; these are the bits most likely to come out wrong.
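
To connect this back to the threshold detector above, here is a rough sketch (plain Python, illustrative parameters) of the quantity an eye diagram makes visible: the gap between the worst-case "1" sample and the worst-case "0" sample at the decision instant:

```python
import random

random.seed(2)  # repeatable run

def eye_opening(noise_std, n_bits=10_000):
    """Gap between the lowest received '1' level and the highest received
    '0' level. Positive means the eye is open; negative means it has closed."""
    ones = [+1.0 + random.gauss(0.0, noise_std) for _ in range(n_bits)]
    zeros = [-1.0 + random.gauss(0.0, noise_std) for _ in range(n_bits)]
    return min(ones) - max(zeros)

for sigma in (0.05, 0.15, 0.30):
    print(f"noise {sigma:.2f}: eye opening {eye_opening(sigma):+.3f}")
# As distortion grows the opening shrinks; once the worst-case samples
# overlap, the eye has closed and decisions become guesswork.
```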

What Does All This Mean?

It means that any commercial product (i.e. a product designed to make money) will be designed to optimise for cost as well as performance. Take an HDMI cable: we should expect any cable beyond a very short length to have a more-than-negligible probability of error. Generally speaking, with most digital signals there's always a chance of error. When an error occurs, it doesn't mean you get no signal at all; it just means there's an error. These may be single-bit errors which are visible and uncorrected, or they may be corrected, depending on the mechanism. Either way, errors exist, even if you don't know about them. Digital is far from perfect.

Going back to the HDMI cable, let's assume the cable is now quite long. A longer cable means reduced bandwidth, and reduced bandwidth means reduced performance: more errors can now occur. A badly made HDMI cable therefore shows far more significant errors. (People say that digital signals are perfect, yet it's no secret that long HDMI cables can have problems. Why doesn't anyone question this?)

Channel Coding

Here's more proof that digital is not perfect. Channel coding is the study of the logic behind digital transmission: how do we best encode digital signals so that the transmission has the best probability of success while using the least resources?

Channel coding exists everywhere. Let's talk about one common place you might see it in action: your CDs. Most people think of CDs as the aforementioned "perfect" and "distortion-free" medium. But think about it: tiny specks of dust, scratches and imperfections exist on every CD. The bits recorded on a CD are also tiny, and the read laser has no chance of telling a piece of dust from anything else. This means no CD will ever play without distortion at the bit level. I bet most people don't know this.

However, there's a good reason why you may not have known about this. The people who designed the CD, Blu-ray, HDMI and so on were pretty smart. They used a set of digital transmission methods called "channel coding". In a CD, this consists of interleaving bits of information, optimising the type of transmitted information, and using check bits to ensure that when errors do occur, there is a high chance they are corrected and/or a close guess is used.
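
To give a taste of the "check bits" idea, here is a minimal sketch (plain Python) of a single even-parity bit, about the simplest channel code there is. It can only detect a single-bit error, not correct it; real discs use far stronger schemes (CDs use cross-interleaved Reed-Solomon coding), but the principle of redundant bits exposing corruption is the same:

```python
def add_parity(bits):
    """Append an even-parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def parity_ok(bits):
    """True if the received word still has even parity."""
    return sum(bits) % 2 == 0

word = add_parity([1, 0, 0, 0, 1])
print(word, parity_ok(word))   # intact word: check passes
word[2] ^= 1                   # simulate a dust speck flipping one bit
print(word, parity_ok(word))   # corrupted word: check fails
```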

How does the rest work? There is a whole heap of methods in play and I won't even scratch the surface, but let me demonstrate one more idea, very basically. Let's take, for example, a number we need to transmit over a channel, say 17. In binary, 17 is 10001. Now assume a one-bit error occurs. Because there are 5 digits, there are 5 possible corrupted values: 10000, 10011, 10101, 11001 and 00001. Notice what these equal: 16, 19, 21, 25 and 1. A one-bit error can therefore cause a slight distortion (16 instead of 17) or a huge one (1 instead of 17). In fact, when transmitting numbers this way, the possible magnitude of the distortion grows with the number of digits transmitted. Transmit a 256-bit number and a single incorrect bit could produce a massive error.
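
You can reproduce this little experiment directly (plain Python; the `^` operator flips the chosen bit):

```python
value, n_bits = 17, 5  # 17 is 10001 in binary
for position in range(n_bits):
    corrupted = value ^ (1 << position)  # flip exactly one bit
    print(f"{corrupted:05b} = {corrupted:2d}  (error {corrupted - value:+d})")
# Prints 16, 19, 21, 25 and 1: the same single-bit fault costs anywhere
# from 1 to 16, depending purely on which bit it lands on.
```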

The engineers behind this technology realised this and decided it was better to encode numbers that are close together with bit patterns that are close together (read about Hamming distance). They formulated bit sequences to represent the transmitted data such that the sequences differed in the same way the data differed. For example, let's arbitrarily represent 16 as 1011, 17 as 1001 and 18 as 0001. Notice that the difference between 17 and either 16 or 18 is one bit, while the difference between 16 and 18 is two bits. The bit differences mirror the differences between the numbers themselves.
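
Consecutive integers can be given this property systematically with a binary-reflected Gray code, in which neighbouring values always differ by exactly one bit. Here is a minimal sketch (plain Python; the 1011/1001/0001 mapping above is an arbitrary illustration, while this is the textbook construction):

```python
def gray(n):
    """Binary-reflected Gray code of n."""
    return n ^ (n >> 1)

def hamming_distance(a, b):
    """Number of bit positions in which a and b differ."""
    return bin(a ^ b).count("1")

for n in (16, 17, 18):
    print(f"{n}: plain {n:05b} -> gray {gray(n):05b}")

print(hamming_distance(17, 18))              # plain binary: 2 bits apart
print(hamming_distance(gray(17), gray(18)))  # Gray code: always 1 bit apart
```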

What does this mean?

It means that when a speck of dust distorts a bit on a CD, the CD may read 25356 instead of 25355. During a track you will probably not be able to hear this, but the distortion occurs much as analogue distortion does. Digital simply means there is less chance of it happening.

Digital is undoubtedly much better than analogue in many ways, and without it we would not live in the world we do today. However, the technology behind digital is much more analogue than you think. Digital is far from perfect; don't assume that just because something is digital, it is distortion-free.

  1. #1 by dejavous on March 15, 2010 - 2:28 pm

    Very useful. It answers a lot of questions. First, it has removed the notion that digital means perfect. Also, I will not spend a lot of money buying expensive cables, because for my living room needs the difference would be too small for me to notice. Of course, that doesn't mean I'm going to get the worst, cheapest possible thing there is... like you so rightly mentioned, it's all about compromise, so that 97% of the time I'm getting 97% of the quality. Thanks, brilliantly explained to a layman like me.

  2. #2 by TheITGuy on April 7, 2010 - 2:07 am

    Reasonable article, but way off the mark with certain comments, as it doesn't take into account error detection/correction.

    “This means that when a spec of dust distorts a bit on a CD, it may mean that instead of 25355, the CD reads 25356.”

    Not true. I won't go into detail, but if it reads the value wrong, it will know. It will then try to determine the correct value based on the data it already has. Unless the signal is severely degraded, the error correction works and the CD plays perfectly as intended.

    Sometimes the correction doesn’t work – that is when you get popping / artifacts (caused by an incorrect error correction), or skipping / complete failure (not enough information can be read).

  3. #3 by ed on April 11, 2010 - 1:23 am

    You are almost absolutely right. But there are two things here. Firstly, I'm trying to explain a rather complex set of issues to an audience that wouldn't be experts in digital signal theory, so gross generalisations and "approximate truths" need to be used. Additionally, I'm not saying that the CD player will OUTPUT 25356 (reading is different from output). As described in previous paragraphs, and as you correctly state, there are provisions for correcting digital signals read incorrectly (check bits, etc., designed to "know" when something is wrong and correct it if necessary). However, the Reed-Solomon coding you are referring to doesn't have the sudden failure behaviour you describe. It will NOT play back perfectly at certain times of high failure, but nor will it fail completely to the point where it is audible. Most modern CD players employ a complex degree of interpolation to fill in the gaps, i.e. to guess the original bit levels. Hence: "During a track, you will probably not be able to hear this, but the distortion will occur much like analogue distortion will occur."

  4. #4 by Steve Franklin on August 12, 2010 - 5:26 am

    I love these theoretical questions; they remind me of the conversations supposed photographers have over lens resolutions. They worry so much about one lens versus another and rarely take photographs.

    In the real world, the above discussion is only relevant where the HDMI cable is long. For 95% of people, the amp/PS3/Blu-ray player is 2 m or less from the TV. And I defy anyone to tell the difference between a $300 Monster cable and a $5 eBay cable over that distance.

    In theory, you’re right – in reality (where we live) it doesn’t matter over short distances.

  5. #5 by Steve Franklin on August 12, 2010 - 5:33 am

    Furthermore, whilst I agree with what you say in general, the issue here is that retailers try to make the transmission of a digital signal analogous with analogue signals, where it does make a difference.

    JB Hi-Fi/Harvey Norman quite often make more profit on the accessories than on the TV (which is usually aggressively priced) by trying to fool people that a cheap HDMI cable is analogous to the rubbish RCA leads provided with the cheapest equipment.

    THAT is the issue.

  6. #6 by ed on August 17, 2010 - 2:46 am

    Steve you have some good points.

    I personally believe that people are either too much in the black or the white, and don't live enough in the grey area that the world actually revolves around. There are people who spend $200 on an HDMI cable that works pretty much the same as a $30 one, but there are also retailers who offer $5 cables which are so bad they don't fit properly, or are lossy enough to create mosaic images, or, worse still, don't work in some more fundamental way. (Trust me, I've seen them. A friend bought one off eBay once and couldn't even return it because it wasn't worth the postage.)

    If you play a Blu-ray movie through a top-end TV with a poor $5 cable, you might actually see some artifacts even with a short 2 m HDMI cable. Think about it: even at a 99.9999% success rate, that leaves 1 bit error per 1,000,000 bits transmitted, and a single Blu-ray frame has god knows how many million pixels. While the inherent error control within HDMI will handle most of this noise, it's highly possible that a cheap manufacturer will produce a cable that is noticeably substandard. If you increase that to 5 m (still a short length), many of these cables start exhibiting obvious problems. I'm not being theoretical here; it happens all the time in the industry.

    There are people who spend $20k, $50k, $100k+ on their hi-fi yet have a limited amount of music. That's exactly like the photographers who worry too much about their equipment but don't take enough photos. But on the other hand there are also people who constantly argue that just because something is digital it must be error-free, which is a giant pile of rubbish. No transmission will ever guarantee a zero error rate.

    And hey, I'm not saying that a cheap cable wouldn't be sufficient for most people. There are people who are 100% happy listening to the Backstreet Boys on their iPod as 128 kbps MP3s. But don't you think people these days get a bit "info cocky" after reading a few web articles?

  7. #7 by Steve Franklin on August 18, 2010 - 5:33 am

    OK, so here is the thing I don't understand about your argument about losing bits.

    Let's just say I have a simple static wire connected from A to B and I transmit data along it.

    If I am seeing 1 bit error per 1,000,000 transmissions, is the problem likely to be the cable or something else? For 999,999 bits to arrive safely while one goes missing, I think we're talking about the realm of quantum physics, where almost anything BUT the cable would explain that kind of error.

    But it's an interesting point. How would that situation actually arise? How could you lay the blame on a cable that has transmitted 99.9999% of the bits? Using Occam's razor, what is more likely: that the laws of physics remain intact in a solid-state wire, or that something in the firmware/interface at either end is to blame?

    I think the reality of the situation with digital is that things appear to be in one of three states:
    1. They work.
    2. The signal is intermittent/marginal because of the connection or the limits of cable length.
    3. They don't work.

    If the signal transmitted is decent enough for its amplitude to be deciphered as either 1 or 0, then all the bits will get through the same.

    The only place I can see there being a difference is when the signal is right on the margins of what can be acceptably deciphered at the receiving end.

    However, as I say, I've no training in this area; I'm only interested in what's right, not who's right. :-D

  8. #8 by ed on August 21, 2010 - 1:43 pm

    If you are seeing a 1-bit error per 1,000,000 transmissions, the problem is generally due to noise. Noise can come from anywhere: inherently within the transmission hardware, the power supply, etc. However, generally speaking, a well-designed system should have negligible internal noise, which means that the overwhelming majority of noise comes from the cable.

    Have you ever noticed that if you use your mobile phone near an electronic device, the device may start to emit a series of strange crackly beeps? That's a high level of noise picked up from the environment. Noise is a stochastic (random) process, and every cable is affected by external electromagnetic radiation. This is not quantum physics; it's just simple noise physics, and cables are by far the greatest contributor to noise within a wired system (in fact, this includes the internal cabling within the device, including the circuit board).

    You need to realise that cables are hardly perfect devices. In high school you learn that wires have zero/negligible resistance. But the keyword here is "resistance", which doesn't determine "bandwidth". Once voltages and currents begin switching (especially at higher frequencies), cables begin to act in very non-ideal ways. They start distorting the signal because they also have something called "reactance". If cables were perfect, why don't we have infinitely fast internet? It's because there's only a finite speed at which we can switch data. If we're pushing megahertz or gigahertz, we're pushing the limits of what conventional copper wire can do, and if on top of that we start saving on materials and design complexity, then we start allowing non-negligible noise to be introduced into the system. (If you're interested in specifics, you can Wikipedia/Google any of those terms and read more about how they affect bandwidth in cables.)

    Digital cables don't have a sharp drop-off point in performance. They don't just "work" or "not work". They usually have an accepted design probability (e.g. probability of bit error is X). If you transmit data through a cable, it doesn't travel in the form of 1s and 0s, but as a high or low level. As noise increases, these levels start varying, and if too much noise is present they may cross over (in which case errors occur). This failure isn't sudden: a cable's levels usually vary only a little, but occasionally they may cross over. This is just plain old noise.

    "If the signal transmitted is decent enough for its amplitude to be deciphered as either 1 or 0, then all the bits will get through the same."
    Hence, this is not true. The transmitted signal can be absolutely perfect, but because noise is random, the received signal may be mostly fine except for a few bits. Noise happens all the time in our environment, and comes from all sorts of sources, both natural and man-made.

  9. #9 by Aluwolf on January 4, 2011 - 11:41 am

    Yes, but any HDMI cord is built to a standard, meaning all HDMI cords will work as long as they aren't too long. An expensive HDMI cord may give you better reception at longer distances, but normally it just isn't needed.

  10. #10 by Quin on June 20, 2011 - 2:44 am

    Aluwolf, shut up. Want some advice? Read the whole conversation, and you might consider not commenting again :) ...just for your sake... you're gonna lose against the guy; you can't argue, he will always win ;)

