Archive for January, 2010

The Rack – Design Complete

I’ve done a good amount of research over the last week. Obviously the key design requirement for a rack is extremely low vibration at every point the equipment contacts, i.e. the shelves. There’s a lot of junk on the internet about what makes a good rack, but from a more scientific perspective I think it comes down to the following factors:

- High Mass

- Some form of damping

- Unit stability

- A low structural resonant frequency

- Structural strength, yet at the same time achieving the above

- Level adjustable

To control vibration of the unit as a whole, the first 3 points apply. To control internal resonance, which is by far the most important aspect of reducing vibration, all of the first 5 points apply. Level adjustability is simply to help CDs and LPs spin on a horizontal plane.

So I think I’ve come up with something like this:

- Minimalist design (since my skills in metalwork are limited)

- 65mm mild steel columns, sand filled to deaden vibrations. I believe steel is more resonant than aluminium, but thick extruded aluminium is hard to find and difficult to weld. It’s also more difficult to finish, although theoretically I could leave it unpainted. Unfortunately my confidence in welding limits my desire to use it.

- Thick MDF shelves (acoustically dead)

- Tripod (three feet) design

- Vibration isolating adjustable feet (I’m able to get some from my dad, who works in precision manufacturing. They have feet for heavy machinery which are not only fully adjustable and vibration isolating, but can also handle a huge amount of weight).

I’ve been thinking about how I was going to couple the shelves to the rack. The internet tells me to use spikes. They’re the most popular form of contact for any “high end” rack shelf. But I’m not really convinced that it’s a good idea. Ideally with a vibration reduction system you need not just high mass but good damping. If the shelf is much less resonant than the frame (which MDF definitely is), it’s much better to damp the coupling layer rather than fix it. You’d still want the frame to be as unresonant as possible, but it’s never going to be able to achieve the natural deadening properties of wood.

Anyway, the drawings are done. But I’m not going to post them here as my artwork skills are horrible. Going out to buy some materials now. Back later…


No Comments

Analogue vs Digital, the Great Debate

The analogue vs digital debate has been around for decades now. And there’s some promising news up ahead for the digital generation.

Since the advent of the T-amp, digital amplifiers have really taken a step forward and progressed into proper audiophile territory. Never before have enthusiasts seriously considered class D amplifiers to be in the same league as class A, or even AB. Now even the top brands are turning towards digital amplification, and it’s not looking like stopping. Given the massive power and cost savings, analogue amplification is finally at a stage where there’s a serious risk of obsolescence (well, to some extent anyway).

But amplification tends to be the simpler (you can murder me later for saying this) stage in the sound reproduction chain. What about storage? Purists are still going at vinyl as hard as ever, even with the availability of what I’d consider “generation 2” of digital audio storage in the form of SACD and DVD-A.

More surprising still, many purists are still sticking by a quality CD. (Well, perhaps this isn’t so surprising. Quality sometimes comes from quantity rather than technology. The argument is if we produce enough of it, for instance vinyl and CD, we get really good at mastering and recording for these mediums.)

So what about digital sources “gen 3”?

Blu-ray certainly seems to be the next step. On paper the figures pack a punch. No longer can analogue worshippers make the “it’s just guessing” argument. The bit rates and resolution are now monumentally high, especially compared to CD. I must admit, upon my first listen it did sound like it could be the next revolution. A real revolution, that is, like the tape to CD revolution.

But then I had a thought about my experience over the years and how new technology has changed the audio industry.

Personally, I’m not certain we’re arguing about the right thing. For me, audio never was, and still isn’t, about resolution or clarity. From the late 1950s onwards, we had more than enough technology to produce immensely realistic and detailed recordings. I still don’t think our ears discriminate sound by its accuracy, but much more by how it’s recorded.

In the heat of the analogue vs digital debate and in the wake of the digital blast-off, I really think we’ve forgotten about the fact that sound still cannot be recorded in the same way that we hear things. Even the slightest microphone positioning mistake can greatly affect the quality of a recording. Yet I’ve not read a single blog or forum post where someone has raved about such a key stage in audio reproduction. There are endless posts on “digital cables vs analogue” and “silver vs copper”, yet very little on recording, microphone selection, placement, mastering techniques, speaker placement, room setup…just to name a few. We’re putting way too much emphasis on one thing and ignoring the rest of the system.

Yes, vinyl does sound different to CD, and Blu-ray is unquestionably better than both. But I honestly believe that to achieve realism, you need to consider many more factors than just that one thing.


No Comments

Home Made Hi-Fi

I’m a huge fan of home made stuff. There’s something special about making something from scratch, knowing where every piece sits, how everything works, and if you’re good at DIY, that the result is a solid piece of craftsmanship that can rarely be found at any reasonable price in a store.

I’ve recently decided to take on a project to build a hi-fi rack, given my rather large amount of leave available in Jan. I’ve got a reasonably good turntable setup which is lacking a place to live. I’ve also heard about how racks are supposed to affect sound – something I’m not entirely convinced about. I’d like to test this theory and build something that’s worthy of proper audiophile status without spending a silly amount of money. I think it can be done.

Meanwhile, this is a website of a guy who does some amazing stuff. I’m a rather huge fan:

http://www.humblehomemadehifi.com/

Have a look at the way he designs and constructs his speakers. Amazing stuff.


1 Comment

The Greatest Lie of the Digital Age

How often have you heard this?

“It’s digital, so it’s 1s and 0s. That means there can’t be any errors or distortion. You either get the signal perfectly or you don’t.”

For someone who knows little about electronic systems, this sounds perfectly logical and true. In reality it’s a rather naïve way of thinking that completely oversimplifies the complexity of the world (and of physics). To show you why this statement is a lie, we need to go slightly into electromagnetic physics. Don’t worry, not too much – just enough to see how digital transmission, and in particular a thing called “channel coding” (and digital coding in general), works.

The Digital Signal

The digital signal is often described as an array of 1s and 0s. This is true in the logic sense – we represent it with 1s and 0s. However, they’re merely symbols. We could just as easily represent them as Xs and Ys, or apples and oranges. In the real world, the digital signal is encoded and transmitted as a sequence of alternating symbols, usually a “high” and a “low” signal. The signal is always encoded to minimise the probability of error for a given channel at its signal power limit. With all signals, there is a chance of noise (because, as you would of course know, everything in the real world is essentially analogue, not digital).

Bandwidth

When we picture the transmission of a digital signal, we usually think of it as something like the following:

[Figure: Theoretical digital signal]

This is true in theory, and for higher level applications such as computer programs, controllers, etc. this is enough. In real life, the same digital signal (especially during transmission) looks more like this:

[Figure: Real life digital signal]

Why is this? Because unfortunately the real world isn’t as simple as on and off, or high and low. Almost every communication channel (e.g. a cable, optical system or radio transmission) has a finite bandwidth. What is bandwidth? It’s the maximum speed at which we can transmit data – i.e. there’s a limit to how fast we can switch from 0 to 1 or 1 to 0. When we put in a signal with more bandwidth than the channel can handle, the signal will not come out the same shape. How the shape changes depends on the nature of the system and the signal. However, take it from me that when you put in a perfectly square digital signal like that of the first diagram, you’re most likely to get a real life signal like the second.
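To make the band-limiting effect concrete, here’s a minimal numerical sketch. It models the channel as a simple first-order low-pass filter (a discrete RC model); the filter constant and bit timings are illustrative assumptions, not measurements of any real cable.

```python
# Sketch of a band-limited channel: a first-order low-pass filter
# (discrete RC model) applied to an ideal square digital signal.
# alpha and the samples-per-bit count are illustrative assumptions.

def lowpass(signal, alpha):
    """First-order IIR low-pass: y[n] = y[n-1] + alpha * (x[n] - y[n-1])."""
    out, y = [], 0.0
    for x in signal:
        y = y + alpha * (x - y)
        out.append(y)
    return out

# Ideal square signal: bits 1, 0, 1, each held for 10 samples
ideal = [1.0] * 10 + [0.0] * 10 + [1.0] * 10

# The band-limited channel rounds off the sharp edges
received = lowpass(ideal, alpha=0.3)

print(round(received[0], 3))  # the edge no longer jumps straight to 1.0
print(round(received[9], 3))  # the level only approaches 1.0 over time
```

The output never quite reaches the ideal levels and the transitions are smeared out, which is exactly the rounded shape sketched in the second diagram.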

The Bandwidth vs Cost Tradeoff

Engineering is often more about compromise than anything else. While we could build every network, cable and radio transmission mast with a huge amount of power to increase their bandwidth, it wouldn’t be efficient. Generally there’s an acceptable number of errors for a particular signal. For example, most people would be satisfied with a TV signal that works 97% of the time for 97% of the population. Achieving the extra 3% coverage may, by nature, cost double the investment in resources. Hence, it is generally not worth spending so much money to get perfection.

A digital signal, therefore, is usually designed to be optimised in terms of resources used. A good designer of a product would attempt to use the least amount of resources to achieve a satisfactory outcome. So if we talk about bandwidth, it would mean that we would use the minimum bandwidth to still achieve signal recognition to a satisfactory level.

The Eye Diagram

Here’s a concept that may interest those with a deeper understanding of probability and physics. When we pass a digital signal into a bandlimited channel, the signal is distorted. The receiver system will attempt to “guess” what the transmitted signal was. Let’s use a numerical scale where -1 represents “0” and 1 represents “1”. How would the system guess? Simple: if the received signal is greater than 0, we assume the transmitted signal was “1”. If it’s any negative number, we assume the original transmitted level was -1, and the transmitted signal was “0”.

So in this instance, as long as no signal is distorted to the point where the signal crosses 0, we will still get an error free transmission.
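This threshold “guessing” is easy to simulate. The sketch below transmits ±1 levels, adds Gaussian noise (the noise level and bit count are arbitrary assumptions chosen for illustration), and decides each bit by the sign of the received value.

```python
# Toy model of the receiver's threshold decision: transmit +1/-1,
# add channel noise, and decide "1" if the received value is above 0.
# The noise level (sigma) and bit count are illustrative assumptions.
import random

random.seed(42)

bits = [random.randint(0, 1) for _ in range(10000)]
levels = [1.0 if b else -1.0 for b in bits]            # map 1 -> +1, 0 -> -1
received = [v + random.gauss(0, 0.5) for v in levels]  # moderate noise
decoded = [1 if r > 0 else 0 for r in received]

errors = sum(b != d for b, d in zip(bits, decoded))
print(f"bit errors: {errors} out of {len(bits)}")
```

With mild noise most bits are recovered correctly, but a small fraction of samples get pushed across the zero line and come out wrong – errors occur even though every decision is a clean 1 or 0.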

The eye diagram is a diagram where multiple digital bit signals are laid on top of one another. We can then see the variance in signal level that occurs for each bit, and where it varies. The “eye” is the gap between the 1 and the 0. That gap needs to exist in order for the receiver to be able to detect whether a signal is 1 or 0.

[Figure: Eye diagram]

Here we have an eye diagram. We can see the most probable bit locations as the darkest areas. The eye is the pair of gaps formed by the two bit signals shown. As the signal becomes more distorted, the width of the signal traces increases (the probability of error increases) and the eye starts to “close”. The point where the eye is completely closed is the point where the digital signal is distorted into pure noise (no useful information can be extracted).

Most systems are designed with eye diagrams similar to the one above. There’s little chance of error. However, errors still occur. We can still see a few bits getting rather close to the centre of the eye. These are the bits likely to cause errors.
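The eye “closing” can be put in rough numerical terms: overlay many noisy bit levels and measure the gap between the lowest received “1” and the highest received “0”. The noise levels below are illustrative assumptions, not measurements.

```python
# Rough numerical analogue of the eye closing: overlay many noisy
# samples of the two levels and measure the worst-case gap between them.
# Noise levels and sample counts are illustrative assumptions.
import random

random.seed(1)

def eye_opening(noise_sigma, n=5000):
    ones = [1.0 + random.gauss(0, noise_sigma) for _ in range(n)]
    zeros = [-1.0 + random.gauss(0, noise_sigma) for _ in range(n)]
    return min(ones) - max(zeros)  # positive gap = open eye

clean = eye_opening(0.1)   # low noise: wide-open eye, error-free decisions
noisy = eye_opening(0.6)   # heavy noise: the levels overlap, the eye closes
print(round(clean, 3), round(noisy, 3))
```

With low noise the gap stays comfortably positive; with heavy noise the two populations overlap, the gap goes negative, and threshold decisions start producing errors.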

What Does All This Mean?

This means that any commercial product (i.e. a product designed to make money) will be designed to optimise for cost as well as performance. If we talk about, say, a HDMI cable, we would expect any cable longer than a very short run to have a non-negligible probability of error. Generally speaking, with most digital signals there’s always a chance of error. When an error occurs, it doesn’t mean that you won’t get a signal at all. It just means that there’s an error. These could be single bit errors which may be visible and uncorrected, or may be corrected depending on the mechanism. Either way, errors exist, even though you may not know about them. Digital is far from perfect.

Going back to the HDMI cable, let’s assume the cable is now quite long. This means its bandwidth is reduced, and more errors can now occur. A badly made HDMI cable therefore shows much more significant errors. (People say that digital signals are perfect, yet it’s no secret that long HDMI cables can have problems. Why doesn’t anyone question this?)

Channel Coding

Here’s more proof that digital is not perfect. Channel coding is the study of the logic behind digital transmission: how do we best encode digital signals so that the transmission has the best probability of success, while using the least resources?

Channel coding exists everywhere. Let’s talk about one common place you might see this in action – your CDs. Most people think of CDs as the aforementioned “perfect” and “distortion free” medium. But think about it: tiny specks of dust, scratches and imperfections exist on every CD. The bits recorded on a CD are also tiny, and a laser has no chance of telling a speck of dust from the data underneath it. This means that no CD will ever play without distortion at the bit level. I bet most people don’t know this fact.

However, there’s a good reason why you may not have known about this. The guys who made the CD, Blu-ray, HDMI, etc. were pretty smart. They used a number of methods of digital transmission collectively called “channel coding”. In a CD, this consists of interleaving bits of information, optimising the type of transmitted information, and using checking bits to ensure that when errors do occur, there’s a high chance they’re corrected and/or a close guess is used.
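To give a feel for what a “checking bit” does, here’s the simplest possible sketch: a single even-parity bit that lets a receiver detect (though not correct) any single flipped bit. A real CD uses the far more powerful CIRC (Cross-Interleaved Reed–Solomon Coding), but the principle of adding redundancy is the same.

```python
# Minimal sketch of a checking bit: append one even-parity bit so a
# single flipped bit can at least be detected. Real CD channel coding
# (CIRC) is far more sophisticated; this only shows the principle.

def encode(bits):
    parity = sum(bits) % 2           # even parity bit
    return bits + [parity]

def check(codeword):
    return sum(codeword) % 2 == 0    # True if the parity still holds

data = [1, 0, 0, 0, 1]               # the number 17 in binary
codeword = encode(data)
print(check(codeword))               # clean transmission passes the check

corrupted = codeword[:]
corrupted[2] ^= 1                    # one flipped bit (a speck of dust)
print(check(corrupted))              # the error is detected
```

This prints `True` then `False`: the clean codeword passes, the corrupted one is flagged. Detection alone is enough to trigger a retry or an interpolated guess; correction needs more redundancy.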

How does this work? There are a whole heap of methods and I won’t even scratch the surface. But let me demonstrate one…very basically. Let’s take, for example, a number we need to transmit over a channel. Let’s say 17. In binary, 17 is 10001. Let’s assume a one bit error occurs. Because there are 5 digits, there are 5 possible errors – 10000, 10011, 10101, 11001, 00001. Notice what these numbers equal: 16, 19, 21, 25, 1. A 1 bit error could result in a slight distortion (16 instead of 17), or a huge distortion (1 instead of 17). In fact, if we transmit numbers this way, the magnitude of distortion grows with the number of digits transmitted. If we transmit a 256 bit number, a single incorrect bit could cause a massive error.
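The five possible single-bit errors above are easy to reproduce by XOR-ing each bit position in turn:

```python
# Reproducing the example: flip each bit of 17 (binary 10001) in turn
# and look at how large the resulting numerical error is.

n = 17
width = 5

for pos in range(width):
    corrupted = n ^ (1 << pos)  # flip exactly one bit
    print(f"flip bit {pos}: {corrupted:05b} = {corrupted} (error {corrupted - n:+d})")
```

The results are 16, 19, 21, 25 and 1 – the same single-bit error is worth anywhere from -16 to +8 depending on which position it hits, which is exactly the problem the encoding schemes below address.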

The engineers behind this technology realised this and decided that it was better to encode bits which were close together (read about Hamming distance).  What they did was formulate a number of bit sequences which represented the transmitted data, where the sequences were different in the same way that the data was different. For example, let’s arbitrarily represent 16 as 1011, 17 as 1001 and 18 as 0001. Notice that the difference between 17 and either 16 or 18 is 1 bit, and the difference between 16 and 18 is two bits. They are of the same difference as that between the numbers themselves.
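The bit patterns in the example above are made up for illustration, but there is a standard construction with exactly this neighbouring-values-differ-by-one-bit property: the binary-reflected Gray code.

```python
# The standard construction for codes where consecutive values differ
# in exactly one bit: the binary-reflected Gray code. (The encodings in
# the text above are invented for illustration; this is the real thing.)

def gray(n):
    return n ^ (n >> 1)

for value in (16, 17, 18):
    print(f"{value}: {gray(value):05b}")

# Consecutive values differ in exactly one bit:
print(bin(gray(17) ^ gray(16)).count("1"))  # prints 1
print(bin(gray(18) ^ gray(17)).count("1"))  # prints 1
```

Here 16, 17 and 18 encode as 11000, 11001 and 11011: 17 differs from each neighbour by one bit, and 16 differs from 18 by two – the code distance mirrors the numerical distance, just as the post describes.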

What does this mean?

This means that when a speck of dust distorts a bit on a CD, it may mean that instead of 25355, the CD reads 25356. During a track, you will probably not be able to hear this, but the distortion will occur much like analogue distortion does. Digital simply means that there is less chance of this happening.

Digital is undoubtedly much better than analogue in many ways, and without it we would not live in the same world that we do today. However, the technology behind digital is much more analogue than you think. Digital is far from perfect – don’t assume that just because something is digital, it will be distortion free.


14 Comments