I have often seen people assume that a signal has better quality simply because it is in digital rather than analog form, yet this is false. When you convert an analog signal to a digital format, you *always* lose quality, because you are converting a continuous signal into a discrete one. Let's look at a small example:

Suppose we want to digitize a 20 kHz analog audio signal so that we can store it on disk. Following the Nyquist theorem, we must take at least 40,000 samples per second (twice the highest frequency); this guarantees that we can recreate the original signal in its exact form (i.e., with maximum quality).
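As a little sketch of that sampling step (names and parameters are my own for illustration; I use 44.1 kHz, the CD rate, which is comfortably above the 40 kHz Nyquist minimum for a 20 kHz tone):

```python
import math

# Hypothetical parameters: a 20 kHz tone sampled at the CD rate,
# which satisfies the Nyquist criterion (44,100 >= 2 * 20,000).
SIGNAL_HZ = 20_000
SAMPLE_RATE = 44_100

def sample_tone(duration_s):
    """Return the ideal, still real-valued samples of a sine tone.

    Note that each sample is a Python float here; the precision loss
    discussed below happens later, when we try to store these values
    in a fixed number of bits.
    """
    n = int(SAMPLE_RATE * duration_s)
    return [math.sin(2 * math.pi * SIGNAL_HZ * t / SAMPLE_RATE)
            for t in range(n)]

samples = sample_tone(0.001)  # one millisecond of audio
print(len(samples))           # 44 samples for 1 ms at 44.1 kHz
```

Note that sampling at this rate only tells us *when* to measure the signal; it says nothing yet about how precisely each measurement can be stored.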

The loss of quality arises at this point: how can we represent each sample without losing precision (assuming, of course, that the sample itself is accurate)? Samples are measured on a continuous scale, but in order to store them we must round their value up or down to the nearest discrete value we can represent (think about representing an arbitrary real number, which leads to the same problem). We would need infinite precision to store each sample in its exact form, which means infinite disk space. Not possible.
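Here is a minimal sketch of that rounding step, known as quantization (the function and the bit depths are my own choices for illustration):

```python
def quantize(x, bits):
    """Round a sample in [-1.0, 1.0] to the nearest of 2**bits levels."""
    levels = 2 ** bits
    step = 2.0 / (levels - 1)      # distance between adjacent levels
    return round(x / step) * step  # nearest representable value

sample = 0.3141592653589793        # pretend this is the "exact" analog value
q8 = quantize(sample, 8)           # 8-bit storage
q16 = quantize(sample, 16)         # 16-bit storage (CD quality)

# More bits shrink the rounding error, but it never reaches zero:
print(abs(sample - q8), abs(sample - q16))
```

No matter how many bits you spend per sample, the error only gets smaller; it never disappears.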

Note that the example above assumes a PCM modulation technique, where each sample is encoded and stored in a fixed number of bits. As you can imagine, any other technique suffers from the same problem.
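To make the "fixed number of bits" part concrete, here is a sketch of how a quantized sample might be packed as 16-bit signed PCM, the layout used by, for example, uncompressed WAV audio (the helper name is mine):

```python
import struct

def pcm16_encode(x):
    """Encode a sample in [-1.0, 1.0] as a 16-bit signed
    little-endian integer, as in common PCM audio formats."""
    i = int(round(x * 32767))
    i = max(-32768, min(32767, i))  # clamp to the representable range
    return struct.pack("<h", i)

frame = pcm16_encode(0.5)
print(len(frame))  # always 2 bytes, regardless of the sample's true precision
```

Every sample occupies exactly two bytes on disk, which is precisely why the infinitely precise analog value cannot survive the trip.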

This post comes after today's SPD class, an optional course I'm taking this semester at university. It's all about the low-level details of public data networks. Amazing stuff, IMHO ;-)