This stuff is fascinating. What is it about digital signal processing that degrades sound quality? I use different digital connections at home, and the differences are very apparent, so all digital data is clearly not the same. For as long as I can remember, people have associated sound quality with jitter, but what I am wrestling with is the tangible consequence of excessive jitter. In what way does the final analogue audio signal differ when there is more jitter?
Jitter is a time-domain perturbation of a digital stream, but it seems to me that everyone assumes this translates to mis-timing in the analogue output. "The right amplitude at the wrong time is the wrong amplitude".
As I understand it, the magnitude of jitter is typically a fraction of the duration of a bit. There are 16 or 24 bits making up just one amplitude sample, and at least two samples are needed for each cycle. With lower frequencies and higher sample rates, there will be many samples per cycle - perhaps fifteen, for example.
Now if jitter is equivalent to, say, a tenth of a bit duration, then the consequential time perturbation of the associated sample would be a fifteenth of a sixteenth of a tenth of a cycle. So the timing for one of the samples in the cycle in this example may be 1/2400 of a cycle early. And the timing for the following sample may be the same, or correct, or 1/2400 late. Of course there are several samples in each cycle, and because of the nature of jitter, these timing errors will tend to average out over a period of time.
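To check my own arithmetic, here is a small Python sketch using the assumed numbers above (16 bits per sample, fifteen samples per cycle, jitter of a tenth of a bit duration) - these are just the illustrative figures from this post, not measurements:

bits_per_sample = 16        # assumed word length
samples_per_cycle = 15      # assumed: low frequency at a high sample rate
jitter_in_bits = 0.1        # assumed jitter: a tenth of a bit duration

bits_per_cycle = bits_per_sample * samples_per_cycle            # 240 bit durations per cycle
jitter_as_fraction_of_cycle = jitter_in_bits / bits_per_cycle   # 0.1 / 240 = 1/2400

print(f"one cycle spans {bits_per_cycle} bit durations")
print(f"the assumed jitter is 1/{bits_per_cycle / jitter_in_bits:.0f} of a cycle")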
The consequence of this is that jitter may be considered, like other forms of distortion, to add an unwanted signal on top of the desired signal. It's a phase distortion rather than an amplitude distortion, but it still corrupts the signal. It appears to me that this distortion would be small, but we may be very sensitive to it, and we do need to minimise it.
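To make that "added unwanted signal" idea concrete, here is a rough numpy sketch of what I have in mind: sample an ideal sine wave at slightly perturbed instants, subtract the ideal samples, and what is left over is the error that jitter superimposes on the wanted signal. The tone frequency, sample rate and jitter amount below are made-up values purely for illustration:

import numpy as np

fs = 48_000            # assumed sample rate (Hz)
f0 = 1_000             # assumed test tone (Hz)
n = np.arange(4800)    # 0.1 s worth of samples
rms_jitter = 2e-9      # assumed 2 ns RMS timing error per sample

ideal_times = n / fs
jittered_times = ideal_times + np.random.normal(0.0, rms_jitter, n.size)

ideal = np.sin(2 * np.pi * f0 * ideal_times)       # what the samples should be
actual = np.sin(2 * np.pi * f0 * jittered_times)   # samples taken at the wrong instants

error = actual - ideal   # the unwanted signal riding on top of the tone
print(f"error level relative to the tone: {np.sqrt(np.mean(error**2)) / np.sqrt(0.5):.1e}")

With random jitter like this the error looks like a low-level noise floor; if the jitter were periodic (say, related to the mains or to a data pattern) I would expect it to show up as sidebands either side of the tone instead.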
Does that describe what people consider to be the mechanism for corruption of audio by jitter?
Nick