Hi Steve,
I got further clarification on this, so the better way to express it is that you can 'attenuate' the jitter coming in but you cannot 'eliminate' it. The more jitter you have coming in, the more you have coming out the other end after attenuation. That's why less jitter in still matters (see the rough sketch below).
james
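(As a rough numeric illustration of the 'attenuate but not eliminate' idea above - a minimal Python sketch assuming a simple model where incoming jitter is scaled by a fixed attenuation factor and the local clock's own jitter adds in quadrature; the figures are made up, not measurements of any real device.)

import math

def output_jitter_rms(input_jitter_ps, attenuation_factor, intrinsic_jitter_ps):
    # Toy model: the reclocking stage scales incoming jitter by a fixed factor,
    # and the local clock's own (intrinsic) jitter adds in quadrature.
    # All values are RMS picoseconds and purely illustrative.
    residual = input_jitter_ps * attenuation_factor
    return math.sqrt(residual ** 2 + intrinsic_jitter_ps ** 2)

# More jitter in means more jitter out, but never below the intrinsic floor.
for j_in in (50, 500, 5000):
    print(j_in, "ps in ->", round(output_jitter_rms(j_in, 0.01, 10), 2), "ps out")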
James, the word 'attenuate' implies that buffering is an analogue process... i.e. that the jitter is reduced by x% or x dB. However, buffering is a purely discrete process. If the jitter of the input signal exceeds the jitter tolerance of the buffer, then in theory some jitter would remain - however a modern DAC is very tolerant of this. It would be amazingly rare for the jitter to exceed that tolerance unless the source was faulty (and jitter on any decent device is usually in the sub-nanosecond range).
The only reason I point this out, James (understanding that you are the messenger here), is that it seems to directly contradict current graduate-level electrical engineering theory... In a phase-locked loop whose output is then buffered, jitter will only be an issue if a sample is offset from its time position by so much that it is attributed to a different time slot, or is treated as two samples.
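To make the 'different time slot' point concrete, here is a rough Python sketch (the numbers are illustrative, not measurements): a sample only ends up in the wrong slot once its timing error approaches half a sample period, which at 44.1 kHz is over 11 microseconds, orders of magnitude above typical sub-nanosecond jitter.

def assigned_slot(arrival_time_ns, period_ns):
    # Which clock slot a (jittered) sample is attributed to. As long as the
    # timing error stays under half a period, it lands in the correct slot
    # and the incoming jitter leaves no trace after buffering.
    return round(arrival_time_ns / period_ns)

period = 1e9 / 44100  # about 22676 ns between samples at 44.1 kHz
for error_ns in (0.05, 100.0, 12000.0):  # 50 ps, 100 ns, 12 us of timing error
    slot = assigned_slot(3 * period + error_ns, period)
    print(error_ns, "ns error -> slot", slot, "(expected slot 3)")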
Of course, if I were the suspicious type, I could suggest the audio industry wouldn't want to admit this because of the implications it would have for the transport market... which is understandable, as most modern streaming devices (the Logitech Squeezebox, for example, measured at 50 ps) have extremely low jitter by nature, since they have a far more stable source.
If possible, could you please ask the engineers what they mean by 'attenuating jitter' as part of the digital buffering process?
P.S. I found another good conceptual resource on multi-stage jitter reduction at
http://www.anedio.com/index.php/article/multi_stage_jitter_reduction - essentially what it says at the end is that with a proper buffer, the only jitter left after buffering is the jitter inherent in, or introduced by, the clock in the DAC itself. In a nutshell:
Stage 1: Filter out unwanted noise to make sampling easier.
Stage 2: Sample that filtered signal into a buffer.
Stage 3: From that buffer, reproduce the signal aligned to the new clock (retimed).
The remaining jitter should purely revolve around the DAC clock and the jitter in the output buffer - independent of the input buffer.
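For what it's worth, here is a toy Python sketch of stages 2 and 3 as I read that article (hypothetical code, not Anedio's actual implementation): the buffer keeps only the sample values, and the output timing comes entirely from the local DAC clock, so the input timing never influences when samples leave the buffer.

from collections import deque

def stage2_buffer(jittered_input):
    # Stage 2: store only the sample values; their (jittered) arrival times
    # are discarded at this point.
    return deque(value for (arrival_time_ns, value) in jittered_input)

def stage3_retime(fifo, dac_period_ns):
    # Stage 3: emit each buffered value on the next local DAC clock tick,
    # so the output timing depends only on the DAC clock.
    return [(tick * dac_period_ns, fifo.popleft()) for tick in range(len(fifo))]

# Pairs of (arrival time in ns, sample value) with deliberately messy timing.
jittered = [(0.0, 10), (22800.1, 11), (45100.9, 12), (68400.2, 13)]
print(stage3_retime(stage2_buffer(jittered), 1e9 / 44100))  # output times come out perfectly regular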
At stage 2, it either got the samples right or it didn't... After that, any jitter present is what is created by the DAC itself. If it got the sampling wrong at stage 2, then one of the samples will be offset by one, or completely duplicated (in theory the other case is that it could have been missed entirely). Now, DAC implementations will vary from manufacturer to manufacturer, but there are only so many ways you can buffer an output, and the basics above will remain.
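A tiny, purely hypothetical check along those lines, comparing what stage 2 actually stored against what was sent:

def classify_stage2_result(sent, captured):
    # Either stage 2 got the samples right or it didn't; anything short of a
    # gross timing error leaves the two sequences identical.
    if captured == sent:
        return "captured correctly - incoming jitter leaves no trace"
    if len(captured) > len(sent):
        return "a sample was treated as 2 samples (duplicated)"
    if len(captured) < len(sent):
        return "a sample was missed entirely"
    return "a sample was offset into the wrong slot"

sent = [10, 11, 12, 13]
print(classify_stage2_result(sent, [10, 11, 12, 13]))      # the normal case
print(classify_stage2_result(sent, [10, 11, 11, 12, 13]))  # duplicate
print(classify_stage2_result(sent, [10, 11, 13]))          # dropped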
Back to the original question: my understanding is that the jitter on HDMI is so high and so variable that it can be nearly impossible to design DACs tolerant enough to handle it well. Not that it can't be done, but doing so requires significantly more R&D.