I have yet to hear anyone successfully explain why all significant jitter cannot be removed by building an appropriate play-out buffer just prior to the DAC. We do this all the time in the internetworking world when we build synchronous TDM circuits/services over a packet-based IP network.
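Just to be concrete about what I mean, something along these lines (a rough sketch only; the names and sizes are made up, not taken from any real product):

```python
import collections

# Rough sketch of the play-out (de-jitter) buffer I have in mind: packets
# arrive from the network with irregular timing, samples leave at a fixed
# rate driven by the local output clock.  All names and sizes are invented.

class PlayoutBuffer:
    def __init__(self, target_depth_samples=4096):
        self.fifo = collections.deque()
        self.target = target_depth_samples

    def push_packet(self, samples):
        # Called whenever a packet arrives -- arrival timing is bursty.
        self.fifo.extend(samples)

    def ready(self):
        # Hold off playback until enough is buffered to ride out
        # the worst-case arrival variation.
        return len(self.fifo) >= self.target

    def pull_sample(self):
        # Called once per tick of the local output clock.
        if self.fifo:
            return self.fifo.popleft()
        return 0  # underrun: the buffer was sized too small for the jitter
```

As long as the depth covers the worst-case arrival variation, the output side never sees the network's timing at all.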
Scott
In general, the DAC circuits will be affected by at least these issues:
*Jitter from the clock oscillators
*Jitter from the driver/output stage. One common fault is that the rise and fall times are never exactly equal; this causes jitter, and it is even worse when the data is sampled on both edges.
Other issues affect the signal upstream of the DAC, and they are mostly additive on top of the DAC's own jitter:
*SPDIF is a source-synchronous signal, i.e. the clock is embedded in the data wire, so you inherit jitter from the source oscillator and driver.
*I2S can be an improvement since the clock is carried on a separate wire and data is captured on consistent edges for a given channel.
*Reclocking techniques that use a synchronizing FIFO and buffering can reduce jitter, but you will always have jitter from the local oscillator (see the sketch after this list).
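To make that last point concrete, a toy sketch (the jitter figure is invented purely for illustration, not measured from any real part):

```python
import random

# Even with a synchronizing FIFO that is never empty, the instants at
# which samples are clocked into the DAC are set by the local oscillator,
# and its edges are never perfectly even.  Figures below are invented.

SAMPLE_PERIOD_NS = 22675.7    # one sample period at 44.1 kHz
OSC_RMS_JITTER_NS = 0.05      # hypothetical RMS edge jitter of the oscillator

def output_edge_times(n_samples):
    """Times at which reclocked samples actually reach the DAC."""
    edges = []
    for i in range(n_samples):
        ideal = i * SAMPLE_PERIOD_NS
        # The FIFO removed the arrival jitter; this term is what remains.
        edges.append(ideal + random.gauss(0.0, OSC_RMS_JITTER_NS))
    return edges

if __name__ == "__main__":
    edges = output_edge_times(10)
    periods = [b - a for a, b in zip(edges, edges[1:])]
    print("sample-to-sample periods (ns):", [round(p, 4) for p in periods])
```

However deep you make the buffer, that residual term never goes away; it is set by the oscillator, not by the data path feeding it.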
TDM... clearly you don't own a cellphone

.. TDM is just a way to share different types of information on a given wire. Most networking protocols can either tolerate a dropped packet or pay the latency cost of resending the data, but neither addresses jitter at the DAC itself.
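For example, here is a rough sketch of the kind of transport-level recovery I mean (invented names and protocol, not any real audio-over-IP stack); it can make the delivered data bit-perfect, but it says nothing about when the conversion clock edges land:

```python
# Sequence numbers catch loss and reordering; a retransmit request trades
# latency for exact data.  None of this changes *when* the DAC's conversion
# clock edges land -- it only guarantees *what* bits are waiting when they do.

def reassemble(packets, expected_count, request_retransmit):
    """Return the audio payloads in sequence order, re-requesting anything missing."""
    received = {seq: payload for seq, payload in packets}
    for seq in range(expected_count):
        if seq not in received:
            # Costs latency (and buffer depth), but the data ends up exact.
            received[seq] = request_retransmit(seq)
    return [received[seq] for seq in range(expected_count)]

if __name__ == "__main__":
    arrived = [(0, b"\x01\x02"), (2, b"\x05\x06")]   # packet 1 was lost in transit
    audio = reassemble(arrived, 3, lambda seq: b"\x03\x04")
    print(audio)   # bit-perfect data, recovered at a latency cost
```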