Slave/Master relationships in the audio world


chadh

Slave/Master relationships in the audio world
« on: 15 Apr 2005, 02:19 am »
My apologies: this is kinda long.  Mostly because I understand so little...

I visited this page today: http://www.lessloss.com

This is the second time I've read their information, and the second time that it struck me how sensible their ideas seemed.  On the other hand, in technical matters I clearly qualify as a moron.  I was hoping, then, that someone might be able to set me straight as to whether this deal they make about which clock should be master makes any real sense.

As far as I understand it, they argue that the clock in your transport should be disabled, and instead a clock in your DAC should do all the work.  This way, a horrible, jitter-ridden stream of data cruising down your digital cable from transport to dac will have no effect at all on playback, because the data is all synchronised with the dac clock once it has arrived at the dac unit.  The only jitter that affects playback will enter between the clock and the dac chip, which are conveniently snuggled up right next to each other.  Making your transport "slave" to the dac clock, then, reduces jitter to almost negligible levels.  This seems to make a lot of sense to me.

This is contrasted with "asynchronous reclocking", or some-such description, in which both the transport and the dac have operational clocks which almost surely are not perfectly synchronised (leading to distortions because of the asynchronicity), and with the standard arrangement in which the transport clock is the master, so that all the jitter introduced in the journey from transport clock to dac chip is preserved.  These criticisms also make sense to me.

Interestingly, for these people the issue of oversampling or non-oversampling becomes a secondary issue.  They opt for oversampling, but suggest somewhere that oversampling might increase the detrimental effects of jitter in a typical jitter-ridden system (which would explain why a lot of people prefer non-oversampling dacs to oversampling ones, even though there are good technical reasons why non-oversampling ones should sound, and always measure, worse).

So, this raises a bunch of questions for me...

1.  Does their argument really have merit?

2.  With all these people modifying transports and marketing dacs, why don't we see them disabling the transport clock and installing a master clock in the dac?  I've certainly seen people hoping for a reclocking device in the dac, and plenty of mods are available that simply add a clock to the dac - but I get the impression that this would correspond to the "asynchronous" arrangement described above.  The LessLoss people need to introduce a second digital cable between transport and dac (to carry clocking information from dac to transport) - is this the really prohibitive part of the arrangement?

3.  I keep seeing vague references from people about installing a reclocking device in the dac, and then hooking it up to the PC which streams data through a USB connection, and magically delivers a jitter-free digital system.  Are they right?  Is this because the USB connection is effectively "untimed", so that the dac clock is the only one in the system, and therefore master?  Or is the USB stream timed, which would leave us with the asynchronous reclocking situation once again?

The thing that gets me is, if the lossless people are right, then it seems like a pretty easy process to have your dac enslave your transport.  Furthermore, if you're going to do this, you may as well use the cheapest transport you can find, because all of its jitter-reducing qualities are irrelevant.  And you can also save a pile of cash on the digital cable - its quality is irrelevant now.

I hope someone can show me the error of my (their) ways!

Thanks,

Chad

panomaniac

Slave/Master relationships in the audio world
« Reply #1 on: 15 Apr 2005, 12:49 pm »
Very interesting subject, Chad. Thanks for posting and for the link.
While I'm no expert on the subject, that won't stop me from joining the discussion. :)
I read the Lessloss stuff and it makes plenty of sense. However, there are a few things that they don't mention.
CDs, for one. The transport is not the only source of jitter; the CD medium itself can, and does, contain jitter. Nothing is perfect. I don't see how their master/slave solution would eliminate jitter coming from the CD medium.

On the site they talk about using a cheap drive slaved to the DAC clock, and that the cheap drive will work just as well as an expensive one. Maybe. But that means that the cheap drive has to be able to follow the master clock nearly perfectly. Can it? Can a better drive do it better?

Also, if you read some of the good sites that review and test CD/DVD burners and drives, you will see that not all are created equal.  Tests of damaged discs show which drive can read thru scratches better than others. Certainly this must have an effect on error rate.

I was reading the manual for my Plextor DVD burner that has software for testing all sorts of parameters. Jitter, optimum pits vs. lands, focus, error rates, etc. All that must make a difference. I can't believe that all drives can extract exactly the same digital "file" as stated on the site. Maybe off a perfect disc, maybe.

That brings us to USB. Since the bitstream coming from the USB can be buffered, all jitter can be eliminated. The DAC clock is now the master, because it calls for the information from a memory buffer, not a spinning disc. USB (and other protocols) can resend lost or corrupted data, too - something a CD won't do. But I don't know if USB DACs make use of this feature.
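
Here's a toy sketch of the buffering idea (my own illustration in Python, with made-up numbers - not how any real USB DAC firmware works):

Code:
from collections import deque

fifo = deque()  # samples waiting in memory

def usb_packet_arrives(samples):
    # Host-side timing is irregular; all we do is append to memory.
    fifo.extend(samples)

def dac_clock_tick():
    # Called at exactly the local oscillator's rate. The transport's
    # timing is gone; only an empty (underrun) or overflowing (overrun)
    # buffer can reintroduce trouble.
    return fifo.popleft() if fifo else 0.0  # play silence on underrun

usb_packet_arrives([0.1, 0.2, 0.3])  # whenever the host gets around to it
print(dac_clock_tick())              # 0.1, delivered on the DAC's schedule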

So it seems to me that the "Lessloss" approach is a very good one, but it can't be better than an asynchronous buffer method.

Expert opinions?

JoshK

Slave/Master relationships in the audio world
« Reply #2 on: 15 Apr 2005, 02:11 pm »
I think you touched upon the answer to your question on why passing the clock back to the transport as slave isn't done more often.  We consumers are as confused on the matter as we can be.  It doesn't seem that any of these expert modders are offering advice on the matter either, lest they lose clientele for transport mods.  OK, that is a bit cynical, but probably not far off from the truth.

This idea has been kicked around a bit in the DIY community.  I have seen some DIYers add a TentClock to their dac and then pass the clock back to the source via BNC cables.  It struck me as genius, since I didn't know this was a plausible option.  Many of us here have long cursed the S/PDIF format as being fatally flawed but few have offered any resolution as to what to do about it.  Might this be the answer?

Professional gear (Apogee, Lynx, et al.) has had the ability to pass the clock to another piece of gear for a while, and a few really high-end digital manufacturers are starting to catch on (dCS, ?? Jason).  I personally am going to have a look at the threads on DIYaudio in regards to the TentClock, as I seem to remember something in those threads about passing clocks.

Of course, all this while you can just put your DAC inside the transport and forget all about jitter, except for jitter created by the CD medium itself.

ctviggen

Slave/Master relationships in the audio world
« Reply #3 on: 15 Apr 2005, 02:21 pm »
Somewhere something has to be "timed" in the sense that you have to know when to sample in order to determine a 1 or a 0 and you have to develop a 44.1kHz bitstream.  As for feeding the clock from your DAC to the player, this has problems too.  Anytime you feed a clock from one place to another, you introduce delay. Then you introduce reflections if the input isn't impedance matched, etc.  Clocking is a very hard subject.
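
To put a rough number on the delay part (a back-of-the-envelope sketch; the two-thirds-of-lightspeed propagation velocity is my assumption for typical coax, not a measurement):

Code:
c = 3.0e8            # speed of light, m/s
v = (2.0 / 3.0) * c  # assumed propagation velocity in coaxial cable
for length_m in (0.5, 1.0, 5.0):
    print(f"{length_m} m of cable: {length_m / v * 1e9:.1f} ns one-way delay")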

chadh

Slave/Master relationships in the audio world
« Reply #4 on: 15 Apr 2005, 03:25 pm »
Thanks for the comments.

As far as transports reading data perfectly - the point is well taken that a good transport is better able to read a damaged disc, which still generates an argument in favour of superior transports.  I think.  It still confuses me that crappy cd drives in my PC are able to extract all my data perfectly, without having to resort to spending thousands of dollars on the transport.  And while some data discs are unreadable in my PC, I don't really see an industry providing all sorts of exotic transports to reduce the frequency with which discs fail to be read.  I mean, I guess all the manufacturers always try to improve their transport mechanisms, but not with the same passion that audio people do.

I think panomaniac answered the USB question:  it sounds as though the buffering process ensures that timing from the transport is irrelevant, so the only timing issue occurs at the dac clock, which sends the data to the dac at the precisely timed rate.  No jitter!

Josh must also be right, that putting dac and transport in the one box avoids these problems.  But it likely creates others - again, for what it's worth, the lessloss people claim that separating the units is vital for electromagnetic shielding purposes, or something like that.  But it seems to me that, if jitter between separate transport and dac were eradicated, then the question of separate transport and dac vs. the one box solution would become religious, sort of like the choice between separate pre-amp and amp, or using an integrated.

I think it would help if I knew more about this whole clocking process.  So I'll ask some more questions.  

ctviggen said "Somewhere something has to be "timed" in the sense that you have to know when to sample in order to determine a 1 or a 0."  Where does the sampling occur: in the transport at the data retrieval stage, or in the dac?  I guess it's in the dac (that's why we have upsampling dacs and the like).  So, in the typical system, there are actually two things that are timed, right?  We use clocks to determine the rate at which information is pulled off the disc, and the rate at which data is fed into the dac?

The standard approach has a clock at the transport timing the data retrieval precisely.  Then, two things are sent to the dac: the data from the disc, plus additional information about timing from the clock. Bad things happen on the way, and we get a jittery mess.  The dac takes the timing information from the signal, and uses that to determine the source data from which it should sample at any given moment.  If it is jitter-ridden, then potentially the wrong thing is sampled at any given time.  Does this sound right?

If the dac clock is master, though, it sends signals to the transport and tells the transport when to retrieve data.  That means one thing is sent from dac to transport.  The transport needs only to send one thing back to the dac:  and that's the source data.  This arrives at the dac, and what's more, the dac knows exactly the rate at which the data was retrieved because it determined that timing, AS LONG AS THE TIMING INFORMATION WAS RECEIVED ACCURATELY AT THE TRANSPORT.  So, when ctviggen says

"Anytime you feed a clock from one place to another, you introduce delay. Then you introduce reflections if the input isn't impedance matched, etc. Clocking is a very hard subject..."

and panomaniac says

"But that means that the cheap drive has to be able to follow the master clock nearly perfectly..."

are these just warnings that the clocking information sent from dac to transport is subject to exactly the same transmission problems as source data transmission?  This seems like a plausible claim - but then again, it seems like instructions regarding the timing of data retrieval are a much simpler animal than source data (especially as they remain constant while playing), so they could potentially be transmitted with fewer problems than source data.

Hmmm...that should make it clear that I have no technical knowledge.

Chad

PhilNYC

Slave/Master relationships in the audio world
« Reply #5 on: 15 Apr 2005, 03:45 pm »
Here's a description of the Master Clock mode of the dCS Elgar from the dCS website:

Master Mode clock operation
One of the problems of using the standard AES/EBU and SPDIF digital interfaces between a CD transport and a D/A converter, is that the D/A converter has to extract the clock it needs to synchronise its operation from the incoming data stream. This is in itself not a difficult operation. The difficult part is to obtain a stable clock, as the incoming stream will always contain a certain amount of instability, which is referred to as jitter. There are a number of possible sources of jitter. These can include:

• Instability in the digital source's (e.g. CD transport) clock
• Insufficient bandwidth in the digital source's digital output circuitry
• Poor quality or unsuitable digital cable
• Noise and interference
• Insufficient bandwidth in the D/A converter's digital receiver circuitry

The solution to this is not to try and extract the clock from the incoming data, but to drive the transport and the D/A converter from the same clock source, which ideally is located within the D/A converter itself. This is exactly what the Elgar Plus' Master Mode function facilitates. In this mode, the Elgar Plus outputs an ultra-stable clock signal at 44.1kHz, which when used with the dCS Verdi SACD/CD transport or any other transport able to lock to such a signal, enables an even higher level of fidelity to be achieved than when the standard digital interfaces are used.

The result is more precise stereo imaging, better resolution of low level detail and better defined bass.

It should be noted that this method of synchronisation is not a new technique. It is standard practice in the professional recording environment and we have been using it to great effect on our professional products since 1988.

chadh

Slave/Master relationships in the audio world
« Reply #6 on: 15 Apr 2005, 05:46 pm »
Thanks Phil.

The dCS quote sort of begs the question, though:  if it's something that has been used by the professional recording industry for so long, why has it so rarely been implemented for consumer audio?

Well, I suppose I know the answer: most dacs get used with CD players as transports, and the CD players already have clocks.  It's messy to disable the clock and enslave the CD player to the dac. This means that most of the time, dacs are sold to people who will be using a clock from the transport, so including a clock in the dac is redundant, and costs a whole lot of money.  So manufacturers don't produce dacs with clocks.  And given that dacs aren't manufactured with clocks, there's no market for transports without clocks.

dCS can get away with it by marketing the transport and dac together, and seemingly building in some extra circuitry to enable you to switch on/off either of the clocks.

Still, given that some of the modders around here are installing clocks in their modified dacs, AND they are modifying transports, might there be a market for disabling the transport clock and enslaving the transport?  Or is there some significant problem with implementation?  Or do people just think it provides little or no improvement?

Chad

thayerg

Slave/Master relationships in the audio world
« Reply #7 on: 15 Apr 2005, 08:29 pm »
This seems like one more example of a manufacturer exploiting a (hypothetical or real) technical issue to create a selling proposition, unique or otherwise.

I wonder whether dejittering the digital signal accomplishes the same benefit as slaving the transport to the DAC clock. It's a whole lot easier and you don't void any warranties.

I once had an audio salesman try to sell me on a pricy digital interconnect because according to him the cheap ones add jitter to the signal. I would have gotten angry but he probably believed it.

But I'm the type of person who buys the cheapest transports I can find with digital outputs, and I let the jitterbug sort it out. I've borrowed multi-kilobuck transports and they sound exactly like my hundred-buck Sony DVD player does.

chadh

Slave/Master relationships in the audio world
« Reply #8 on: 15 Apr 2005, 08:53 pm »
Honestly, I'm quite open to the argument that this whole idea of slaving the transport is a crock.  But I don't think that the jitterbug solution can solve all the problems.  

If I understand it properly (which is quite possibly not the case), a jitterbug will simply replace the transport master clock with another master clock which is still distant from the dac.  Now the jitterbug sends the data AND timing information to the dac, along a digital cable with all the problems you usually have.  Granted, the jitterbug doesn't have all those moving parts, and probably has a superior clock to the one in the transport, but it will still send jitter to the dac.  No?

Actually, if you tell me that I'm wrong, I'll be elated.  I use a Monarchy DIP between my rotel cdp and Bolder smART DI/O.

Chad

thayerg

Slave/Master relationships in the audio world
« Reply #9 on: 15 Apr 2005, 10:24 pm »
It will send a jitter-free signal to the DAC. The jitterbug uses a PLL circuit to derive the clock timing and then resyncs the data and the clock. At least that is how I understand it. The PLL is like a flywheel whose known speed provides stable timing.
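
If it helps, here's a toy model of that flywheel idea (my own sketch with made-up numbers, not the actual PLL circuit in any jitterbug): only a small fraction of each incoming timing error is allowed to move the flywheel, so fast wiggle is smoothed out while the long-term rate still tracks the source.

Code:
import random

period = 1.0 / 44_100  # nominal interval between incoming edges, seconds
estimate = 0.0         # the "flywheel": a running prediction of edge times
t = 0.0
alpha = 0.01           # loop gain; small = heavy flywheel = more smoothing

for i in range(10):
    t += period + random.gauss(0.0, 5e-9)  # edge arrives with 5 ns RMS jitter
    estimate += period                     # flywheel coasts to the next edge
    error = t - estimate                   # how far off the prediction was
    estimate += alpha * error              # nudge the flywheel only slightly
    print(f"edge {i}: raw timing error {error * 1e9:+6.2f} ns")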

I was far from convinced until I got a jitterbug which has a bypass switch and one can definitely hear the difference--everything sounds more coherent and natural. I wouldn't trust my ears to remember the difference if I had to physically swap cables.

The jitterbug in question, a Meridian 518 (which does a bunch of other stuff besides jitter correction, mostly too esoteric for my pretty little head), has four digital inputs and so I was also able to A/B different transports. No difference at all. Sometimes a small difference if I bypassed jitter correction, but nothing worth obsessing over.

To put it another way, the effect of jitter correction is much more significant than differences between the digital outputs of different transports or players, no matter how low or high end. Which is where my experience squares completely with the lessloss assertions.

Ulas

Re: Slave/Master relationships in the audio world
« Reply #10 on: 16 Apr 2005, 12:14 am »
Quote from: chadh
I was hoping, then, that someone might be able to set me straight as to whether this deal they make about which clock should be master makes any real sense.

Slaving the CDT to a master oscillator in the DAC can be beneficial. It’s not done in consumer audio because it requires additional complexity and cost (cable, connectors, and circuits to transmit and receive the clock) and there are no standards. There are nearly a dozen different clock frequencies used in different CDPs today. Even if there were a standard method of slaving the CDP, you can bet some or all manufacturers would cut corners and provide sub-standard implementations, making the CDP/DAC interface less reliable than it is now.

Ulas

Slave/Master relationships in the audio world
« Reply #11 on: 16 Apr 2005, 12:24 am »
Quote from: JoshK
Many of us here have long cursed the S/PDIF format as being fatally flawed but few have offered any resolution as to what to do about it.

There would be nothing wrong with S/PDIF if it were used as intended and if there were an organization that enforced the standards. When S/PDIF was developed, all audio DAC chips were mono and had a parallel interface. There was a pin for each bit and one for the word clock that initiated the D-to-A conversion. A multi-wire interface was undesirable, so S/PDIF was created to carry the word clock and data bits on a single wire.

As designed, the embedded clock was intended to only extract the data bits and shift them into a serial-to-parallel shift register, which fed some latches, which fed the parallel DAC chip(s). The word clock is transmitted as the sub-frame preamble that marks the sample boundaries. Paraphrasing from the CS8414 data sheet:
Quote
The word clock (FSYNC) is extracted at times when the intersymbol interference is at a minimum and it provides a sample frequency clock that is as spectrally pure as the digital audio source clock.
As I read it, FSYNC is about as good as it gets, all things considered.

Then came the stereo DAC chip and the serial interface, which dramatically reduced parts count and cost. Somewhere along the way, the word clock was demoted to a word selector and the bit clock (SCLK) was used to initiate the D-to-A conversion. That led to the present-day circus with jitter boxes, reclockers, and other things to make the bit clock something it was never intended to be.

If you think S/PDIF sucks, take a look at USB…It doesn’t even have a clock! Every millisecond the host sends the device a bunch of samples in a burst, and the device is supposed to play those samples, however many it received, during the next millisecond. In the case of non-integer sample rates, such as Red Book CD’s 44.1K, some bursts will have more or fewer samples than others. The USB transmission protocol used for digital audio has no provisions for error detection, correction, or retransmission. Any transmission error results in a 1ms drop-out. When S/PDIF detects a transmission error, it repeats the last sample if the DAC chip is unable to interpolate through the error.
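
The packet arithmetic is easy to check for yourself (a worked example of the above, counting in integer tenths of a sample to avoid floating point):

Code:
acc, sizes = 0, []
for ms in range(10):
    acc += 441         # 44.1 samples per millisecond, counted in tenths
    n = acc // 10      # send every whole sample accumulated so far
    acc -= n * 10
    sizes.append(n)
print(sizes)           # [44, 44, 44, 44, 44, 44, 44, 44, 44, 45]
print(sum(sizes))      # 441 samples per 10 ms = 44,100 per second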

chadh

Slave/Master relationships in the audio world
« Reply #12 on: 16 Apr 2005, 02:14 am »
Wow.  I knew somebody would know what the deal was here.   Thanks Ulas.  I'm not sure how much I understood...but every little bit helps.

One of the things you said makes me more confused though.  You lamented the absence of a clock in USB transmission, and suggested that USB transmission then was even worse than S/PDIF.  Are people misguided then when they talk about adapting their PCs for audio use by using USB to connect the PC to a dac?  The comments I see suggest that this provides for perfect and jitter-free transmission of source data to the dac.  Is this not true? Perhaps the problems you describe are not technically "jitter", but they certainly seem to have timing implications for playback.  And how frequent are the errors that the USB protocol does not detect, correct or retransmit?

Chad

Ulas

Slave/Master relationships in the audio world
« Reply #13 on: 16 Apr 2005, 04:37 am »
Chad,
All clocked circuits have jitter. The bit clock coming from an S/PDIF receiver, such as the CS8414, is the output of a VCO that is locked to the phase transitions in the S/PDIF data stream. Some more modern S/PDIF chips are phase-locked to the sub-frame preambles only. That method is said to eliminate the data modulation effects that can be found in the earlier designs. USB DACs use a VCXO controlled by a microprocessor that calculates the sample frequency based on the number of samples it has in its buffer. It’s hard to say which one has the lowest jitter because it depends on the time span you use in your comparison.
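
A crude sketch of that buffer-driven rate control (my own illustration; the target fill and loop gain are made-up numbers, not any real DAC’s firmware):

Code:
target_fill = 1024  # samples we try to keep buffered (assumed)
rate = 44_100.0     # current VCXO frequency estimate, Hz
gain = 0.001        # Hz of correction per sample of fill error (assumed)

def adjust_rate(buffer_fill):
    # Buffer rising: the host delivers faster than we play, so speed up.
    # Buffer falling: slow down. Long term, the rate tracks the host.
    global rate
    rate += gain * (buffer_fill - target_fill)
    return rate

for fill in (1024, 1100, 980, 1010):
    print(f"fill {fill:4d} samples -> rate {adjust_rate(fill):.3f} Hz")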

Short term, the USB method is superior because it uses a local, crystal-controlled time-base for the bit clock and is not subject to the effects of a distant clock in the CDP and signals traversing a length of wire. But the USB DAC is likely to change its sample frequency every millisecond. Also, its ultimate, long-term time-base is the interval between data packets from the host. How accurate and jitter-free is the PC clock and what is the likelihood the USB controller’s access to memory to get the sample data will be hindered by other activity in the PC, thereby delaying its communication with the USB DAC? (Have you ever tried to use a USB DAC on a slow laptop PC running Windows 9x with the USB port using a shared IRQ? I have, and there was a 1ms dropout every 2ms!) I recall seeing the specified tolerance for the every-millisecond-handshake between the host and the device as being in the multiple nanosecond range but I can’t find it just now. I do know the specified phase noise for USB audio data is +/- one sample. Phase noise is another way of specifying jitter.

A transmission error, in either S/PDIF or USB, is very, very, very rare. It almost never happens, but when it does, a 1ms dropout is a lot more annoying than a single repeated or interpolated sample.

Even if you don’t understand the details, you should at least know that digital audio and its interfaces are not black and white or good and bad. When you read or hear, “USB DACS are error-free and jitter-free,” you know the speaker is trying to sell you something or is just blowing smoke.

chadh

Slave/Master relationships in the audio world
« Reply #14 on: 17 Apr 2005, 02:25 am »
Ulas,

Many, many thanks.  Your explanations have been enlightening indeed.

I shall be on my guard against smoke blowing at all times!

Chad

peakrchau

Slave/Master relationships in the audio world
« Reply #15 on: 17 Apr 2005, 12:36 pm »
Hi Ulas,
USB information is a little sketchy at best and buried in the USB 2.0 spec. I've worked on some aspects of USB 2.0 at my day job, so I thought it might be useful to clarify some of the points you made in your post:


Quote from: Ulas

....long-term time-base is the interval between data packets from the host. How accurate and jitter-free is the PC clock and what is the likelihood the USB controller’s access to memory to get the sample data will be hindered by other activity in the PC (i.e. contention)


The long-term jitter is also determined by the local clock, provided the CPU data delivery subsystems can keep up. The packetized nature of USB is also used for telecommunications (all phones in NA) and assumes that the excess bandwidth (480 Mbit/s peak for USB 2.0 versus 2.8 Mbit/s sustained for Red Book) can be used to realize a "hurry up and wait" protocol.  What this means is that data is "queued up" and rammed/stuffed to a USB FIFO in large chunks and then parcelled out by a local (jitter free) clock in the DAC.  This type of technology was too expensive when Sony/Philips invented CD playback in the seventies.


Quote from: Ulas

(Contention will) thereby delay its communication with the USB DAC? (Have you ever tried to used a USB DAC on a slow laptop PC running Windows 9x with the USB port using a shared IRQ? I have and there was a 1ms dropout every 2ms!)


For CPUs which have onboard video and sound, the memory and CPU subsystems will be taxed more and may allocate less of the available resources to the USB subsystems. Running Quake while using a USB DAC is not a good idea. Theoretically, USB 2.0 could support about 170 Red Book streams (480/2.8) while USB 1.1 could support about 4 Red Book streams (12/2.8). Your personal experience with USB dropouts may be widespread among PCs, but it is not a flaw in USB itself.


Quote from: Ulas


I recall seeing the specified tolerance for the every-millisecond-handshake between the host and the device as being in the multiple nanosecond range but I can’t find it just now. I do know the specified phase noise for USB audio data is +/- one sample. Phase noise is another way of specifying jitter. ...
 


In USB a packet consists of about 500 data tokens. This amounts to about 1 ms of data. The USB spec requires that the local USB clock (receiver) and the slave (DAC) be within 500ppm of each other. Over the standard packet length, this amounts to the +/- one sample you mentioned. This has nothing to do with the received jitter, as this will be "buffered out".

regards,
PeAK

Ulas

Slave/Master relationships in the audio world
« Reply #16 on: 17 Apr 2005, 04:41 pm »
Quote from: peakrchau
What this means is that data is "queued up" and rammed/stuffed to a USB FIFO in large chunks and then parcelled out by a local (jitter free) clock in the DAC.

First of all, in the real world there is no such thing as a jitter free clock. Second, the data packets awaiting transmission to the USB device are queued in the PC’s memory. The USB controller uses DMA to retrieve the data and send it to the device. That’s one place where contention for resources in the PC can disrupt the continuity of USB communication. Another is the USB host/device handshake. That requires hardware resources, such as free IRQs, and sufficient CPU cycles to complete the transaction in a timely manner. Your comment, “Running Quake while using a USB DAC is not a good idea,” proves my point.

Quote from: peakrchau
Your personal experience with USB dropouts may be widespread among PCs, but it is not a flaw in USB itself.

True, but USB DACs are rarely used without a PC.

Quote from: peakrchau
The USB spec requires that the local USB clock (receiver) and the slave (DAC) be within 500ppm of each other. Over the standard packet length, this amounts to the +/- one sample you mentioned. This has nothing to do with the received jitter, as this will be "buffered out".

Latency in the timing of the 1ms packets is very different from clock accuracy (ppm). I don’t know what you mean by “received jitter.”

As everyone knows, audio CDs use a 44.1kHz sample rate. The period of each sample is approx. 22.67573696 microseconds. For digital audio to work at all, the clock period used in the D-to-A process must be identical to the clock period used to create the audio samples. Any deviation means the resulting audio signal gets the right amplitude but at the wrong time. The result is similar to the audio signal having the wrong amplitude at the right time. In either case, the reconstructed audio signal differs from the signal represented by the digital data. In my book, that deviation is distortion.

USB sends audio data in 1ms chunks. For CD audio, it sends 44 samples in some packets and 45 samples in others. The ratio of 44- to 45-sample packets is such that the aggregate is 44,100 samples per second. Some USB DACs use PLL techniques to smooth out the differences. Others simply set the sample rate to play all the samples in the packet during the next 1ms. The sample period of a 44-sample packet will be 22.72us while that of a 45-sample packet will be 22.22us. The difference is 505ns, or about 2%. It’s not jitter in the usual sense of the term…it’s pitch instability. With phonograph records and magnetic tape it was called wow and flutter, and 2% was absolutely unacceptable.
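
You can check those numbers yourself (a worked example of the arithmetic above, nothing more):

Code:
nominal = 1000.0 / 44.1  # us per sample at exactly 44.1 kHz: 22.6757...
p44 = 1000.0 / 44        # us per sample if 44 samples must fill 1 ms
p45 = 1000.0 / 45        # us per sample if 45 samples must fill 1 ms
print(f"44-sample packet: {p44:.2f} us, 45-sample packet: {p45:.2f} us")
print(f"difference: {(p44 - p45) * 1000:.0f} ns, "
      f"or {(p44 - p45) / nominal * 100:.1f}% of the nominal period")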

Some audiophiles obsess over a few picoseconds of jitter in S/PDIF, which, if your equipment is not broken, is of absolutely no consequence, yet they call USB, with its 2% wow and flutter, “perfect.” Echoes of when digital audio was introduced as “Perfect Sound, Forever.”

Let me repeat: All clocked circuits have jitter and there is no such thing as a jitter-free clock. With respect to DACs, there is only one place where timing is critical and that is the moment of the D-to-A conversion. When the clock period can vary by as much as 500ns, as with USB, or 8-10ns, as with asynchronous reclocking, who cares about a few picoseconds?
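
To put rough numbers on that, here is a toy simulation (mine; it samples a sine wave at jittered conversion instants and measures the error - right amplitude, wrong time - for the three jitter magnitudes above):

Code:
import numpy as np

fs = 44_100.0  # sample rate, Hz
f = 10_000.0   # test tone, Hz; the error grows with signal frequency
t_ideal = np.arange(8192) / fs

for jitter_rms in (10e-12, 10e-9, 500e-9):  # ps-level, reclocker, USB
    t = t_ideal + np.random.normal(0.0, jitter_rms, t_ideal.size)
    err = np.sin(2 * np.pi * f * t) - np.sin(2 * np.pi * f * t_ideal)
    print(f"{jitter_rms * 1e9:8.3f} ns RMS jitter -> "
          f"error RMS {np.sqrt(np.mean(err ** 2)):.2e}")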

peakrchau

Slave/Master relationships in the audio world
« Reply #17 on: 19 Apr 2005, 05:23 pm »
Quote from: Ulas
...Some audiophiles obsess over a few picoseconds of jitter in S/PDIF, which, if your equipment is not broken, is of absolutely no consequence, yet they call USB, with its 2% wow and flutter, “perfect” ...


I'm afraid that you are still misrepresenting the "dac clock" jitter in a USB DAC...it can be independent of the data packet flutter that you mention. In your example of the 44/45 samples, the packets could be sent over a two-microsecond (not millisecond) interval. The two packets, containing all the information needed by the DAC, can be sent and held in a two-deep (in terms of packet depth) buffer while the USB controller on the PC side services 998 other packets of data related to other USB devices (i.e. 1000 * 1us = 1 millisecond) before needing to return to service the USB DAC. A better scheme would be to make the buffer 20 deep and to send 20 packets' worth of data (20 microseconds of USB time) at a time, so that data would not need to be sent again for a 10 millisecond Red Book interval. In between, the data could be parcelled out from a local buffer to meet Red Book requirements with an asynchronous clock, free of the confines of the large jitter native to the PLL-based S/PDIF scheme.

For audio we do not care about the few picoseconds on a single cycle (like you said) BUT we do care about how this jitter accumulates over time and manifests itself at lower frequencies. As I said, USB gives us the potential to remove this component of jitter, because USB 2.0 can deliver packets far faster than real time, and this is what people are hearing in hard-drive based playback.
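
Here is a toy sketch of the deeper-buffer scheme (my own illustration with made-up buffer sizes, not a real driver): the host bursts ~10 ms of samples at a time and the DAC drains the FIFO at exactly 44,100 Hz, so host-side timing slop never reaches the conversion clock.

Code:
BURST = 441      # samples per host burst (~10 ms of Red Book audio)
LOW_WATER = 441  # refill when one burst's worth remains (assumed)

buf = 2 * BURST  # start with two bursts queued
for ms in range(1, 31):
    buf -= 44 if ms % 10 else 45  # DAC drains 44.1 samples/ms on average
    if buf <= LOW_WATER:
        buf += BURST              # host tops up the FIFO in one quick burst
        print(f"{ms} ms: burst refill, occupancy now {buf} samples")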



PeAK