BDA-1: 16 v. 24-bit setting in OS X "Audio MIDI Setup"


bob stern

  • Jr. Member
  • Posts: 44
When playing 16-bit, 44.1 kHz audio files from the Toslink output of a Mac, does it make any difference whether Audio MIDI Setup is configured to output 16/44 versus 24/44?

When the BDA-1 receives 16-bit data, does it convert the data to 24 bits before sending it to the SRC4392 for upsampling from 44.1 to 88.2?  Or does the BDA-1 operate the SRC4392 and CS4398 in 16-bit mode when it receives 16-bit data?

If the BDA-1 converts 16-bit data to 24 bits before sending it to the SRC4392 for upsampling, then I imagine it would be irrelevant whether Audio MIDI Setup is set to 16 or 24 bits.

Conversely, if the BDA-1 operates the SRC4392 and CS4398 in 16-bit mode when it receives 16-bit data, it might be better to force it into 24-bit mode by setting Audio MIDI Setup to 24 bits.  My thinking is that the upsampling from 44.1 to 88.2 performed by the SRC4392 employs an interpolation algorithm to double the number of samples, and that this interpolation will be more accurate with 24-bit output because some of the interpolated data values will be between two adjacent 16-bit values.  In fact, it seems to me that this interpolation produces the same benefit as dither.
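To make my reasoning concrete, here's a toy example in Python (my own numbers, nothing to do with the SRC4392's actual filter):

[code]
# Toy numbers only: a midpoint between two adjacent 16-bit codes has no exact
# 16-bit representation, but it is exact on the 24-bit grid.
a16, b16 = 1000, 1001               # two adjacent 16-bit sample codes
midpoint = (a16 + b16) / 2          # 1000.5 -- falls between 16-bit steps

midpoint_24 = int(midpoint * 256)   # 256128: exact, since one 16-bit step = 256 24-bit steps
print(midpoint_24)                  # a 16-bit output would have to round this half-step away
[/code]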

(I assume that if Audio MIDI Setup is configured to output 24/44, then Core Audio will not perform dither, but will simply pad the 16-bit data with zeroes to convert it to 24 bits.  However, this seems no worse than letting the SRC4392 or CS4398 chip in the BDA-1 perform the conversion from 16 to 24 bits because the spec sheets for the SRC4392 and CS4398 do not mention dither.)
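If my assumption about zero-padding is right, it amounts to nothing more than this (my guess at the behaviour, not Apple's actual code):

[code]
def pad_16_to_24(sample_16: int) -> int:
    """Zero-pad a signed 16-bit sample to a 24-bit word (shift left 8; low byte is 0)."""
    return sample_16 << 8

# Same value on the 24-bit scale, with eight zero LSBs appended -- no dither involved.
print(hex(pad_16_to_24(0x1234)))   # 0x123400
[/code]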

FYI, my question is prompted by a topic to which James responded on the Computer Audiophile forum:
http://www.computeraudiophile.com/content/Connecting-Bryston-BDA-1-MacBook-Pro
« Last Edit: 16 Jun 2009, 01:17 am by bob stern »

James Tanner

  • Facilitator
  • Posts: 20483
  • The Demo is Everything!
    • http://www.bryston.com
Re: BDA-1: 16 v. 24-bit setting in OS X "Audio MIDI Setup"
« Reply #1 on: 16 Jun 2009, 12:26 am »
Hi Bob,

You're way over my head on this one - I will forward to engineering.

james

ian.ameline

Re: BDA-1: 16 v. 24-bit setting in OS X "Audio MIDI Setup"
« Reply #2 on: 16 Jun 2009, 02:34 pm »
Hi Bob, James,

The short answer -- on the Mac, set the word length to 24 bits and the sample rate to match the source material. On the BDA-1, have upsampling on. You are correct in assuming that there is no difference if the source sample rate matches the output sample rate -- it's only a question of where the extra 0s come from.

Why?

Toslink/S/PDIF has room for 24 bits per sample -- when sending 16 or 20 bits, the remaining low-order bits are 0. The SRC4392 chip in the BDA-1 takes those 24 input bits, uses 28 bits (fixed point, not floating point) internally for its math, and produces 24 bits of output per sample. The extra guard bits in the SRC chip ensure <0.5 ULP (in a 24-bit result) of numerical error in its computations. When you set the word length on the Mac to 24 bits and the sample rate to match the source, the Mac just zero-pads the low-order bits and sends the samples unchanged to the output. (This is no different from setting 16-bit output when playing CD-sourced material.)
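For illustration only -- this is my own toy fixed-point rounding, not the SRC4392's actual datapath:

[code]
# Why guard bits matter: keep 4 extra fractional bits during the math,
# then round once to 24 bits at the end.
GUARD = 4                                    # 28-bit intermediate -> 24-bit output

def round_to_24(acc_28: int) -> int:
    """Round a 28-bit fixed-point accumulator (24 integer + 4 guard bits) to 24 bits."""
    return (acc_28 + (1 << (GUARD - 1))) >> GUARD

acc = (0x123456 << GUARD) + 7                # some intermediate result with guard bits set
err_ulps = abs(acc / (1 << GUARD) - round_to_24(acc))
print(err_ulps)                              # <= 0.5 ULP of the 24-bit result, by construction
[/code]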

Where it gets interesting is when the source sample rate does not match the specified output sample rate. In this case the Mac converts the sample rate (using 32-bit floating-point math in the CoreAudio library), converting back to fixed point at the specified output precision at the last stage. You want this final output precision to be as high as possible. (The 32-bit FP math sounds better, but has less numerical stability than the 28-bit fixed-point math the SRC4392 uses. Why did Apple do it this way, then? The short answer is that getting these things right with fixed-point math is *hard* -- it's much easier to use floating point. The resulting error in a 24-bit result is going to be 3 to 4 ULP -- i.e., 22 to 23 good bits. ULP = units of least precision.) So if you have to do sample rate conversion, it's best to leave it to the BDA-1 to handle that.
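A quick way to see the float32 limitation (this is standard IEEE behaviour, nothing CoreAudio-specific):

[code]
import struct

# float32 keeps a 24-bit significand, so any intermediate value needing more bits
# gets rounded; a few such roundings are where the "22 to 23 good bits" comes from.
x = 2**24 + 1                                       # a 25-bit intermediate value
x_f32 = struct.unpack('f', struct.pack('f', x))[0]  # round-trip through IEEE float32
print(x, int(x_f32))                                # 16777217 vs 16777216 -- low bit lost
[/code]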

The BDA-1 is *always* converting sample rates -- even when upsampling is off on the front panel. Why? Jitter elimination involves reclocking, and that involves continually estimating the input sample rate and converting to a sample rate generated from a known-good, internally generated clock. This is what the SRC4392 is designed to do. When upsampling is off, it is set up to generate output at the known rate most closely matching the estimated input rate, chosen from the short list seen on the front panel of the BDA-1. In this mode, the BDA-1 runs the DAC chips at a higher oversampling ratio (depending on the sample rate the DACs are seeing -- at 192 kHz they oversample 32x, at 44 kHz they oversample 128x (from memory -- before my morning coffee)).
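Roughly what that selection step looks like (my own sketch -- the rate list is my guess at the front-panel values, and the estimate is made up, not Bryston firmware):

[code]
# Snap an estimated incoming sample rate to the nearest supported output rate.
PANEL_RATES = [32_000, 44_100, 48_000, 88_200, 96_000, 176_400, 192_000]

def nearest_panel_rate(estimated_hz: float) -> int:
    """Pick the supported output rate closest to the measured incoming rate."""
    return min(PANEL_RATES, key=lambda r: abs(r - estimated_hz))

print(nearest_panel_rate(44_097.3))   # -> 44100, even though the incoming clock drifts a bit
[/code]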

Oversampling vs. upsampling -- upsampling is changing the sample rate by computing new samples; this is what the SRC4392 does. It reduces sampling noise and makes it possible to use very gentle roll-off filters.

Oversampling is taking the *same* sample value and pushing it through the DAC many times, hoping that the error/noise averages out over some 32 or 64 different tries. And it does :-).

Both are useful, and the combination seen in the BDA-1 produces amazingly good results.
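In toy form (nothing like the real filters -- just the shape of the two ideas):

[code]
# Crude linear interpolation stands in for the SRC4392's real filter;
# plain repetition stands in for the DAC's sample-and-hold oversampling.
samples = [0, 100, 200, 100]

def upsample_2x(xs):
    """Insert a *computed* sample between each pair (here just the midpoint)."""
    out = []
    for a, b in zip(xs, xs[1:]):
        out += [a, (a + b) // 2]
    return out + [xs[-1]]

def oversample(xs, n=4):
    """Push the *same* value through n times."""
    return [x for x in xs for _ in range(n)]

print(upsample_2x(samples))   # [0, 50, 100, 150, 200, 150, 100]
print(oversample(samples))    # [0, 0, 0, 0, 100, 100, 100, 100, 200, ...]
[/code]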

-- Ian.

James Tanner

  • Facilitator
  • Posts: 20483
  • The Demo is Everything!
    • http://www.bryston.com
Re: BDA-1: 16 v. 24-bit setting in OS X "Audio MIDI Setup"
« Reply #3 on: 16 Jun 2009, 02:46 pm »
^
Dan our resident digital expert and designer of the BDA-1 is away so my thanks to Ian for his excellent response.

james

ian.ameline

Re: BDA-1: 16 v. 24-bit setting in OS X "Audio MIDI Setup"
« Reply #4 on: 16 Jun 2009, 03:24 pm »
When I say "The 32 bit FP math sounds better", I mean the *idea* sounds better -- it actually isn't. In practice, it probably sounds the same. (I don't actually believe that a properly conducted double-blind listening test would show any perceivable difference between 32-bit IEEE FP and 28-bit fixed-point math for sample rate conversion, all other things being equal.) In theory, 32-bit FP math introduces slightly more numerical error than 28-bit fixed point, hence my preference for letting the BDA-1 do the sample rate conversion work.

-- Ian.

bob stern

  • Jr. Member
  • Posts: 44
Thanks, Ian!
« Reply #5 on: 16 Jun 2009, 09:07 pm »
I was hoping you'd jump in!

I subsequently found an excellent description of S/PDIF that's very straightforward (curiously, with no identification of the author):
http://www.epanorama.net/documents/audio/spdif.html

It confirms Ian's statement that S/PDIF always includes 24 data bits, with the 8 LSBs set to zero in the case of 16-bit data.  Therefore, a 24-bit DAC such as the BDA-1 effectively treats all sources as 24-bit.

Conclusion:  When playing 16-bit audio files, it makes no difference whether the computer is set to output 16- or 24-bit data.  Leaving it set at 24-bit is safest so that you won't lose resolution if you intentionally or inadvertently engage the iTunes volume control or sample rate conversion (as described by Ian).

ian.ameline

Re: BDA-1: 16 v. 24-bit setting in OS X "Audio MIDI Setup"
« Reply #6 on: 16 Jun 2009, 10:51 pm »
Forgot about volume control in software -- yes, that is another good reason to leave the output depth at 24 bits. Virtually no precision will be lost from a 16 bit CD source when lowering the volume.
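Toy numbers to show what I mean (not what the Apple software literally does internally):

[code]
# Attenuate a 16-bit sample, then quantize to 16-bit vs. 24-bit output depth.
sample_16 = 10001
gain = 0.8                           # some software volume setting below full scale
ideal = sample_16 * gain             # 8000.8, measured in 16-bit LSBs

out_16 = round(ideal)                # 8001        -> 0.2 LSB of error
out_24 = round(ideal * 256) / 256    # 8000.8008   -> ~0.0008 LSB of error

print(abs(out_16 - ideal), abs(out_24 - ideal))
[/code]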