The CS4398 DAC's spec for distortion+noise relative to a full-scale 997 Hz sine wave is -107 dB, which is (107-2)/6 = 17.5-bit resolution. At -20 dB the distortion+noise spec is -97 dB, which implies 117 dB relative to full scale, or (117-2)/6 ≈ 19-bit resolution.
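For anyone checking my arithmetic: those bit counts come from the ideal-quantizer relation SNR = 6.02*N + 1.76 dB, which I've rounded to 6 and 2 above. A quick Python sanity check:

```python
def bits_from_db(db):
    """Effective bits implied by an SNR figure, via SNR = 6.02*N + 1.76 dB."""
    return (db - 1.76) / 6.02

print(f"{bits_from_db(107):.1f}")  # ~17.5 bits at full scale
print(f"{bits_from_db(117):.1f}")  # ~19.1 bits implied by the -20 dB spec
```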
Since the DAC chip has only 19-bit resolution (add another half-bit for two DACs used in balanced mode), it is unclear whether a difference between 27- and 28-bit precision in the SRC interpolation filter will be audible. I expect a difference in the SRC algorithm would matter more than the 1-bit difference in precision.
Let me discuss your second point first -- you are correct in saying that float has one fewer bit of precision than the input or output samples of the sample rate converter, but if you use single-precision float the problem is exacerbated by the fact that there are no guard digits to preserve numerical accuracy through the rate-conversion calculations -- this is exactly why the SRC4392 has 4 extra bits of precision in its data paths and ALUs. So I'm talking about the difference between 23 and 28 bits. The increased precision in the 4392 could certainly be matched (and exceeded) by the CPU were it to use double-precision floats, or to keep things fixed-point and use 32-bit integers for the calculations. But that is not how I understand Core Audio works on the Mac.
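To make the guard-digit point concrete, here is a toy sketch (made-up random coefficients -- not TI's filter, not Apple's code) showing how a long multiply-accumulate run entirely in single precision drifts away from the same computation done with a wider accumulator:

```python
import numpy as np

rng = np.random.default_rng(0)
taps = rng.normal(size=1024)   # stand-in for interpolation-filter coefficients
x = rng.normal(size=1024)      # stand-in for input samples

def mac(dtype):
    """Naive multiply-accumulate with products and accumulator held at dtype."""
    acc = dtype(0)
    for c, s in zip(taps.astype(dtype), x.astype(dtype)):
        acc += c * s
    return float(acc)

ref = mac(np.float64)                        # wide accumulator ~ "guard bits"
err = abs(mac(np.float32) - ref) / abs(ref)
print(f"relative error of the all-float32 path: {err:.1e}")
```

The last few bits of the float32 result are already rounding noise; soaking up exactly that kind of error is what I take the SRC4392's four extra data-path bits to be for.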
I agree with you that the choice of algorithm is probably more important than these precision nit-picks (but picking numerical-precision and stability nits is part of what I do for a living, and I like it). It is also unlikely that Apple is using the same algorithm, as the one the SRC4392 uses is patented by TI. (I've had a read of that patent, and I like their algorithm -- I think it's quite elegant.)
As to the first point -- the spec sheets disclose minimum and "typical" numbers. (Page 39 of the spec sheet defines their measurement terms, but gives little detail on the circumstances under which they were obtained (temperature, power-supply stability, etc.) -- there is a weak implication that they were obtained using the CDB4398 reference board.) Yes, you are right that adding a second DAC in balanced mode adds half a bit of effective precision. But I wonder how Bryston measures 140 dB S/N (unweighted) with an AP2700, which implies 23 bits. Could they be selecting DACs? (They've been known to hand-select components before.) Perhaps they have a more stable power supply than Crystal assumes when quoting "typical" specs.
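For completeness, the half-bit figure: summing two matched DACs doubles the signal amplitude, while their noise (assuming it's uncorrelated) only adds in power. In Python, along with Bryston's 140 dB run through the same bits formula:

```python
import math

# Signal from two matched DACs adds coherently (+6.02 dB); uncorrelated
# noise adds in power (+3.01 dB): a net ~3 dB SNR gain, about half a bit.
gain_db = 20 * math.log10(2) - 10 * math.log10(2)
print(f"{gain_db:.2f} dB ~ {gain_db / 6.02:.2f} bits")   # 3.01 dB ~ 0.50 bits

# Bryston's quoted 140 dB unweighted S/N through the same bits formula:
print(f"{(140 - 1.76) / 6.02:.1f} bits")                 # ~23.0 bits
```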
More speculation: they could be measuring differently -- say, not just at 997 Hz (nice and prime).
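Why prime matters, as I understand it: 997 shares no factors with the common sample rates, so the tone's sample pattern doesn't fall into a short repeating cycle the way a round 1 kHz tone does, and many more distinct converter codes get exercised. A quick illustration:

```python
from math import gcd

# Contrast a prime 997 Hz tone with a round 1000 Hz tone at common rates.
for f0 in (997, 1000):
    for fs in (44_100, 48_000, 96_000):
        period = fs // gcd(f0, fs)   # samples before the pattern repeats
        print(f"{f0} Hz at fs={fs}: repeats every {period} samples")
```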

The measurements given by Crystal assume a 1 kΩ, 10 pF test load, and I bet that is not what Bryston's discrete op-amps are presenting (I believe they're in the 20 kΩ to 50 kΩ ballpark). I suppose they could have screwed up the measurements, or the AP analyser could be wrong -- both seem unlikely to me given the reputations of the companies involved.
Back to you.
-- Ian.
(In case you couldn't tell, I like a good debate.)
