OK I have got a couple of minutes so I will hit another one...
This is an error correction strategy called an automatic-repeat-request (ARQ) scheme with majority logic, in this case repeated reading. With this scheme in place, it can effectively clear most hardware misreads caused by vibration, a defective laser head, etc., or marginal pressing errors on the CD.
OK, agreed... I still don't know if I would call this error correction, as it is REREADing repeatedly until it has reached a >50% provably identical read of the bits from the disc... I would call this error suppression, as it still does not insert false data that it 'thinks' or assumes should be there. However, that's simply semantics...
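Just to put some meat on what I mean by 'reread until >50% agree', here is a rough Python sketch of that kind of majority logic... Purely illustrative, and the read_sector_raw function, the sector numbering and the attempt count are my own placeholders, not how EAC actually does it internally:

from collections import Counter

def read_sector_with_majority(read_sector_raw, sector, attempts=16):
    """ARQ with majority logic: re-read the same sector several times and
    only accept a result that a strict majority of the reads agree on,
    bit for bit. Nothing is interpolated or guessed; a sector with no
    majority is simply reported back as unreliable."""
    results = Counter(read_sector_raw(sector) for _ in range(attempts))
    winner, count = results.most_common(1)[0]
    if count > attempts / 2:      # >50% of the reads were bit-identical
        return winner
    return None                   # suspect sector, no consensus reached

The point being that a None here is a flag, not a guess...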
So to check your hardware quality, don't use it, as it masks misreads by a CD-ROM drive, unless you can force it to read only once. But it's good to use while actually copying, to eliminate the effect of random laser pick-up errors.
Now perhaps it's me being dense but I don't get this (sorry, I am honestly not trying to argue here)... First you state that this is a good method of extraction that clears most extraction errors... Then you state it should not be used to mask (I would say eliminate or re-establish) read errors when extracting... Why would you want it to read only once?? I can see a speed argument but that's moot really... Are you assuming something to do with reburning, or using more CDs with this inherent read issue as a medium to store an audio archive??
For music CD playback, I don't think the average CD player has complex algorithms to do good error correction, so the chances are that all correction is dependent on the drive. Open a CD player and you will see there are no computing chips in there able to do so. Therefore, errors will be passed to the DAC and result in distortion.
100% agreement... This is why I rip with the best DAE extraction mechanism I can (EAC) and then I shun the CD as anything more than a backup medium... Why risk these errors by reading again once you have it done right (and have all that hassle of handling CDs)...
According to what you described, it is true that it does not make use of the error-correcting codes embedded in the data; however, as said above, it implements an ARQ scheme on top of that.
In my opinion, after that, the software should further correct errors caused by CD pressing, scratches, etc., making use of the embedded error-correcting codes on the CD before burning it to a new CD. This way, the copied CD will be cleaner, which will benefit the average CD players that do not do sophisticated error correction.
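The codes actually embedded on an audio CD are CIRC (cross-interleaved Reed-Solomon), which is far more capable than anything sketched here, but a toy Hamming(7,4) decoder in Python shows what I mean by correction as opposed to the ARQ rereading above: the bad bit is reconstructed from the redundancy rather than reread or flagged. Illustrative only, and obviously not the CD's real scheme:

def hamming74_encode(d):
    # d: list of 4 data bits -> 7-bit codeword (positions 1..7)
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    # c: 7-bit codeword, possibly with one flipped bit; returns the
    # corrected 4 data bits -- the bad bit is rebuilt from redundancy
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # parity check over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # parity check over positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # parity check over positions 4,5,6,7
    err = s1 + 2 * s2 + 4 * s3       # 1-based position of the bad bit
    if err:
        c[err - 1] ^= 1              # correct it in place
    return [c[2], c[4], c[5], c[6]]  # d1, d2, d3, d4

# a scratch flips one bit; the decoder still recovers the original data
word = hamming74_encode([1, 0, 1, 1])
word[5] ^= 1                         # simulate a single-bit read error
assert hamming74_decode(word) == [1, 0, 1, 1]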
To my mind the ARQ scheme is the most valid means of testing whether the raw data reads repeatably the same... I would be curious to see what happens when you file-compare 2 SPDIF captures, one from a CD player playing a vanilla CDR and one playing a black CDR... I think we are agreeing that logically this is all down to the read errors involved, but personally I would rather eliminate them (or at least know where a sector cannot be reliably defined as 100% repeatable) and be satisfied that the rip was done to the highest accuracy possible...
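For that file-compare, something as dumb as this Python would do... The capture file names are just placeholders, but any byte that differs between the two captures tells you a read, transport or capture error crept in somewhere:

def compare_captures(path_a, path_b, chunk_size=65536):
    # byte-for-byte compare of two raw capture files, reporting the offset
    # of the first difference and the total number of differing bytes
    first_diff = None
    diff_count = 0
    offset = 0
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        while True:
            a = fa.read(chunk_size)
            b = fb.read(chunk_size)
            if not a and not b:
                break
            for i, (x, y) in enumerate(zip(a, b)):
                if x != y:
                    diff_count += 1
                    if first_diff is None:
                        first_diff = offset + i
            if len(a) != len(b):
                print("warning: the captures are different lengths")
                break
            offset += len(a)
    return first_diff, diff_count

# hypothetical capture files from the vanilla CDR and the black CDR
print(compare_captures("vanilla_cdr_capture.pcm", "black_cdr_capture.pcm"))

Of course the two captures would need to be aligned to the same start sample first, otherwise everything reads as a difference...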
I should point out that in the system I use I have a 640 GB RAID5 array hung off of my home network to store ripped CDs and DVDs (as well as be an X10 server / cam monitor / etc) so that I can jukebox them around the home and in my HT... All ripped audio is kept losslessly as APE (Monkey's Audio) files... For me all this business about player read errors is moot if EAC gets the data 100% read-error free... If not, it sure as hell does a better job than any realtime extraction system I know of, as it allows the 80-something reads on any bad sector to establish its data, as mentioned a few posts back...
None of this, though, is even touching why 2 SPDIF streams, which should be providing identical 'transport' functions, can be told apart... Then we have to look into the timing clock and jitter arguments... That is, as they say, a whole other kettle of fish...