To judge quality, an opinion needs to rest on test results, and such results are not publicly available at the moment.
Often I read in forums that "ripper N is better than M". But that is only a feeling.
Real figures might show that M is better than N.
Other people say a ripper is the best because it uses a checksum database.
But once we write this down in formulas, we can see that it is not as simple as it looks at first sight.
I think effort should go into increasing the probability of correct error detection without relying on an external, non-certified database.
When we read the audio data as a raw stream with error flags, we can even try to restore damaged data.
Such algorithms are "Paranoia" (open source) and Audiophile Inventory's "Ripping with deep error control".
As far as I know, dBpoweramp uses an algorithm similar to the one described in my article.
These algorithms share a common base but differ in implementation.
So their abilities need to be checked in practice, with real figures, to draw correct conclusions.
Also, there are different kinds of disc damage that can change test results, I suppose.
In the article I wanted to say that we can't rely on feelings when evaluating CD rippers.
We can't say that ripper #1 has a more "transparent sound" than ripper #2.
Errors in CD ripping are coarse, not subtle.
In my ripper I like the interface: tracks open as ordinary files (cda or aiff), there are no settings for the user, and everything happens in one window.
Though I don't yet provide automation for ripping large numbers of CDs, which some competitors do.
iTunes has access to the best metadata database, Gracenote (paid for commercial software).
XLD is free, but for Mac only (I don't know what error detection algorithm it uses).
CDRtools (cdda2wav + "paranoia") is free, but for Windows only.
I suppose it is impossible to rank such heterogeneous advantages unambiguously.
You need to consider the ripping tool for your task.
For example, some people look for a fast ripper.
But audiophile ripping is not fast, because re-reading and error detection with speed adaptation are used.