Question: if it's a software-controlled volume control with the DAC chip, doesn't that mean it is losing or stripping bits to accomplish level reduction, like all the other software-based controls do? Or is it hardware-based preamp control?
Here are Mithat's comments clarifying the volume control implemented in the DVA Digital Preamplifier.
Let's start with the fact that the DigiPre uses a DAC IC with 32 very linear bits. This translates to over four billion steps and a "digital" dynamic range approaching 200dB. This should be enough to help convince you that with 32-bit converters you can burn off a lot of bits before any potential digital truncation artifacts have even a chance of being audible.
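To put rough numbers on that claim, here's a quick back-of-the-envelope sketch using the standard ideal-quantizer formula (6.02dB per bit plus 1.76dB); this is textbook arithmetic, not anything specific to the DigiPre's internals:

```python
# Back-of-the-envelope figures for an ideal 32-bit converter.
N_BITS = 32

steps = 2 ** N_BITS                      # number of quantization steps
dynamic_range_db = 6.02 * N_BITS + 1.76  # ideal SNR of an N-bit quantizer

print(f"{steps:,} steps")            # 4,294,967,296 -> "over four billion"
print(f"{dynamic_range_db:.1f} dB")  # 194.4 -> "approaching 200dB"
```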
Next, let's talk about the attenuation (i.e., volume control) in the DigiPre. As is the case with the vast majority of DACs with level controls, it's done in the digital domain. In most systems, we find that about 30dB of attenuation is needed. This means that in typical use, about five bits of the input signal will be truncated, in effect leaving us with a 27-bit converter. Considering that the vast majority of music releases aren't offered beyond 24 bits, you should now be feeling OK with things being done this way.
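The dB-to-bits conversion behind "about five bits" is just the roughly 6.02dB (20·log10(2)) that each bit contributes; a quick sketch:

```python
import math

attenuation_db = 30.0            # typical in-system attenuation
db_per_bit = 20 * math.log10(2)  # ~6.02 dB per bit
bits_truncated = attenuation_db / db_per_bit

print(f"{bits_truncated:.2f} bits truncated")  # ~4.98 -> "about five bits"
print(f"effective resolution: {32 - round(bits_truncated)} bits")  # 27
```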
But if you're still feeling uncomfortable about "throwing bits away", possibly because you have some recordings in a 32-bit format, consider that both practice and theory have incontrovertibly shown that a properly dithered digital conversion system acts exactly like an analog one in terms of resolution. Not "kind of" or "mostly", but exactly. It has exactly the same "infinite" resolving capacity as an analog system that has the equivalent wideband noise. This seems counter-intuitive at first, but it's really and truly how digital systems work. The easiest way to add this "analogizing" dither is to mix 2 LSB of noise into the signal before conversion or truncation.
So, assuming proper dithering[1], the only thing you're going to lose in truncating from N to N-M bits is signal-to-noise ratio. All the signal will still be there; it'll just be bathed in 2 LSB more noise.
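You can demonstrate this "signal survives, only noise is added" behavior in a few lines. The sketch below (my own toy model, not the DigiPre's code) truncates a 32-bit signal to 27 bits, with and without ~2 LSB peak-to-peak triangular (TPDF) dither first. The test sine is deliberately smaller than the new quantization step, so an undithered quantizer erases it entirely, while the dithered one keeps it fully intact under a bed of noise:

```python
import math
import random

random.seed(0)

SHIFT = 5         # 32 -> 27 bits: discard 5 LSBs (~30 dB of attenuation)
LSB = 1 << SHIFT  # new step size, in units of the old LSB

def quantize(x, dither):
    """Round x to the coarser 27-bit grid, optionally adding
    ~2 LSB peak-to-peak triangular (TPDF) dither first."""
    if dither:
        x += (random.random() - random.random()) * LSB
    return round(x / LSB) * LSB

# A test sine whose amplitude (10 old-LSBs) is *smaller* than the
# new quantization step (32 old-LSBs):
N, AMP = 48000, 10
sig = [AMP * math.sin(2 * math.pi * 440 * n / 48000) for n in range(N)]

results = {}
for dither in (False, True):
    out = [quantize(s, dither) for s in sig]
    # Correlate the output against the known sine: 1.0 means the signal
    # is fully present (buried in noise), 0.0 means it's gone.
    results[dither] = sum(o * s for o, s in zip(out, sig)) / sum(s * s for s in sig)

print(f"undithered: {results[False]:.2f}")  # 0.00 -- signal erased
print(f"dithered:   {results[True]:.2f}")   # ~1.00 -- signal survives in noise
```

Note the undithered case doesn't just distort the small signal, it deletes it; with dither, the full signal comes through and the only cost is the slightly raised noise floor.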
The next question then is how much noise you need to properly dither at 27 bits. That would be around 150dB below full scale. Relative to a 125dB signal, which is loud enough to cause pain and permanent damage in minutes, noise that's down 150dB from this would be 25dB below the threshold of hearing. To get the noise up to the threshold of hearing, the signal would have to be loud enough to rupture your eardrum. Most people don't listen this loudly, and if they do, it's extremely unlikely they can hear 25dB above, let alone below, the threshold of hearing.
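Laid out explicitly, the SPL arithmetic in that paragraph looks like this (using the 125dB and 150dB figures quoted above, and 0dB SPL as the nominal threshold of hearing):

```python
signal_spl = 125        # painfully loud playback level, dB SPL
noise_re_signal = -150  # noise floor relative to that signal, dB
hearing_threshold = 0   # nominal threshold of hearing, dB SPL

noise_spl = signal_spl + noise_re_signal  # -25 dB SPL
margin = hearing_threshold - noise_spl    # 25 dB below audibility
print(f"noise sits at {noise_spl} dB SPL, "
      f"{margin} dB below the threshold of hearing")
```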
More importantly, I challenge anyone to demonstrate a recording with an SNR that remotely approaches 150dB (excluding computer-generated test files). To the best of my knowledge, they just don't exist. The physics of air and electrons makes this effectively impossible. So, any 32-bit recording you care to listen to will already have more than enough noise in it so that it will fully dither when truncated to 27 or even fewer bits -- without even trying.
The final thing you need to consider is that the alternative to "losing" bits via the truncation described above is to send the converted analog signal through another active stage with some kind of level control either in between or as an integral part of the active stage. I'm still waiting for one of those that's completely transparent. So, whichever route you take, you're losing something. With the digital route, what you potentially lose is something that is so incredibly far below the threshold of audibility that I don't lose even one bit of sleep over it.
[1] There's still a lot of misunderstanding about dither and what it does. Vanderkooy and Lipshitz have published a number of papers in the JAES covering this if you want to take a deeper dive.