When I do an LP to CD transfer I use 24/96 and normalize to 16/44.1.
And now your logic falls apart... You are making the mistake of assuming that the average listening level takes one watt, and that's not always the case.
Let's work from SPL levels for a moment. Let's assume that the average listening level is 80dB (which, with a truly dynamic recording, is pretty loud). Now let's add your 30dB for a maximum level of 110dB (which is damned loud, probably far too loud for a typical domestic situation, but what the hey!).
Now, if a SET fanatic has a loudspeaker with a sensitivity of 100dB, he/she needs an additional 10dB of gain, which equates to 10 watts - easily doable with SET technology.
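To make that arithmetic explicit, here's a quick back-of-the-envelope sketch (my own illustration, not anything from the thread; it assumes the usual 1W/1m sensitivity rating and ignores listening distance and room gain):

```python
def watts_needed(target_spl_db, sensitivity_db_1w_1m):
    """Power needed to reach a target SPL, given a speaker's sensitivity
    in dB SPL for 1 W at 1 m (distance and room gain ignored)."""
    extra_db = target_spl_db - sensitivity_db_1w_1m  # dB of output above the 1 W reference
    return 10 ** (extra_db / 10)                     # every +10 dB is 10x the power

# 80 dB average + 30 dB of peaks = 110 dB into a 100 dB/W/m speaker:
print(watts_needed(110, 100))   # 10.0 W -- SET territory
# The same 110 dB peaks into a more typical 87 dB/W/m speaker:
print(watts_needed(110, 87))    # ~200 W
```

Same peaks, wildly different amplifier requirements, depending entirely on the speaker.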
So, no, SETs wouldn't disappear; in fact, they might become more prevalent, given the extreme costs of 1000W-capable speakers and amps.
The other issue you kind of touched on: given the extreme dynamics, the average user, in an attempt to maintain sanity and domestic tranquility, would have been driven from CDs and would either give up "hi-fi" or turn to a saner medium. Dare I say it, LPs!
... I liked your post, tnargs. Welcome aboard.
Most consumers (and many audiophiles) probably wouldn't enjoy a recording with 30 dB of dynamic range.
Something that is seldom mentioned in discussions of the CD's alleged dynamic range is the increasing distortion that occurs when the recorded signal is described by fewer than 16 bits. A quick glance at the measurements accompanying any review of a DAC in Stereophile will show the terrible amount of distortion at -90dB; even at -60dB you have more than 3% THD. The huge dynamic range claimed for the CD isn't really there.
Scotty
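If anyone wants to play with this, here's a rough simulation of the effect being described - quantize a low-level sine to 16 bits and see how large the error gets relative to the signal. It's only a sketch (helper names and test levels are my own, and it uses undithered quantization on a +/-1 float scale; real converters add dither, which changes the character of the error):

```python
import numpy as np

fs = 44100
f0 = 997                      # the usual digital-audio test tone, not harmonically related to fs
t = np.arange(fs) / fs        # one second of samples

def error_vs_signal_db(level_dbfs, bits=16):
    """Quantize a sine at the given level to 'bits' (no dither) and return
    the RMS quantization error relative to the signal, in dB."""
    x = 10 ** (level_dbfs / 20) * np.sin(2 * np.pi * f0 * t)
    step = 2.0 / (2 ** bits)             # quantizer step size on a +/-1 scale
    q = np.round(x / step) * step        # undithered quantization
    err = q - x
    return 20 * np.log10(np.sqrt(np.mean(err**2)) / np.sqrt(np.mean(x**2)))

for lvl in (0, -60, -90):
    print(f"{lvl:4d} dBFS: error is {error_vs_signal_db(lvl):6.1f} dB below the signal")
```

The lower the recorded level, the fewer bits actually describe the waveform and the worse the error-to-signal ratio gets.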
... My case for the illusory nature of the 16-bit medium's dynamic range still stands. The CD has about the same usable dynamic range as a vinyl record, roughly 60dB.
Guess we disagree. I was an open reel user for 40 years until recently. I've still got a basement full of test gear, including a signal generator and THD and IM distortion analyzers. I'll run some tests this weekend and post results.
Try starting with a best-case, all-16-bits-used THD of 0.002%, which equals a distortion attenuation of about -94dB. Now degrade this distortion attenuation by about 60dB, which roughly equals subtracting 10 bits from the 16 bits available to describe the waveform. Now you have a distortion attenuation of about -30dB, which equals about 3% THD, and there you are. Each bit equals 6dB and a fraction more, which is how you get about 30dB of attenuation instead of 34dB or so. If it were exactly 6dB you would have 2% THD, which is twice the industry-maximum THD commonly accepted as the cut-off point in amplifier measurements. If you go down to the 16-bit dynamic limit you have zero bits left to describe the waveform and 100% distortion of the waveform.
Scotty
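For anyone who wants to follow the arithmetic above, here it is written out (a sketch only; the helper names are mine, and it just applies the ~6.02 dB-per-bit rule of thumb quoted in the post):

```python
def atten_db_to_thd_percent(atten_db):
    """Convert a distortion attenuation in dB (e.g. -94) to % THD."""
    return 100 * 10 ** (atten_db / 20)

def bits_to_db(bits):
    """Dynamic range contributed by a number of bits, ~6.02 dB per bit."""
    return 6.02 * bits

best_case = -94                          # ~0.002% THD with all 16 bits in use
degraded = best_case + bits_to_db(10)    # signal ~60 dB down: 10 bits effectively gone
print(round(degraded, 1), "dB ->", round(atten_db_to_thd_percent(degraded), 2), "% THD")   # about -34 dB -> ~2%
print(-30, "dB ->", round(atten_db_to_thd_percent(-30), 2), "% THD")                       # ~3.2%
```

Both the ~2% (at roughly -34dB) and the ~3% (at -30dB) figures in the post fall out of the same conversion.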
Yep. More bits and a higher sample rate are good for an initial recording, mixing, processing, etc., but normalizing that down to 16/44.1 is just fine!