"first generation cd's all sounded terrible"


srb

Re: "first generation cd's all sounded terrible"
« Reply #80 on: 19 Oct 2012, 09:43 pm »
When I do an LP to CD transfer I use 24/96 and normalize to 16/44.1.

If you record at a multiple of your target, 88.2kHz in this case, you would have less interpolation error when you then convert to 16/44.1.
 
Steve

James Lehman

Re: "first generation cd's all sounded terrible"
« Reply #81 on: 20 Oct 2012, 01:07 am »
If the right math is used and the slope of the line between every sample is calculated, you won't have any interpolation error.

If you just throw away every other sample to get from 88.2 to 44.1 then what's the point of using a higher sample rate?

*Scotty*

Re: "first generation cd's all sounded terrible"
« Reply #82 on: 20 Oct 2012, 01:47 am »
Bandwidth, specifically twice the recordable bandwidth that 44.1 has. This allows the anti-aliasing filters to do their job with much lower phase error introduced into the passband of the audio frequencies of 20Hz to 20kHz.
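
A minimal sketch (Python, with a made-up 30kHz test tone) of why the extra bandwidth matters when you come back down to 44.1: a proper conversion is filter-then-decimate, not throw-away-every-other-sample, so ultrasonic content is knocked down by tens of dB instead of folding back into the audio band.

import numpy as np
from scipy.signal import resample_poly

fs_in = 88200
t = np.arange(fs_in) / fs_in
# a 30 kHz tone: inaudible at 88.2 kHz, but if you simply drop every other sample
# it folds down to an audible 14.1 kHz alias
x = 0.5 * np.sin(2 * np.pi * 30000 * t)

naive = x[::2]                      # decimation with no filtering at all
filtered = resample_poly(x, 1, 2)   # polyphase low-pass, then 2:1 decimation

for name, y in [("naive   ", naive), ("filtered", filtered)]:
    spec = np.abs(np.fft.rfft(y)) / len(y)     # one-second record, so bin index = Hz
    k = int(np.argmax(spec))
    print(f"{name}: strongest component {k} Hz at {20 * np.log10(spec[k] + 1e-12):6.1f} dB FS")
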
Scotty

tnargs

Re: "first generation cd's all sounded terrible"
« Reply #83 on: 20 Oct 2012, 05:09 am »
And now your logic falls apart...  You are making the mistake of assuming that the average listening level takes one watt, and that's not always the case.
Hi BobRex, I made no mistake.
Quote
  Let's work from SPL levels for a moment.  Let's assume that the average listening level is 80dB (which with a truly dynamic recording is pretty loud).  Now let's add your 30dB for a maximum level of 110dB (which is damned loud, probably far too loud for a typical domestic situation, but what the hey!)
Movie soundtracks are designed to be played back at 105dB peaks: it's not super loud if your distortion is low (not LP)
Quote
  Now, if a SET fanatic
That's me!
Quote
has a loudspeaker with a sensitivity of 100dB,
yep, me!
Quote
he/she needs an additional 10dB of gain, which equates to 10 watts - easily doable with SET technology.
You have reduced my dB values to suit your example; I was saying 85-90dB average listening level. A normal home theatre soundtrack is intended for playback at 85dB average. It's loud, but obviously sensible to use as an example. This boosts your SET power need up to 40W for 84dB or 100W for 90dB (a back-of-envelope sketch of this arithmetic follows at the end of this post), both unrealistic with SET technology -- and remember a SET at max rated power is typically wheezing at 10% THD. But there is a bigger problem (literally) - see my next comment.
Quote
  So, no, SETs wouldn't disappear, in fact, they might become more prevalent, given the extreme costs of 1000w speakers and amps.
Well one of my points is that the industry would have come up with much better, and better priced, 1000W systems by now if they had been at it for 25+ years. And a 100dB/W bass bin that delivers 20Hz is unrealistically large. The industry would never have migrated to that solution IMHO.
Quote
 

The other issue, you kind of touched on.  Given the extreme dynamics, the average user, in an attempt to maintain sanity and domestic tranquility, would have been driven from CDs and would either give up "hi-fi" or turn to a saner medium.  Dare I say it, LPs!

Which kind of is saying we get the fidelity we deserve! But such people are not serious audiophiles. Serious audiophiles would have developed their systems to deliver the vast and much more realistic dynamics delivered by CDs with 30dB of headroom -- able to present the dynamics of live music.
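
The back-of-envelope power arithmetic referred to above, as a rough sketch. It assumes a single speaker rated at 1m, ignores room gain and listening distance, and rounds freely, so the exact watt figures will differ a little from the ones quoted in the exchange.

def watts_needed(peak_spl_db, sensitivity_db_1w_1m):
    # simple anechoic model: SPL = sensitivity + 10*log10(watts)
    return 10 ** ((peak_spl_db - sensitivity_db_1w_1m) / 10)

sensitivity = 100               # dB/W/m, the SET-friendly speaker in the example
for average_spl in (80, 85, 90):
    peak = average_spl + 30     # 30 dB of headroom, as argued above
    print(f"{average_spl} dB average + 30 dB peaks = {peak} dB; "
          f"needs roughly {watts_needed(peak, sensitivity):.0f} W")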

tnargs

Re: "first generation cd's all sounded terrible"
« Reply #84 on: 20 Oct 2012, 05:12 am »
.... I  liked your post tnargs  :thumb:  Welcome aboard.

Most consumers (and many audiophiles) probably wouldn't enjoy a recording with 30 dB of dynamic range.

Cheers, thanks for the welcome!

I believe standard movie soundtracks have 20dB of peak headroom. Music, our passion, needs to imitate reality with the dynamics of live music. For our sakes. Just being selfish.

tnargs

Re: "first generation cd's all sounded terrible"
« Reply #85 on: 20 Oct 2012, 05:17 am »
Something that is seldom mentioned in discussions of the CD's alleged dynamic range is the increasing distortion that occurs when the recorded signal is described by fewer than 16 bits. A quick glance at the measurements accompanying any review of a DAC in Stereophile will show the terrible amount of distortion at -90dB; even at -60dB you have more than 3% THD. The huge dynamic range claimed for the CD isn't really there.
Scotty

I think at -60dB a CD is at 0.22%. At any frequency.

Whereas LP is 10 times that at its absolute optimum signal level and frequency.

So, by my figuring, it's no contest, and the dynamic range of CD is indeed huge.

tnargs

Re: "first generation cd's all sounded terrible"
« Reply #86 on: 20 Oct 2012, 05:20 am »
...  My case for the illusory nature of the 16-bit medium's dynamic range still stands. The CD has about the same usable dynamic range as a vinyl record, roughly 60dB.

I think you have the wrong information about CD, see post above, Scotty. There really is no contest.

*Scotty*

Re: "first generation cd's all sounded terrible"
« Reply #87 on: 20 Oct 2012, 05:43 am »
Try starting with a best-case, all-16-bits-used THD of 0.002%, which equals a distortion attenuation of about -94dB.
Now degrade this distortion attenuation by about 60dB, which roughly equals subtracting 10 bits from the 16 bits available to describe the waveform. Now you have a distortion attenuation of about -30dB, which equals about 3% THD, and there you are. Each bit equals 6dB and a fraction more, which is how you get about 30dB of attenuation instead of 34dB or so. If it were exactly 6dB you would have 2% THD, which is twice the industry maximum THD commonly accepted as the cut-off point in amplifier measurements. If you go down to the 16-bit dynamic limit you have zero bits left to describe the waveform and 100% distortion of the waveform.
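
A minimal sketch of the arithmetic being described here: it simply walks the claimed best-case full-scale attenuation down dB-for-dB with signal level. This is the model being asserted (an undithered view of quantization), not a measurement; whether real dithered playback behaves this way is exactly what gets measured later in the thread.

def thd_percent(attenuation_db):
    return 100 * 10 ** (-attenuation_db / 20)

full_scale_attenuation_db = 94.0        # the ~0.002% best case quoted above
for level_dbfs in (0, -20, -40, -60, -90, -96):
    attenuation = max(full_scale_attenuation_db + level_dbfs, 0)
    print(f"{level_dbfs:4d} dBFS -> attenuation {attenuation:5.1f} dB, "
          f"THD ~ {thd_percent(attenuation):.3g}%")
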
Scotty

*Scotty*

Re: "first generation cd's all sounded terrible"
« Reply #88 on: 20 Oct 2012, 06:34 am »
As far as how early CDs sound, say 1980s vintage, when compared to 1990s and 2000 onward, I have noticed a steady improvement in harmonic completeness and three dimensionality of the sound stage. The earliest CDs sound thinner and have a smaller sound stage in all three dimensions. I sometimes wonder if all 16 bits have actually been used in the recording.
If all 16 bits have been consistently used in recording the music, then the improvements I have heard in the 16-bit format over time are due to advancements in recording technology and digital mastering techniques. At this point in time I will be happy just to be able to keep the sound quality available from the 16-bit format. There certainly doesn't seem to be a great demand for even CD-quality sound on the part of the average consumer.
Scotty

Rclark

Re: "first generation cd's all sounded terrible"
« Reply #89 on: 20 Oct 2012, 06:56 am »
30 Years of Perfect Sound Forever

Eagerly anticipated since the digital audio revolution in recording studios of the late 1970s, the world's first Compact Disc player, Sony's CDP-101, was announced on October 1st, 1982.

The Compact Disc was developed in concert by Sony, who handled the DSP, and by Philips, who had experience with optical discs.

Sony and Philips each owned large record companies as well as electronics divisions, so they had everything to gain. Other record companies hoped it would all go away, wanting us to pay money for more of the same old LPs instead of new CD players and having to dual-inventory recordings.

Philips dubbed the Compact Disc "Perfect Sound Forever," and they weren't kidding. My 30-year-old CDs still sound incredible. What's been lost to history, after video replaced music as most people's home entertainment in the late 1980s, is that CDs still offer the best possible sound today, still representing a completely transparent window to the original recording.

CDs as a recording medium are completely uncompressed, unadulterated and bit-for-bit accurate, even if you boil them or drill a hole through them.

Any flaws, as with any medium, are because people rarely record to it well enough to use all the range of which CD is capable. If a CD doesn't sound fantastic, that's because you've got a flawed recording, not a flawed medium. It's no better than whatever sound people choose to put on it. As a medium, the 16-bit 44.1 ksps (kilo-samples per second) CD is capable of more dynamic range than music itself, as I'll explain.

While professional editing, mixing, processing, equalizing and level shifting usually use more data bits for computation (24 bits linear, 32-bit floating point or now 48-bit linear), 16 bits is more than enough for unlimited fidelity as a release format.

The reason we use more bits in production is so we can create and preserve a true 16 bits through the whole process after all the truncation and rounding and nastier stuff that goes on between the microphone and your CD.

16 bits is more than enough, and with popular music today, even 8 bits is more than enough.

How is this?

16 bits have a signal-to-noise ratio of 98 dB (theoretical SNR = (bits x 6.02) + 1.76 dB). That doesn't sound like much compared to 24 bits' theoretical 146 dB, but realize that a library's background noise is about 35 dB SPL. Your house probably isn't any quieter. A full symphony orchestra giving it all it's got (ƒƒƒƒ) peaks at about 104 dB SPL. Let's give the orchestra 105 dB, and 105 dB - 35 dB = only 70 dB of real dynamic range if you brought the orchestra into your home.
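
A quick check of that rule of thumb (a minimal sketch; 6.02 and 1.76 are the standard full-scale-sine quantization constants):

for bits in (8, 16, 20, 24):
    snr = 6.02 * bits + 1.76
    print(f"{bits:2d} bits -> theoretical SNR of about {snr:5.1f} dB")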

Even though some people can hear to 0 dB SPL, we're always hearing background noise if we shut up and listen. It takes a lot of money to build an NC 25 or NC 15 studio, in other words, a recording studio with about a 15 dB or 25 dB SPL background noise. Even in an NC 15 studio, 105 - 15 = 90 dB SPL, well within the range of real 16-bit systems, if you record it well.

Supposing we recorded on the moon in a pressurized tent with no background noise? Well, the self-noise of most recording studio microphones is about 16 dB SPL equivalent input noise, or in other words, microphones aren't any quieter than about 16 dB SPL anyway.

16 bits was chosen because it has more than enough range to hold all music. I know; I was doing 16-bit recording back in 1981 before the CD came out, and my recordings would have their levels carefully set so the loudest peak of the entire concert hit about -3 dB FS. Leaving the recorder running after the audience left and the hall was empty, you could still bring up the playback gain and hear a perfectly silent recording of the air-conditioning noise in the hall. The world just doesn't get quiet or loud enough to need more than 16 bits as a release format, if it's recorded well.

There is no such thing as a real 24-bit audio DAC or ADC. Look at the specs, and you'll never see a 144 dB SNR spec; all audio 24-bit converters do have 24 bits wiggling, but the bottom few bits are just noise. There is plenty of 24-bit and higher DSP, which is good for keeping the 16 bits we need clean, but you're never getting 24 real bits of analog audio in or out of the system. It's a good thing you can't; 140 dB SPL is the threshold of instant deafness, and if you lift the gain enough to hear a real 24-bit noise floor at say 20 dB SPL in a very quiet studio, maximum output would be 20 + 144 = 164 dB SPL, or 4 dB over the threshold of death. Yes, 160 dB SPL kills.
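
Running the same rule of thumb backwards gives the "effective number of bits" implied by a measured SNR. The two SNR figures below are illustrative assumptions, not any particular converter's datasheet numbers.

def effective_bits(snr_db):
    return (snr_db - 1.76) / 6.02

for label, snr_db in [("a very good '24-bit' converter at 120 dB SNR", 120),
                      ("a typical real-world 16-bit chain at 92 dB SNR", 92)]:
    print(f"{label}: about {effective_bits(snr_db):.1f} real bits")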

But wait, there's more. 98 dB is the theoretical SNR. With dither, we can still hear pure undistorted signals down into the noise for at least another 10 or 20 dB. While a typical real-world 16-bit system's SNR might be 92 dB, we can hear tones down to -100 dB FS easily. That's over 100 dB of dynamic range in real 16-bit systems.
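
A minimal sketch of what dither does (TPDF dither of one LSB peak, and a made-up 1 kHz tone at -90 dB FS): without dither the tiny tone quantizes into a crude stepped wave full of harmonics; with dither the harmonics disappear into a benign noise floor.

import numpy as np

fs = 44100
t = np.arange(fs) / fs
x = 10 ** (-90 / 20) * np.sin(2 * np.pi * 1000 * t)        # 1 kHz tone at -90 dB FS

lsb = 1.0 / 32768                                           # one 16-bit step
undithered = np.round(x / lsb) * lsb
tpdf = (np.random.rand(fs) - np.random.rand(fs)) * lsb      # triangular-PDF dither
dithered = np.round((x + tpdf) / lsb) * lsb

for name, y in [("undithered", undithered), ("dithered  ", dithered)]:
    spec = np.abs(np.fft.rfft(y))                           # 1 s record: bin index = Hz
    ratio_db = 20 * np.log10(spec[3000] / spec[1000])       # 3rd harmonic vs. the tone
    print(f"{name}: 3rd harmonic sits {ratio_db:6.1f} dB relative to the 1 kHz tone")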

There's even more than that! By the 1990s, people learned how to "noise shape" the dither, pushing most of it up to 15 kHz and above, where it is far less audible while remaining just as effective as regular dither. These systems are called Super Bit Mapping (SBM) by Sony and UV22 by Apogee; their makers claimed 22-bit effective SNRs from 16-bit systems. They didn't really work quite that well, but they did make our 16-bit system even better than it was. These clever sorts of dither are still used today for 16-bit releases.

That's right: done right, 16 bits is way, way more than enough for any sort of music. Once you've heard it done right, you'll realize any noise you hear out of a CD is due to sloppy recordings (usually sloppy level settings someplace in the chain), not the CD medium itself. GIGO, as you computer guys say.

When the CD came out, it was like something from another planet. No one outside the recording industry had ever heard completely silent undistorted recordings. LPs had not only clicks, pops and scratches, but they also were usually loaded with distortion (we used to tape our new LPs so they wouldn't get worse), they were rarely pressed on-center so the pitch varied as the disc rotated, and warps made our woofers flutter like crazy. LPs were nasty, compared to pure live music. In radio, "cue burn" was the first few seconds of grunge you'd get from back-cueing the same record 100 times to find its start.

In 1982, no one except computer nerds had computers. It wasn't until the late 1980s that hard drives were commonly seen, and then they were only 10 megabytes, an astounding number. By 1985, computers still only used 5-1/4" floppies, which held only 720 kilobytes if you had the HD ones. Microfloppies, the 3.5" kind, were crazy stuff when Apple first used them on a computer in 1987. They were small, tough, and held an amazing 1.44 megabytes. Even until about 1992, only engineers had computers at work.

The CD in 1982? It held an unfathomable 650 megabytes, or as much as 65 hard drives would be able to hold three years in the future! Even in 1985, no one could afford a 10 MB hard drive. I worked in defense in 1985, and we did our calculations on computers with dual 5-1/4" floppies; no hard drive. That's why hard drives are called the C: drive; the A: and B: drives are your two floppies: one for the program, one for your data.

Anyway, CDs were always laser rocket science. It wasn't until about the year 2000 that anyone could afford a CD burner.

Some people forget today that the CD is a 100% bit-accurate medium. It puts the same data on the disc in multiple places and uses various kinds of error correction, error detection and eight-to-fourteen modulation, so no matter what happens, you get everything back exactly as it was recorded. You can even drill a small hole in a CD, and the data will be recalled with 100% accuracy, since the player reconstructs it from data stored elsewhere on the disc.
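
A toy sketch of the interleaving idea behind that robustness (the principle only, not the CD's actual CIRC parameters): data is spread across the disc before the error-correction layers see it, so one physical burst error de-interleaves into many short, widely separated errors that are easy to correct.

DEPTH, WIDTH = 8, 16
data = list(range(DEPTH * WIDTH))

# write row-wise into a matrix, read it out column-wise onto the "disc"
rows = [data[r * WIDTH:(r + 1) * WIDTH] for r in range(DEPTH)]
on_disc = [rows[r][c] for c in range(WIDTH) for r in range(DEPTH)]

damaged = set(on_disc[40:52])            # a 12-symbol burst error (a scratch or hole)

# de-interleave and see where the damage lands in the decoded stream
positions = sorted(data.index(v) for v in damaged)

def longest_run(ps):
    best = run = 1
    for a, b in zip(ps, ps[1:]):
        run = run + 1 if b == a + 1 else 1
        best = max(best, run)
    return best

print("damaged positions after de-interleaving:", positions)
print("longest consecutive run of errors:", longest_run(positions), "(vs. 12 in a row on the disc)")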

Today, there is still nothing better, and nothing even as tough.

The SACD was a marketing ploy around 2002, but its huge problem is that SACD puts out a ton of ultrasonic hash (noise) even when it's working perfectly. SACD player instruction manuals warn not to crank the levels during silence, because this ultrasonic noise might blow your tweeters! CD players haven't needed sharp 20 kHz anti-aliasing filters for decades, but SACD players need them today!

Here's an anecdote about how bad the noise out of a good SACD player is. I was playing around dubbing to cassettes, and something sounded horrible, as if the tape was all twisted and garbled with Dolby, even Dolby B. A little red light went on in the back of my head, I said "No, it couldn't be this bad," and hit the multiplex filter on the cassette deck. That cured it. In all my years as an FM radio station chief engineer, I never found any FM tuner so bad that it didn't filter the 19 kc pilot well enough to need the MPX filter. Never. But welcome the SACD, and lo and behold, its output is laced with so much ultrasonic crap that I needed the MPX filter to get Dolby to track. Horrendous! My iPod is much cleaner (and pretty darn clean, too)!

But what about people today sharing files and pumping them into fancy outboard DACs from their computers? That can work great, but there are a few reasons why a good CD player can be better than a great outboard DAC:

1.) Jitter. A CD player has no measurable jitter. Data is read and corrected from the disc, and the data fed to a first-in, first-out buffer. Data is clocked out of the FIFO into the player's own DAC at the exact rate of the quartz-crystal oscillator of the CD player. The disc's rotational velocity is varied in a closed-loop to feed the FIFO exactly what it needs, all controlled by the player's one low-phase-noise and low-jitter quartz crystal oscillator. The only jitter is the residual of the quartz oscillator, which actually has less phase noise (jitter) than an atomic standard!

When you use an outboard DAC, unless you're a professional with a Word Clock or other separate sync output fed to your DAC, the DAC has to guess at reconstructing the clock from the audio data it's fed via TOSLINK, USB, RCA or AES (those interfaces carry only data, not clock). Noise, added to the natural high-frequency attenuation in any length of cable, adds jitter to the recovered clock, and my own tests have shown a measurable increase in jitter at the analog outputs of outboard DACs versus a one-box CD player. This tiny amount of jitter isn't significant, but seeing how there is a cult of whackos who worry about things that are far less significant, the fact that I can measure jitter picked up by a top-notch DAC at its analog output under very good conditions impresses even me.

2.) Ground Loops. If you use an outboard DAC, use the optical TOSLINK connection. If you don't and you take a digital input from a computer via USB, Firewire, RCA or any other wire, you're now coupling any ground noise from your computer's digital circuits into your audio ground.

As a guy who used to design ADCs, DACs and DSP systems, I can tell you we did everything we could to keep the digital hash out of the analog circuitry. Never, ever connect the two grounds together at more than one point, and that one point will probably be your power outlet at the wall. Don't use USB or similar and connect your computer's ground to your audio system!

3.) Noise. Most computers have fans or hard drives that make audible noise. Most CD players spin silently.

4.) Overload Handling. This is a potentially really nasty one that needs more research. In the beginning, CDs were cut with 0 VU at -20 dB FS, in other words, there was plenty of headroom. The world's first released CD, Billy Joel's 52nd Street, never even hits full scale, and it sounds great.

Once everyone had a CD player in the 1990s, some bonehead got the dumb idea that if he made his CD sound louder than the next guy's, people would like the music better. Dumb idea, yes, but as of today, most popular CDs have so much dynamic compression applied that they sound as bad as radio: one big long 100%-modulation wall of boring. Jazz, classical and a very few acts like Peter Gabriel's latest still use all the dynamic range, but just about everything else today is squashed to death to put everything at 100% loud. Today, most CDs only use the top couple of bits!

Worse, CD mastering keeps getting worse in its attempts to get louder, and many CDs use another radio trick: composite clipping. Yes, the waveform is boosted even more and its peaks are clipped, and since most people won't notice, this helps squeeze another dB or two of level onto the CD.

Today, some albums, when measured with a Tektronix 764, show levels that exceed 0 dB FS! How do they do this? Well, levels are calibrated to read 0 dB FS for a sine wave, but when a proper meter like the 764 properly reconstructs the actual audio waveform digitally, as opposed to simply looking at data-stream values, clipped signals approximate square waves and approach +3 dB FS!
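
A minimal sketch of why a reconstructing meter can read above full scale (the signal below is the classic worst case, not anything from an actual album): every stored sample sits exactly at 0 dB FS, but the band-limited waveform those samples describe peaks about 3 dB higher between them.

import numpy as np
from scipy.signal import resample_poly

fs = 44100
# samples repeating [+1, +1, -1, -1]: an 11.025 kHz sine sampled 45 degrees off its peaks
x = np.tile([1.0, 1.0, -1.0, -1.0], fs // 4)

analog_approx = resample_poly(x, 8, 1)   # 8x oversampling approximates the reconstructed waveform

for name, y in [("peak sample value", x), ("peak of reconstructed waveform", analog_approx)]:
    peak = np.max(np.abs(y))
    print(f"{name}: {peak:.3f} ({20 * np.log10(peak):+.2f} dB FS)")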

This is all fine and dandy played on a CD player, which simply reproduces the music, clipping and all, as recorded.

It can wreak havoc when you start ripping that to AAC for your iPod, or play it on an external DAC, most of which aren't designed with enough headroom to reproduce the crazy transients present in 100%-clipped signals. Most audio DSP norms were created back before producers started putting such nasty signals on music CDs.

As my CD player and outboard DAC tests have shown, weird things happen when playing extreme square wave tests. Outboard DACs for whatever reason often lack the headroom in their DSP for this baloney, and someone needs to do more research to see what happens with real, loud, CDs when attempting to reproduce them over an outboard DAC. Look at the spectrum of a square wave played by a good CD player and that same disc played with a great outboard DAC. You should only see odd harmonics; the even harmonics from the outboard DAC are from clipped transients. (PS: I point to the Benchmark DAC1 HDR simply because it's the world's best outboard DAC; you don't want to see what lesser DACs do to these signals.)
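
A quick illustration of the odd-versus-even harmonic point (a synthetic 1 kHz example, not a capture from any DAC): a symmetric square wave contains essentially only odd harmonics, while clipping that treats the top and bottom of the waveform differently adds even harmonics as well.

import numpy as np

fs, f0 = 44100, 1000
t = np.arange(fs) / fs
square = np.where(np.sin(2 * np.pi * f0 * t) >= 0, 1.0, -1.0)   # symmetric square wave
asym = np.clip(1.3 * np.sin(2 * np.pi * f0 * t), -1.0, 0.8)     # clipped harder on the top

for name, y in [("symmetric square   ", square), ("asymmetric clipping", asym)]:
    spec = np.abs(np.fft.rfft(y)) / (fs / 2)                    # 1 s record: bin index = Hz
    h2 = 20 * np.log10(spec[2 * f0] + 1e-12)
    h3 = 20 * np.log10(spec[3 * f0] + 1e-12)
    print(f"{name}: 2nd harmonic {h2:7.1f} dB FS, 3rd harmonic {h3:7.1f} dB FS")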

I was totally excited when the CD came out: for the first time in my life I could get essentially direct copies of the master tapes just by buying a CD, and those CDs still sound perfect 30 years later. In fact, my old CDs usually sound better than newer ones, which are all squashed to death by today's remasters.

You folks might also be tickled to know that most recordings today are made with vacuum-tube microphones plugged into vacuum-tube preamps before they're digitized and fed into Pro Tools software. Tubes rule in the world of pro sound.

If you don't like what's coming out of your CD player, try a better CD recording. The CD itself is incredible, but few recordings really show you what it can do. Blame the producers who think we're too stupid to turn up the volume on our iPods if they actually used some dynamic range.

Today's moral? Buy more CDs, put them on your iPod and computer if you like, and enjoy them. Get a great DAC if you've got computer stuff to enjoy, but don't waste your time futzing with computer equipment and music software when you can just buy CDs and enjoy the music itself instead of fiddling with stereo gear. God help us that some people waste time fiddling on their computers just to get music; half the reason the general public loves the CD over the LP is simple convenience: never having to align a cartridge, flip an album, clean records or worry about wearing them out.

*Scotty*

Re: "first generation cd's all sounded terrible"
« Reply #90 on: 20 Oct 2012, 07:26 am »
I would re-title this article, Adequate Sound for the Time Being.
The author appears to be completely unaware of the real world limitations of the 16bit format.
I do agree with his final admonishment, buy more CDs.
Every CD sold helps delay the day when music from your favorite artist is only available as a digital download in the MP3 format.
Scotty

mitch stl

Re: "first generation cd's all sounded terrible"
« Reply #91 on: 20 Oct 2012, 05:57 pm »
Guess we disagree. I was an open reel user for 40 years until recently. I've still got a basement full of test gear, including a signal generator and THD and IM distortion analyzers. I'll run some tests this weekend and post results.

Pulled out the test equipment this morning and ran some THD tests on a 1 kHz signal through CD and tape. Since my equipment is on the older side (a Heath IM-58 distortion analyzer, which is more than fine for the tube equipment I've always tinkered with) I decided to run the tests at -40 dB instead of still lower.

Here are the results. The tone from the signal generator registered at under 0.1%. That's the bottom limit of the IM-58.

A 1 kHz signal recorded to CD at -40 dB and played back registered the same. There was no difference on the meter.

The signal recorded to tape at -40 dB and played back showed a distortion level of 1.5% without Dolby and 1.7 to 1.8% with. This distortion level also fluctuated up and down 10% or so due to the moving tape.

When the three sources were played through speakers at normal volume, they sounded pretty much the same except for the slight tape hiss without Dolby. If the volume on this soft tone was turned up, the CD and source sounded identical, but the background noise on the tape became more noticeable.

Now, could the test be more complicated with fancier equipment? Sure. But I've yet to hear a well-recorded CD leave me wishing for a different medium during the soft passages.

In short, there is no appreciable distortion that crops up in a CD, either audible or measured, during soft passages. Analog distortion was worse by a factor of about 15X at  the - 40 dB recording level.


*Scotty*

Re: "first generation cd's all sounded terrible"
« Reply #92 on: 22 Oct 2012, 01:17 am »
Your testing results raise some interesting questions. What would the THD+N have been on the tape deck if the signal had been recorded at the additional VU levels of -30dB, -20dB, -10dB and 0dB? Would we have seen increasing levels of THD as the recording level approached 0dB? If so, my contention that the amount of THD associated with analogue recording decreases as the signal level falls would be supported.
Why you did not see 0.2% THD when you recorded a 0.1% THD signal at -40dB to CD is something of a mystery. A distortion-less signal recorded to CD at -40dB should show about 0.1% THD when measured at playback, plus the residual distortion of the DAC, perhaps 0.002%. A distortion-free -40dB signal recorded at 16 bits still has about 60dB of distortion attenuation, which equals 0.1% THD.
The level of distortion you saw from your reel-to-reel is somewhat disappointing. A commonly claimed distortion level for a cassette deck with Type I tape is 0.4% THD; this degrades to 1.5% THD when Type IV is used. My Sony KA1ESA 3-head cassette deck has this sort of claimed spec, and I could easily hear the difference between Type I tapes and Type II as well as Metal formulations, and greatly preferred Type I tape, which was also cheaper than the other two types. Go figure.
Scotty

*Scotty*

Re: "first generation cd's all sounded terrible"
« Reply #93 on: 22 Oct 2012, 01:47 am »
They say a picture is worth a thousand words, and in this case it certainly is. Here is a graphic of the Burr-Brown PCM1704UK DAC's measurements showing THD+N vs. level as a function of bit depth, with both 16-bit and 24-bit data expressed as a nomograph. It should be noted that the 0dB level corresponds to the lowest distortion possible in a digital recording system and results when all 16 bits or 24 bits are used to describe the waveform. Mathematically speaking, the addition of another 8 bits, from 16 bits to 24 bits, increases the distortion attenuation by 48dB at any signal level. If the DACs were perfect, the lines on the nomograph would be straight lines separated by 48dB, and there would be no difference between the performance of a real-world DAC and the function described by the mathematics governing digital recording systems. Note that the THD+N (dB) and THD+N (%) axes are plotted on a semi-log scale.

See the link to the PCM1704UK technical specifications PDF:
http://www.ti.com/lit/gpn/pcm1704
Hopefully this information addresses the issues I've raised in my earlier comments and explains the limitations of, and differences between, 16-bit and 24-bit recordings.
Scotty

mitch stl

Re: "first generation cd's all sounded terrible"
« Reply #94 on: 22 Oct 2012, 03:08 am »
Scotty, sorry my results disappointed you. I've already put the gear away and returned everything to its normal location, so I'm not really interested in repeating the exercise at -30, -20 dB and so on. However, my experience is that the distortion value of the tape would drop as the signal increases in volume relative to the background noise. That noise is the primary component of distortion, since it represents frequencies other than the signal. That's how distortion meters work: they have a deep notch filter at the test frequency and measure all the remaining frequencies present.
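
For anyone curious, here is a minimal sketch of that notch-and-measure idea on a synthetic signal. The added noise level is made up, chosen to land near the tape figures above; it is not a model of any particular deck or of the actual test.

import numpy as np

fs, f0 = 44100, 1000
t = np.arange(fs) / fs
tone = 10 ** (-40 / 20) * np.sin(2 * np.pi * f0 * t)     # the -40 dB test tone
playback = tone + 1e-4 * np.random.randn(fs)              # pretend hiss added by the medium

power = np.abs(np.fft.rfft(playback)) ** 2                 # 1 s record: bin index = Hz
notch = list(range(f0 - 2, f0 + 3))                        # "notch out" the fundamental
fundamental = power[notch].sum()
residual = power[1:].sum() - fundamental                   # everything else, excluding DC

print(f"simulated THD+N: {100 * np.sqrt(residual / fundamental):.2f}%")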

A drop in distortion as the signal increases would be the case until you hit the saturation point of the medium when the distortion level would rise rapidly.

As for your contention that the CD tone at -40 dB should have been double the 0.1% distortion level of the tone alone, you're forgetting that the 0.1% distortion of the original signal is the lowest reading possible from my distortion meter. I don't have a spec sheet for the signal generator, but its distortion level is likely a tenth or even a hundredth of that figure. The CD version could increase it dramatically and still be below the distortion meter's threshold.

While I'm no expert on digital audio processing, I don't buy your explanation that distortion levels in CD signals at levels below 0 dB are any different from what is also true for analog recording media.

If you look at the chart you just added, it is pretty evident noise is the primary component of distortion as the signal level drops. The -40 dB distortion for 16 bits is 0.1% - the figure I got in my test (and also the lower limit of my meter). The same thing is going to be true for analog playback media, but worse, since tapes and LPs both have more background noise. The high distortion figure at -95 dB for digital would move toward -70 to -60 dB, making the curve steeper.

If you hold a straight-edge to the graph and chart from -70 dB on the horizontal to -70 dB on the right-hand vertical axis for analog, lo and behold, at a -40 dB signal level one would expect about 1% to 2% distortion. That's almost an exact match for my practical results.

Thanks for the confirmation!



medium jim

Re: "first generation cd's all sounded terrible"
« Reply #95 on: 22 Oct 2012, 03:42 am »
16/44.1 is more than enough to reproduce or produce a CD that doesn't compromise the original recording, be it analog or digital.

Jim

James Lehman

Re: "first generation cd's all sounded terrible"
« Reply #96 on: 22 Oct 2012, 05:15 am »
Yep.

More bits and a higher sample rate are good for an initial recording, mixing, processing, etc... but normalizing that down to 16/44.1 is just fine! :)

tnargs

Re: "first generation cd's all sounded terrible"
« Reply #97 on: 22 Oct 2012, 05:24 am »
Try starting with a best-case, all-16-bits-used THD of 0.002%, which equals a distortion attenuation of about -94dB.
Now degrade this distortion attenuation by about 60dB, which roughly equals subtracting 10 bits from the 16 bits available to describe the waveform. Now you have a distortion attenuation of about -30dB, which equals about 3% THD, and there you are. Each bit equals 6dB and a fraction more, which is how you get about 30dB of attenuation instead of 34dB or so. If it were exactly 6dB you would have 2% THD, which is twice the industry maximum THD commonly accepted as the cut-off point in amplifier measurements. If you go down to the 16-bit dynamic limit you have zero bits left to describe the waveform and 100% distortion of the waveform.
Scotty

Well, let's measure it!   :D

http://www.hi-fiworld.co.uk/index.php/cd-dvd-blu-ray/62-cd-reviews/189-musical-fidelity-a1-cd.html?showall=1

0.21% at -60dB, complete with a snapshot of the analyser's calculation at -60dB, and Noel Keywood says "not class leading figures but very good all the same".

I see similar figures all the time. I think it is fair enough to accept 0.22% at -60dB as typical.
« Last Edit: 22 Oct 2012, 09:25 pm by tnargs »

medium jim

Re: "first generation cd's all sounded terrible"
« Reply #98 on: 22 Oct 2012, 05:25 am »
Yep.

More bits and a higher sample rate are good for an initial recording, mixing, processing, etc... but normalizing that down to 16/44.1 is just fine! :)

+1

The mastering is what is critical!

Jim

James Lehman

Re: "first generation cd's all sounded terrible"
« Reply #99 on: 22 Oct 2012, 06:37 am »
If you look into the inner workings of some of the audio processing apps like Audacity and many others, the processing is done in 32-bit float and can be exported as 16-bit or whatever.
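
A small sketch of why that works (toy gain staging only; real exports would also add dither, as discussed earlier in the thread): 32-bit float can swing well past "full scale" during processing without damage, as long as the level comes back under 1.0 before the final conversion to 16-bit integers.

import numpy as np

fs = 44100
x = np.sin(2 * np.pi * 440 * np.arange(fs) / fs).astype(np.float32)

boosted = x * np.float32(4.0)          # +12 dB inside the editor: no clipping in float
restored = boosted * np.float32(0.2)   # a later gain stage brings it back under full scale

pcm16 = np.int16(np.round(np.clip(restored, -1.0, 1.0) * 32767))
print("peak while processing (float):", float(np.max(np.abs(boosted))))
print("peak in the exported 16-bit data:", int(np.max(np.abs(pcm16))), "of 32767")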