+24 NO MORE


Roger A. Modjeski

+24 NO MORE
« on: 17 Nov 2012, 06:13 pm »
In building a mic preamp for professional use and serious home studios I have had the opportunity to speak with several recording engineers and to do a fair amount of reading on the internet. It is clear to me that there is a great deal of confusion about the maximum level a mic preamp or line preamp must attain to be respectable. My current research leads me to believe that recording engineers still want +24 dBu levels, which were desirable in the days of analog tape but which make no sense in the current studio, where the tape recorder has been replaced by digital recording via an A/D converter and computer. It appears that commercial A/D converters achieve full scale at +4 dBu or -10 dBu depending on settings. Given this, there is no need to build to +24 dBu, hence the title of this post and of an article I plan to write when I gather all the facts.

One more point that intrigues me for my article is the historical reason for establishing the practice of +24. Every recording engineer I speak to quotes this but I have not found one who can explain why. I believe I am close to an explanation that will lead us back before the year 1900 and the telegraph/telephone industry. I would be grateful for any information or links to information that considers this topic.

As a teacher of electronics I feel that understanding the historical context surrounding the establishment of certain practices (in this case the choice of 600 ohm balanced lines, the power/voltage level for 0 dBm, which is 0.775 volts = 1 milliwatt into 600 ohms, and the studio practice of +24 dB levels) will be very helpful in understanding why we needed +24 levels and why we no longer need them. We often have one foot in the past and one in the present as we travel the road of technology.

p.s. Because the differences between dBv, dBu and dBm are so small compared to +24, I will use them in their usual contexts. In brief, dBv is referenced to 1 volt and preferred by electronics engineers because we like things that start at one. :) dBm is referenced to one milliwatt into 600 ohms, which is what the telephone company established in 1928; that works out to 0.775 volts. dBu is used by studios where the 600 ohm line is typically unloaded, the u standing for "unloaded," the voltage being the same 0.775 volts but into a high impedance load of 10k ohms or greater. When loading is not an issue, dBm and dBu are exactly the same voltage. The dBv reference is about 2 dB higher, but dBv is rarely used in the recording world.
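These reference levels are easy to check numerically. A small sketch (Python, using the reference voltages defined above):

```python
import math

# Reference voltages for the three common dB scales, as defined in the post:
#   dBu: 0.775 V (1 mW into 600 ohms, line unloaded)
#   dBm: 0.775 V (1 mW into 600 ohms, line loaded)
#   dBv: 1.0 V
REF = {"dBu": 0.775, "dBm": 0.775, "dBv": 1.0}

def db_to_volts(level_db, scale="dBu"):
    """Convert a level in dBu/dBm/dBv to an RMS voltage."""
    return REF[scale] * 10 ** (level_db / 20)

def volts_to_db(volts, scale="dBu"):
    """Convert an RMS voltage to a level in dBu/dBm/dBv."""
    return 20 * math.log10(volts / REF[scale])

print(round(db_to_volts(0, "dBu"), 3))    # 0.775
print(round(db_to_volts(24, "dBu"), 2))   # 12.28 -- +24 dBu is about 12.3 V
print(round(volts_to_db(1.0, "dBu"), 2))  # 2.21  -- 1 V is about +2.2 dBu
```

The last line shows the roughly 2 dB offset between the dBv and dBu scales mentioned above.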

mkpratton

  • Newbie
  • Posts: 1
Re: +24 NO MORE
« Reply #1 on: 20 Nov 2012, 06:26 pm »
The +24 is very old school, sometimes used as the spec needed to drive a tube-type broadcast transmitter directly. The higher level helped very much to overcome all the background hum and radio interference from the big AM transmitter in the same room.

I think you’re on the correct path: -10 to +4 would be fine for a preamp output to most mixer or sound card inputs you’re going to find nowadays.

A good preamp makes all the difference in studio recordings. I use a very large BMX3 mixer just for a mic preamp in my studio for KGIG radio and would be happy to test your product anytime.
 :D

Brad@valleymedia.org
( using Milton's account )

evsentry3

  • Jr. Member
  • Posts: 19
Re: +24 NO MORE
« Reply #2 on: 23 Nov 2012, 12:50 am »
The way I would answer the question of why +24 is like this....

The damped ballistics of the VU meter meant it was commonly accepted that you needed another 10 dB for peak level over the VU's 0. So when referenced to +4, that meant you needed +14 already. Plus, you needed to accommodate sudden loud passages before a hand could grab the knob to turn it down. So the "extra" 10 was to cover the normal use of keeping the needle just barely below the displayed +3, plus the loud and unexpected. No self-respecting jock ever lets peaks be less than the indicated zero (keep it in the red!), and to hear the slap of the needle against the peg repeatedly was no cause for concern... nor to break the phone connection with the latest female caller!

Now throw in that it was actually common to use +8 (referenced to 0 dB) rather than +4 as the operating level, and you're starting to see why some equipment was built to pass +27 before clipping. Not sure why I remember seeing it as +27 and not +28?
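EV3's arithmetic can be sketched in a couple of lines (Python; the 10 dB allowances are the rough figures he describes, not measured values):

```python
def required_ceiling(operating_dbu, vu_to_peak_db=10, safety_db=10):
    # VU meters average, so true peaks run roughly 10 dB above the needle;
    # add another margin for the loud passage nobody grabbed in time.
    return operating_dbu + vu_to_peak_db + safety_db

print(required_ceiling(4))  # 24 -- the traditional +24 ceiling
print(required_ceiling(8))  # 28 -- close to the +27 some gear was built to pass
```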

Can't tell you how many times I tried to explain to the blank-faced telco man why a 0 dB broadcast loop really needed to pass +10 before clipping or it wasn't good to me as a 0 dB link!

Today, with digitally sourced material, the average VU-to-peak spread is considered to be something like +16 to +18 dB over reference. Not that a mic pre needs to consider that part.

A link that I ran across a couple of days ago might be of interest to you if you haven't seen it. While it was written a long time ago for a different purpose, it speaks to mic pre output levels....

http://www.dwfearn.com/tubes_vs_transistors.html

EV3


/mp

  • Jr. Member
  • Posts: 240
Re: +24 NO MORE
« Reply #3 on: 23 Nov 2012, 01:36 am »
EV3's explanation sounds like another typical case of, "Mine goes to 11."

Happy Thanksgiving,
/mp

kevin360

  • Full Member
  • Posts: 758
  • án sǫngr ek svelta
Re: +24 NO MORE
« Reply #4 on: 23 Nov 2012, 02:36 am »
I feel that understanding the historical context surrounding the establishment of certain practices...

...is absolutely vital for the judicious management of standards and practices. Give that dog a bone! :D

This is yet another of your threads that I'll follow with great curiosity.

Thanks.

airhead

Re: +24 NO MORE
« Reply #5 on: 28 Nov 2012, 01:29 am »
This is all way over my head, but I thought I would ask Jim Anderson, whom I regard as one of the greatest recording engineers of recent times. (He did the best of Patricia Barber's recordings.)
He didn't answer directly, but passed the question to a friend, who answered as follows, in favor of the +24 guidelines (I think!). I post with their permission.

Jim;
Regarding the concerns of Arthur Ogus about Mr. Roger Modjeski's question that was posted on AudioCircle (http://www.audiocircle.com/index.php?topic=111645.0), here are my thoughts. You may forward these comments to anyone. I'm going into greater detail than you need, but someone else may need it or appreciate it (or disagree with it):
From Jim Anderson:
I always thought the +24 was about headroom and was very necessary and frankly thought +32 was preferred.
As Mr. Modjeski explains, there are specific reference levels such as "0 dBu" (0.775 volts). "Headroom" must be expressed in relation to a specific reference level, otherwise it is meaningless (Les McCann: "Tryin' to make it real — compared to what?").
What is headroom? One way to describe it is to specify a signal level which becomes the reference level, then determine how much higher the signal level can get before the signal becomes unacceptably distorted. The difference between the two levels is the headroom.
To put "headroom" into some context: For many decades there has been a traditional "operating level" of "+4 dBu" or "+4 dBm" (same voltage level but different loads being specified). This is 4 dB above the "0 dBu" or "0 dBm" reference level (0.775 volts). I am not a historian, and cannot speak for the folks at Bell Labs or anywhere else. But they likely chose a standard operating level that was high enough to be well above the residual broadband noise of the typical circuitry of the day, yet low enough that there was sufficient room above that level to handle higher signal levels caused by the dynamics of program material and unexpected peaks without excessive distortion. Enter the engineer, with the knowledge and experience to adjust levels during a performance to keep them in the sweet spot. The engineer's job was somewhat challenging because the mechanical VU meters of the day were "averaging" devices, incapable of responding quickly enough to indicate the true levels of transients.
I have always calibrated the meters on my M-1 mic preamp so that the "0VU" point on the LED meter scale (the first 15 LEDs illuminated) represents an output level of +4 dBu in either the PEAK mode or the VU mode. The meter is indicating the output level of the preamp. This is how the old-fashioned mechanical VU meters were typically calibrated (0VU = +4 dBm).

I specify the maximum output level of the M-1 as +24 dBu (it is actually capable of a maximum output level of +25.8 dBu under normal circumstances, but I give it a conservative rating of +24 dBu), so there is 20 dB of "headroom" beyond the standard operating level of +4 dBu. There would also be 24 dB of headroom beyond the reference level of 0 dBu. The "PK" LED is normally calibrated to illuminate when the output level reaches +22 dBu, giving you 2 dB of warning before you reach the maximum output level of the M-1 (+24 dBu). Therefore there is 2 dB of headroom between the point at which the "PK" LED illuminates (+22 dBu) and the maximum output level (+24 dBu).

If you use the maximum output level specification of +25.8 dBu, you have 3.8 dB of headroom between the point at which the "PK" LED illuminates (+22 dBu) and the actual clipping point of the signal (+25.8 dBu). Similarly, you have 25.8 dB of headroom beyond the 0 dBu reference level, and 21.8 dB of headroom beyond the +4 dBu operating level. So there are several definitions of "headroom" given here, each one properly defined by specifying a reference level to which the higher level is compared.
Quoting Mr. Modjeski:
My current research leads me to believe that recording engineers still want +24 dBu levels, which were desirable in the days of analog tape but which make no sense in the current studio, where the tape recorder has been replaced by digital recording via an A/D converter and computer. It appears that commercial A/D converters achieve full scale at +4 dBu or -10 dBu depending on settings. Given this, there is no need to build to +24 dBu, hence the title of this post and of an article I plan to write when I gather all the facts.
I don't make A/D converters, so I can't comment with great experience regarding their optimum operating levels. I'm sure that there are many different topologies and approaches. Some A/D converters can accept signal levels as high as +24 dBu at their inputs without causing distortion during the A/D process. The A/D converter may even provide a higher S/N ratio when receiving a higher input level than a lower input level. A well-designed A/D converter should be able to deal with a +24 dBu input level without distorting, though it may require the ENGINEER to make the appropriate adjustment to the internal gain of the A/D analog circuitry. Each situation will be different, and each engineer or studio will have to make their individual choices on optimum operating levels. I would suspect that an A/D converter that reaches full-scale with a -10 dBu (-10 dBv?) input level would have a less-than-optimum S/N ratio, simply because a signal level of -10 dBu is relatively low, and the noise floor of the analog circuitry cannot magically be lowered to compensate.
There will be situations where a particular combination of performer, microphone and mic preamp will result in output levels from the preamp that are well above +4 dBu, perhaps approaching +24 dBu, even when the gain controls of the preamp are set to the minimum gain setting. Yet the output level will not exceed the traditional "maximum" level of +24 dBu. You can't tell the artist to sing or play more softly or to back away from the microphone because you can't exceed the maximum input level of +4 dBu for a particular A/D converter. Your A/D converter must be able to deal with these real-world situations of high input levels.
Many years ago I had a customer that had an A/D converter (that shall remain nameless) that had power supplies of +/-5VDC for the analog input stage. Input levels beyond about +8 dBu would cause clipping. Even worse, there was a volume control AFTER the analog input circuitry, and some type of meter AFTER the volume control. When the output of the M-1 exceeded +8 dBu, the input stage of the A/D converter started clipping. But the volume control of the A/D was turned down so that the meter of the A/D said that the level was OK. Incredibly dumb design. Perhaps the A/D was designed for people to convert their vinyl record collection to digital. I don't know.
This particular A/D converter was capable of handling input levels at least a few dB greater than the +4 dBu level referred to by Mr. Modjeski. But it was woefully inadequate in the real world. Each engineer will have to make his or her own decision on how much headroom is enough for their situation. But, all else being equal, more headroom is better. Sooner or later, it will save the day. It costs more to provide more headroom. Power supplies must provide higher voltages. Transistors must be rated for higher voltages. Capacitors must have higher voltage ratings and they will be physically larger as a result. Power transformers will have to be larger, etc. The "-10" reference level was likely the result of attempts to lower the costs of audio equipment while still providing performance that was "good enough" for the typical homeowner or beginner or whatever. But when you make your living making recordings where reliability is essential, and you need to do everything that you can to avoid clipping, the additional cost of headroom is worth it. The tools that we build should make the engineer's job easier, not harder. Signal levels that are far above +4 dBu, even approaching +24dBu, are real, common, and they are not going away.
Thank you.
John Hardy
The John Hardy Company
www.johnhardyco.com
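Hardy's larger point, that "headroom" only means something relative to a stated reference, fits in a single line of arithmetic. A sketch (Python, using the M-1 figures from the letter above):

```python
def headroom_db(clip_dbu, reference_dbu):
    """Headroom is simply the clip point minus whatever reference you pick."""
    return clip_dbu - reference_dbu

M1_CLIP = 25.8  # actual clipping point Hardy cites for the M-1
PK_LED = 22.0   # level at which the "PK" LED illuminates

print(round(headroom_db(M1_CLIP, PK_LED), 1))  # 3.8  -- past the PK warning
print(round(headroom_db(M1_CLIP, 4.0), 1))     # 21.8 -- past +4 dBu operating level
print(round(headroom_db(M1_CLIP, 0.0), 1))     # 25.8 -- past the 0 dBu reference
```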

Roger A. Modjeski

Re: +24 NO MORE
« Reply #6 on: 2 Dec 2012, 01:53 am »

Now throw in that it was actually common to use +8 (referenced to 0 dB) rather than +4 as the operating level, and you're starting to see why some equipment was built to pass +27 before clipping. Not sure why I remember seeing it as +27 and not +28?

A link that I ran across a couple of days ago might be of interest to you if you haven't seen it. While it was written a long time ago for a different purpose, it speaks to mic pre output levels....

http://www.dwfearn.com/tubes_vs_transistors.html

EV3

In reading some AES Journals from the 1960s, I see references to putting +8 dBm on long-distance lines (studio-to-member-station feeds, I presume).

I read the D.W. Fearn article. The simple solution would be to make mic preamps with lower gain; lowering the gain is much simpler than raising the output level. Perhaps recording engineers solved the high-input-level problem with higher-output preamps run with their output levels turned down. It makes much more sense to reduce the input signal and keep the output level full up. Of course that requires a quiet preamp. Perhaps preamp designers were better at raising level than they were at reducing noise.

This quote from Fearn I find interesting:

" While the latest console preamplifiers have less noise, less distortion, and more knobs than ever before, they are not designed to handle this kind of input level. In most commercially available preamplifiers, head room runs on the order of +20 dBm, and gain is commonly set at 40 dB. With these basic parameters it is clear from the data shown in Tables I and II that severe overloads can occur on peaks from almost all instruments."

The table above this article states that high-output mics like the U 87 will put out 0.775 volts 6 inches from a bass drum head. Putting 40 dB of gain on that is just ridiculous: you are asking the preamp to put out 77.5 volts. Either pad down the mic or reduce the gain!! :roll:
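The arithmetic here is easy to check: a voltage gain in dB multiplies the voltage by 10^(dB/20), so 40 dB is a factor of 100. A quick sketch (Python):

```python
def apply_gain(volts_in, gain_db):
    """Output voltage after a given number of dB of voltage gain."""
    return volts_in * 10 ** (gain_db / 20)

# A hot mic signal of 0.775 V (0 dBu) through 40 dB of gain:
print(round(apply_gain(0.775, 40), 1))       # 77.5 -- far past any line stage

# The same signal through a 20 dB pad (divide by 10) and then 40 dB of gain:
print(round(apply_gain(0.775 / 10, 40), 2))  # 7.75 -- about +20 dBu, survivable
```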

My preamp will have a wide range of gain settings along with input pads and transformer taps so that it can handle large signals. Thanks for the article.
« Last Edit: 4 Dec 2012, 04:11 am by Roger A. Modjeski »

Roger A. Modjeski

Re: +24 NO MORE
« Reply #7 on: 2 Dec 2012, 02:36 am »
This is all way over my head, but I thought I would ask Jim Anderson, whom I regard as one of the greatest recording engineers of recent times. (He did the best of Patricia Barber's recordings.)
He didn't answer directly, but passed the question to a friend, who answered as follows, in favor of the +24 guidelines (I think!). I post with their permission.

[John Hardy's reply, quoted in full in Reply #5 above]

Arthur,

Thanks for the research. I have read over Mr. Hardy's website information and I find some of what he says rather curious. He appears to want to keep the +24 (and above) standard going, perhaps because he is very proud that his equipment can do it. However, I am interested in making equipment based on reason rather than custom, and I am delighted that professional equipment such as the Digidesign hardware and Pro Tools software from AVID has adopted the +4 dBu standard: http://www.avid.com/US/resources/digi-orientation. I understand from many in the field that this is top-of-the-line equipment.

It may be a little unclear where I am being quoted and where the responder is being quoted. However, the quote below I do not agree with. I believe I am quoting Mr. Hardy.

"I don't make A/D converters, so I can't comment with great experience regarding their optimum operating levels. I'm sure that there are many different topologies and approaches. Some A/D converters can accept signal levels as high as +24 dBu at their inputs without causing distortion during the A/D process. The A/D converter may even provide a higher S/N ratio when receiving a higher input level than a lower input level. A well-designed A/D converter should be able to deal with a +24 dBu input level without distorting, though it may require the ENGINEER to make the appropriate adjustment to the internal gain of the A/D analog circuitry."

I don't know which A/D converters accept +24 dBu, but the only reason I can imagine one would is to make recording engineers with the big +24 dBu stick happy. The ENGINEER can simply put a 20 dB attenuator in front of the +4 dB input, thus allowing the operator to continue his practice of pegging meters and getting the distortion he is accustomed to from his preamps and other boxes. If this is part of his "magic" then he must continue on that road. I am considering putting some "distortion" device in the preamp but would rather not, though I hear that modern recording engineers are looking for ways to distort their preamps.

I looked at the data sheets of some popular converters. When you get to the actual converter chip, the voltage levels are quite specific and down around a few volts, not 12 volts (+24 dBu). Therefore, when the signal reaches the converter chip it has to be at that level no matter what level it came in at. If it comes in at +24 there will have to be 20 dB of attenuation done by resistors, or it will overload the converter chip. This being the case, it matters little what level the signal comes in at, be it +24, +4 or -10. If it comes in at +4, virtually no extra gain is needed. If it comes in at -10, a gain of about 5x (14 dB) is needed. If the guy designing the box can't make a low-noise 14 dB line amp, he should quit his job. If the studio designer needs +24 dBu to get over the noise in his studio setup, perhaps he should work on his studio a bit more.
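This level-matching arithmetic can be sketched in a few lines (Python; the 2 V full-scale figure is hypothetical, standing in for the "few volts" typical of converter chips):

```python
import math

def gain_needed_db(source_dbu, chip_fullscale_volts):
    """dB of gain (negative means attenuation) required to bring a source
    level in dBu up or down to a converter chip's full-scale input voltage."""
    source_volts = 0.775 * 10 ** (source_dbu / 20)
    return 20 * math.log10(chip_fullscale_volts / source_volts)

FULL_SCALE = 2.0  # hypothetical chip full-scale input, in volts

print(round(gain_needed_db(24, FULL_SCALE), 1))   # -15.8 -- must pad way down
print(round(gain_needed_db(4, FULL_SCALE), 1))    # 4.2   -- nearly unity gain
print(round(gain_needed_db(-10, FULL_SCALE), 1))  # 18.2  -- modest clean gain
```

Whatever the actual chip voltage, the pattern holds: a +24 dBu input always needs resistive attenuation, while +4 or -10 inputs need little or modest gain.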