Warm-up is the process by which the device as a whole reaches its average operating temperature. A typical warm-up time is around an hour: it takes time for heat to spread from the components that generate it to the ones that do not. When a component heats up, the way it conducts electricity changes, and those changes disappear once the device cools back down. Nearly all electronic components change their properties as they heat up, so there can be audible changes after an amp, preamp, etc. warms up.
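To put a rough scale on that claim, here is a minimal sketch of how a temperature coefficient translates into component drift. The tempco figure and the temperature rise are illustrative assumptions (a metal-film resistor around ±100 ppm/°C is a common datasheet value), not numbers from this thread:

```python
def drifted_value(nominal, tempco_ppm_per_c, delta_t_c):
    """Return the component value after a temperature change of delta_t_c (degrees C)."""
    return nominal * (1 + tempco_ppm_per_c * 1e-6 * delta_t_c)

# Hypothetical 10 kOhm metal-film resistor, +100 ppm/degC, warming up by 30 degC:
r_cold = 10_000.0
r_warm = drifted_value(r_cold, 100, 30)
print(r_warm)  # ~10030 Ohm, i.e. about a 0.3% shift
```

Whether a shift of that size is audible is exactly what the rest of the thread argues about; the arithmetic only shows that the drift itself is real but small.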
Ethan, you are just throwing numbers around with the change in distortion vs. the operating temperature of an SS amp; the figures you cited represent a factor-of-two change, from 0.002% to 0.001%.
Let's suppose the SS amp starts at 0.1% THD and falls to 0.05%. I would be willing to bet this would be audible.
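The arithmetic being argued over here can be made explicit. This is just a sketch of the percent-change and decibel calculations on the two pairs of figures quoted in the thread, not a claim about any particular amplifier:

```python
import math

def percent_change(start, end):
    """Percent change of `end` relative to the starting value."""
    return (end - start) / start * 100

# The two THD pairs from the thread: distortion falling as the amp warms up.
print(percent_change(0.002, 0.001))  # -50.0: a halving relative to the start
print(percent_change(0.1, 0.05))     # -50.0: the same relative change

# In decibel terms, a halving of the distortion residue is about 6 dB:
print(20 * math.log10(0.001 / 0.002))  # ~ -6.02 dB
```

So both examples describe the same relative change; what differs is the absolute level, which is what the audibility argument actually hinges on.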
At Crown our published specifications are guaranteed for three years. Further, because our "in-house" specs are more stringent than our published specs, every Crown amplifier will exceed its published specs.
In addition to this simplistic view of distortion, the amplifier's IM distortion behavior when it is below its normal operating temperature has not been considered, nor has its behavior with regard to transient signals been examined.
SS amplifiers are frequently not entirely stable before reaching their normal operating temperature.
the fact remains that the main reason some people perceive a change in sound is due to their own changing perception.
We can hypothesize all day long about what changes occur inside an amplifier as its temperature rises. But if those changes are never actually audible, it's just an exercise in mental masturbation.
I have had electrolytic caps, used as coupling caps to block the DC on the output of a single-ended class A buffer, take a long time to break in and fully form. The voltage biasing them was 2 volts and the AC signal was less than that. Four hundred hours is about two weeks, and I am certain it took at least that long in my case for the cap to stop changing. I would say the time for break-in to occur depends on the cap's construction and the applied voltage. I have no problem accepting that in some cases people may be able to hear changes in a cap's impedance curve after voltage has been applied for some period of time. Scotty
I agree it sounds like a pain in the butt. I tested three electrolytic caps: a Black Gate, a Rubycon ZL, and a Panasonic FM. There was a clear progression towards a cleaner window, with the FM cap giving the clearest view of the performance. The differences in sound between the caps were not subtle. Scotty
Interesting. Subtle or not, there are a number of flaws with your methodology. A subjective perceptual test in which the size of the subject pool is one is not statistically valid, and the test itself lacks the proper controls, so your test really doesn't prove anything meaningful. Put a control group in, increase the size of the subject pool, stick to a scientific protocol, obtain the same results, and then you may be onto something. This is where these discussions always wind up, with neither side budging. I mean c'mon folks, it isn't like this is the first time this topic has been brought up in the past 20 or so years.

In response to Ethan: the reason no one has been able to point to scientific research that follows the proper protocols with the required controls is that there isn't any. I don't know why that's the case. Perhaps the questions that are of considerable interest to audiophools are not substantial enough to merit the attention of independent researchers. Or perhaps the answers to these questions really don't benefit society at large, and with limited resources there are more important questions to be answered. And perhaps it is because audio manufacturers are reluctant to help fund independent studies: while they could win, they stand to lose, and lose huge.

I don't know whether audible break-in does or does not occur. I believe that in some devices, such as speakers and phono cartridges, it does, and in other devices, such as amplifiers, it does not. Earlier this year I spent several thousand dollars on a new speaker system, and initially I was very unhappy with its sound. I thought the bass response was too boomy and the highs were a little too bright. I did not have a lot of placement flexibility, but the vendor did work with me to try to tune the ports and the compression driver. I was having a tough time with it.
The vendor offered to take the speakers back if I wasn't happy, but I really wanted to give this more time and a fair chance to work. Eventually the sound of the speakers began to improve on their own. I don't recall how many hours were on the drivers at that time but if I had to guess I would say it was somewhere around 150 hours. Did the performance of the speakers change because of break-in? I think it did but I really don't know that for sure. I do know that I am now very happy with my speakers and I think they sound great. But for all I know what took place over those 150 hours was that the speakers didn't really break in, the listener did, and all that really happened was that I became acclimated to their sound.--Jerome
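The objection earlier in this post about sample size and controls can be quantified with a simple binomial calculation. This sketch assumes a standard forced-choice (ABX-style) listening test scored against chance; the trial counts are illustrative, not from the thread:

```python
from math import comb

def p_value(n_trials, n_correct):
    """One-sided binomial p-value under the null hypothesis of guessing
    (p = 0.5): the probability of scoring n_correct or better by luck alone."""
    return sum(comb(n_trials, k)
               for k in range(n_correct, n_trials + 1)) / 2 ** n_trials

# A commonly cited criterion: at least 12 correct out of 16 trials.
print(round(p_value(16, 12), 4))  # 0.0384 -> below the usual 0.05 threshold

# By contrast, 3 out of 4 trials proves nothing:
print(round(p_value(4, 3), 4))    # 0.3125 -> easily achieved by guessing
```

This is why a single casual comparison, however convincing it feels, carries little statistical weight: with few trials and no controls, "heard a difference" and "guessed lucky" are hard to tell apart.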
The reason I chose those two cable geometries is that they are in common usage in the audiophile community and people within that population would agree that they sound different from one another. What defines an abnormal geometry when it comes to audio ICs is something I can't answer. The pundits say that DC to light bandwidth is not necessary in a cable carrying audio band signals. I think the two examples I gave would fall into the realm of adequate for audio signal transmission.
I would think the chance to learn something new about yourself and your system would be compelling enough to get you to try the experiment.
The construction phase of the cables would take longer than the listening test session. You stick the cables in and you hear a difference or you don't. The differences between the two should easily be gross enough in nature that no SBT or DBT testing is necessary.
If you are not interested in the cable geometry experiment, that's cool. Frankly, I suspect that if you do the break-in test with a couple of Shack patch cords you may not hear any difference. You may have a chance if your system resolution has
Well, when you compare crap to crap, what else would you expect?
Good point, face. A few posters here who come from a certain owner's circle feel that the point of this hobby is to spend as little as possible and then to sneer at those who would rather make an investment in truly high-end gear.
In general, to all: the danger of pushing "audio perception" testing to the forefront, above measurements, is that it is an inexact science, and companies can simply pick and choose which tests to use for marketing purposes. It is also well known that audio testing is easily manipulated toward a conclusion of no sonic difference (especially by those associated with big business or inexpensive-gear companies, for marketing and public-opinion purposes). This allows big business and/or low-cost manufacturers to falsely claim that "all amplifiers/preamplifiers sound the same," which of course is absurd given the different designs and different parts and quality affecting the sound.
Bruce Brisson of MIT has something to say about this in his recent Dagogo interview:http://www.dagogo.com/View-Article.asp?hArticle=793