I'll restate what I attempted to say earlier for clarification:
It is possible to tell different musical instruments apart by ear with great reliability. It's easy to verify that we get it right here [unlike with, say, a capacitor] because you can simply look at the physical object and there's no question as to what it is.
You appear to be suggesting above [correct me if I'm wrong] that measuring equipment might be unable to pick up such simple matters as the difference between instruments.
I don't know if this is true or not, and I don't know the limitations of measuring equipment. Dan is quite certain that it can be done, and maybe he's right. Still, I don't know the limitations of the equipment here, and "voice recognition", to name one tool he backs, still seems to have a long way to go. How would it go at picking a single instrument out of the midst of an orchestra, for example?
The significance of the "what instrument is playing" issue seems to me to be that it's a good test: there's no question that instruments are different, and it offers a good base to compare the ability of the ear to pick a difference against the ability of a piece of measuring gear to do the same [yeah, yeah ... it can't be "the same"], but you get my point?
As far as the preamp example that Danny gave, that's out-of-phase crosstalk causing the effect.
What I'm saying is that it's trivially easy to measure differences between two instruments. Identifying a particular instrument goes beyond measurement and on to interpretation. And what I don't understand is what is the particular relevance of identifying a particular instrument as it relates to the subject at hand?
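The "trivially easy to measure" point can be illustrated with a spectrum comparison: two tones at the same pitch but with different harmonic recipes (roughly what the ear hears as timbre) yield obviously different measurements. A minimal NumPy sketch with synthetic tones; the harmonic amplitudes here are invented for illustration, not real instrument data:

```python
import numpy as np

def harmonic_levels(signal, rate, f0, n_harmonics=5):
    """Magnitude of the first few harmonics of a tone with fundamental f0,
    in dB relative to the fundamental."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / rate)
    levels = []
    for k in range(1, n_harmonics + 1):
        bin_idx = np.argmin(np.abs(freqs - k * f0))
        # take the peak in a small window around the expected bin
        levels.append(spectrum[max(0, bin_idx - 3):bin_idx + 4].max())
    ref = levels[0]
    return [20 * np.log10(lv / ref) for lv in levels]

rate, f0 = 48000, 440.0
t = np.arange(rate) / rate  # one second

# Two toy "instruments" playing the same note: same pitch,
# different harmonic recipes (a crude stand-in for timbre).
clarinet_like = sum(a * np.sin(2 * np.pi * k * f0 * t)
                    for k, a in [(1, 1.0), (3, 0.5), (5, 0.25)])  # odd harmonics
string_like   = sum(a * np.sin(2 * np.pi * k * f0 * t)
                    for k, a in [(1, 1.0), (2, 0.6), (3, 0.4), (4, 0.3)])

print(harmonic_levels(clarinet_like, rate, f0))
print(harmonic_levels(string_like, rate, f0))
```

The two harmonic profiles come out plainly different, which is the measurement side of the argument; deciding "that's a clarinet" from such a profile is the interpretation step the post distinguishes.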
Ah, well, if you are saying that it is trivially easy to measure the difference between two different instruments, then I have misunderstood the earlier statement I quoted.
As for your measurement/interpretation point: I hear what you're saying, but I don't agree.
Not a bad discussion though ... thanks
Perhaps what he is getting at here Steve, is that humans can easily identify what instrument is producing a note, but electronic testing instruments cannot, or cannot as of now.
Man, you guys are cyborg argument mongers.
Hey guys, loving this thread. Really.

Quote: "As far as the pre amp example that Danny gave, that's out of phase crosstalk causing the effect."

Very interesting. Sounds very plausible as to why one preamp would have better soundstaging than another. Just out of curiosity, does anybody make a mono bloc configuration preamp? You know, two separate units, so there's no possibility of crosstalk?
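For what it's worth, crosstalk itself is one of the easier things to quantify: drive one channel with a test tone, leave the other channel's input silent, and measure how much of the tone leaks into the quiet channel's output. A minimal sketch with synthetic signals (the 0.1% leakage figure is invented for illustration; a real measurement would use an actual two-channel capture of the preamp):

```python
import numpy as np

def crosstalk_db(driven_out, quiet_out):
    """Crosstalk figure: leakage level in the quiet channel relative to
    the signal level in the driven channel, in dB (more negative = better)."""
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    return 20 * np.log10(rms(quiet_out) / rms(driven_out))

rate = 48000
t = np.arange(rate) / rate
tone = np.sin(2 * np.pi * 1000 * t)

# Hypothetical stereo preamp: 0.1% of the left channel bleeds into the right.
left_out = tone
right_out = 0.001 * tone  # leakage only; the right channel's input was silence

print(round(crosstalk_db(left_out, right_out), 1))  # -60.0 dB
```

A fully dual-mono (or "mono bloc") preamp pushes that figure toward the noise floor, since there is no shared circuitry for the channels to leak through.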
Quote from: Daygloworange on 8 Feb 2007, 03:18 am: "Perhaps what he is getting at here Steve, is that humans can easily identify what instrument is producing a note, but electronic testing instruments cannot, or cannot as of now."

Yes, I know what he's getting at. What I'm getting at is that I don't see what relevance that has.
Again, perhaps what he might be getting at, or at least what I'm getting at, is that humans can hear better imaging between certain component swaps, where they might be able to hear the 1st and 3rd violins more clearly in the soundstage. But would you be able to measure the differences with a piece of equipment commonly used to quantify electronic design?

Some components, when A/B'd, have a similar "overall sound" but differ in that some have a more forward presentation, while others have a more laid back presentation. Some have better separation between individual instruments in the soundstage, while others have a soft aura around each player/instrument, as opposed to sharp delineations.

So the relevance would be: are current electronic measuring instruments (or software algorithms) able to clearly plot, somehow, the factors involved in making these types of different sonic presentations? Could they show how the localization cues differ between those two types of sonic presentations?
Plenty of people report very similar findings to my examples to describe the differences they hear when A/B'ing components.
As it stands, this CLC gizmo is what I would consider a faith-based talisman, relying on the holistic faith of the end user. That is very different from a tweak. It is not connected to the system in any way, shape or form that I can easily see. Its claimed effect is on the test subject. This is a paranormal study, not an audio one. The fact that this thing won a Positive Feedback award for audio is amazing. If it actually works, it should have been submitted for a Nobel prize; it would be a lot more useful in other applications than to such a small demographic as "audiophiles".

The inventors of this thing should probably have a better person in charge of their marketing...
Ethan, are you saying the only two parameters of quality sound reproduction are frequency response and distortion?
Yes. But the first order of business would be to establish that what they're perceiving is actually due to the components. Because if it's not, then you could end up wasting a hell of a lot of time and effort trying to chase down a phantom.
Would you care to volunteer? Who knows? With enough effort, you might be able to get to meet the King of Sweden someday.
John,

> Can anyone look at these measurements and tell me what instruments and people are on the recording? <

No, but that's irrelevant to this discussion! What you are asking about is basically artificial intelligence. What I'm talking about is assessing audio equipment to determine if it changes the sound passing through it. It doesn't matter if the singer is John, Paul, George, or Ringo. Measurements absolutely can tell if a piece of gear (or your room) is changing the sound in any way that is audible.

--Ethan
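The thread doesn't spell out how one would test whether gear "changes the sound passing through it," but a standard technique that fits this claim is a null (difference) test: level-match the device's output against its input, subtract, and look at the residual. A toy sketch with synthetic signals; the device model (a small gain error plus a touch of second-order nonlinearity) is invented for illustration:

```python
import numpy as np

def null_depth_db(reference, output):
    """Level-match output to reference (least-squares gain), subtract,
    and report the residual relative to the reference, in dB.
    More negative = the device changes the signal less."""
    gain = np.dot(output, reference) / np.dot(reference, reference)
    residual = output / gain - reference
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    return 20 * np.log10(rms(residual) / rms(reference))

rate = 48000
rng = np.random.default_rng(0)
source = rng.standard_normal(rate)  # stand-in for program material

# Hypothetical device under test: slight gain error plus a little
# second-harmonic-style nonlinearity.
device_out = 0.98 * source + 0.002 * source ** 2

print(round(null_depth_db(source, device_out), 1))
```

In a real measurement the two signals would also need time alignment before subtraction; anything left in the residual after matching is, by construction, whatever the device added or removed.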