The Problem With A/B Tests

Waker

The Problem With A/B Tests
« on: 12 Feb 2009, 09:04 am »
After again reading Frank's report of his double-blind A/B comparison of MIT cables and Home Depot zip cord, I am obliged to point out some unavoidable drawbacks to this method. Test subjects are asked to listen carefully for differences in sound quality in a short time, in an unfamiliar setting, on gear they are not sonically accustomed to, and sometimes even to music they do not know. You might feel that these variables should make no difference as long as the variable under test is presented fairly, but consider how we all listen in our own homes compared with how we listen in a retail setting or at a show. If you really want to evaluate a new piece of equipment, I suggest there is no better environment than your own listening room, on your own system, with plenty of unpressured time and favorite music you have heard many times over the years. That means taking the piece home, auditioning it for three or more days, and listening to music you know and love and that you wish sounded better in some ways. Then you will know what an upgrade of any kind will or will not do for you.

jrtrent

  • Jr. Member
  • Posts: 130
Re: The Problem With A/B Tests
« Reply #1 on: 12 Feb 2009, 01:26 pm »
The A/B test is designed to help us hear sonic differences between components. I find it works best to use passages of 30 seconds or less, so that what I'm hearing is still fresh in memory when I make the comparison. I might repeat this several times, attending to a different aspect of the performance on each trial. It does not take days to decide if one component is better than another. I agree that if you're looking to upgrade an existing system, it makes sense to arrange an in-home demonstration or to take as many pieces of your own gear to the shop as possible (I've taken my entire system to the shop on more than one occasion). I'm also still a believer in single-speaker demonstration facilities, that is, audio rooms in which only one pair of speakers is present at a time (the idea being that additional, undriven speakers in the room will vibrate sympathetically with the original sound source, making even good systems sound confused and masking low-level detail).

Sonic differences are one thing, and living with a piece of gear is another.  It might take a few days to determine if the operational features, handling, and even appearance are going to end up pleasing or annoying you.
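
For what it's worth, this short-passage, repeated-trial routine is easy to keep honest if someone else does the switching and the presentation order is randomized. Below is a minimal sketch of that bookkeeping in Python; the component labels and the trial count are placeholders of my own, not details from Frank's test or anyone else's.

# Minimal sketch of a randomized, repeated short-passage comparison.
# The labels and the trial count are illustrative assumptions only.
import random

COMPONENTS = ["component A", "component B"]  # hypothetical labels
TRIALS = 10                                  # several ~30-second passages

def run_blind_trials(trials=TRIALS):
    correct = 0
    for t in range(1, trials + 1):
        actual = random.choice(COMPONENTS)   # the operator switches to this one
        # ...a short, familiar passage is played through `actual` here...
        guess = input(f"Trial {t}: which component was playing? {COMPONENTS} ")
        if guess.strip() == actual:
            correct += 1
    print(f"Identified correctly in {correct} of {trials} trials")

if __name__ == "__main__":
    run_blind_trials()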

turkey

  • Full Member
  • Posts: 1888
Re: The Problem With A/B Tests
« Reply #2 on: 12 Feb 2009, 01:29 pm »
Quote from: jrtrent on 12 Feb 2009, 01:26 pm


I agree with you completely.

Art_Chicago

Re: The Problem With A/B Tests
« Reply #3 on: 12 Feb 2009, 04:13 pm »
Quote from: Waker on 12 Feb 2009, 09:04 am

I tend to agree with you in general, but Frank and Jim were quite familiar with the equipment and the room, since they did the test at the end of the show. As I understood it, their opinions split the same way as those of the MIT guys, who perhaps did not have much experience with the AVA-Salk setup.

S Clark

  • Full Member
  • Posts: 7368
  • a riot is the language of the unheard - Dr. King
Re: The Problem With A/B Tests
« Reply #4 on: 12 Feb 2009, 08:09 pm »
I don't agree. In this case, the test was to see whether there were differences of any significance. If a difference is not subtle, an A/B test should uncover it quickly.
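
To put a rough number on "significance": a quick way to check whether a set of blind trials beats plain guessing is a binomial tally. The snippet below is only an illustration with invented numbers, not the actual outcome of Frank and Jim's comparison.

# Chance of getting k or more correct out of n blind A/B trials by guessing alone.
# The 12-of-16 score below is an invented example, not a reported result.
from math import comb

def p_at_least(k, n, p=0.5):
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(round(p_at_least(12, 16), 4))  # about 0.038, under the usual 0.05 cutoff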