The people who say that were confused by Kunchur. What you can hear is that if one channel of a mid-range sound is suddenly delayed by 10 µs, the sound will seem to move laterally by a small amount. It's about a time delay in one channel, nothing to do with rise time.
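For concreteness, here is a minimal Python sketch of how such a stimulus could be constructed (the 1 kHz tone, the 1 MHz working rate, and the signal length are illustrative choices, not taken from the post):

```python
import numpy as np

# Build a stereo test signal in which the right channel lags the left by 10 us.
# Working at 1 MHz makes 1 sample = 1 us, so the 10 us shift is an exact
# integer number of samples (at 44.1 kHz it would be a sub-sample delay).
FS = 1_000_000                    # working sample rate (Hz), illustrative
DELAY_US = 10                     # delay of the right channel in microseconds
F0 = 1_000                        # "mid-range" tone frequency (Hz), illustrative

t = np.arange(0, 0.5, 1 / FS)     # half a second of signal
tone = np.sin(2 * np.pi * F0 * t)

left = tone
right = np.concatenate([np.zeros(DELAY_US), tone[:-DELAY_US]])  # 10 samples = 10 us later

stereo = np.stack([left, right], axis=1)
# `stereo` could then be band-limited and resampled for playback; the delayed
# channel makes the image appear to shift slightly toward the earlier (left) side.
```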
Thank you for your query about my papers on auditory temporal (time) resolution in humans (posted on my web site: http://www.physics.sc.edu/kunchur/Acoustics-papers.htm) and for forwarding the forum comments to me. I would like to respond to some of the assertions and comments that were presented. First of all, an internet forum is a dangerous place to obtain information -- instead one should go to an authentic original source such as a published scientific paper in a refereed journal. On an internet forum, a writer can post completely arbitrary, unproven, and indeed totally wrong statements with no backing or oversight whatsoever. Normally this would be a laughing matter, except that sometimes people obtain their "education" through such forums, and this can therefore cause further harm to the scientific understanding held by the general population, which is already in a national crisis.

In science, assertions must be properly backed up and verified. I don't know who made up this nonsense of dividing the sampling period by the vertical bits to obtain a temporal resolution. The bits give the shades of intensity (related to sound pressure level) that can be differentiated, whereas the sampling period gives the frequency at which the information about these levels is updated. They have no direct connection! In digital photography, the angular image resolution is governed by the number of pixels of the camera sensor, whereas the shades of light intensity that can be discriminated are governed by the number of bits (about 14 bits in current digital SLRs). If you do not have enough pixels to resolve a certain angular separation between points in an image, no number of bits can fix this. Similarly, if you have two sharp peaks of sound pressure separated by less than the sampling period, the two will become blurred together: the temporal density of digital samples is then simply not enough to represent the two peaks distinctly, and nothing you do with the bits can change this. Unless a different interpretation of minimal temporal separation is taken, it is completely fallacious to assert that a CD can resolve less than 5 microseconds when its individual samples are separated by periods of 23 microseconds. (Note that it is true that small alterations in temporal profiles can be indirectly encoded through variations in adjacent levels, and that this is certainly aided by having more bits; however, a true translation in time of a temporal feature can only take place in quantized sample periods.)
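As a rough illustration of this arithmetic, here is a minimal Python sketch (the 44.1 kHz rate is the CD standard, but the peak timings and pulse shape are illustrative choices, not taken from the papers). It computes the CD sample period, shows that two peaks 10 microseconds apart land in the same sample bin, and shows that a band-limited pulse shifted by a sub-sample amount nevertheless changes the recorded sample values:

```python
import numpy as np

FS = 44_100                          # CD sampling rate (Hz)
TS = 1.0 / FS                        # sample period, about 22.7 microseconds
print(f"CD sample period: {TS * 1e6:.2f} us")

# 1) Two impulse-like peaks only 10 us apart fall into the same sample bin,
#    so the 44.1 kHz stream cannot represent them as two distinct events.
peak_times = np.array([500e-6, 510e-6])          # seconds (illustrative)
print("Sample bin of each peak:", np.floor(peak_times / TS).astype(int))

# 2) A *band-limited* pulse shifted by a sub-sample amount still changes the
#    sampled amplitudes, i.e. a small time shift of a single feature is
#    encoded indirectly through the levels (the parenthetical point above).
def sampled_sinc_pulse(center_s: float, n_samples: int = 32) -> np.ndarray:
    """Sample a band-limited (sinc-shaped) pulse centered at `center_s`."""
    n = np.arange(n_samples)
    return np.sinc((n * TS - center_s) / TS)     # zero crossings at the sample spacing

a = sampled_sinc_pulse(16 * TS)                  # pulse centered on a sample instant
b = sampled_sinc_pulse(16 * TS + 5e-6)           # same pulse shifted by 5 us
print("Largest change in sample values from a 5 us shift:", np.max(np.abs(a - b)))
```

Running this prints the same bin index for both peaks, while the 5 µs shift of the band-limited pulse changes some sample values by a clearly non-zero amount, consistent with the distinction drawn above between resolving two events and encoding a small shift of one feature.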
Just to give a clearer idea of how formal science and the (incredibly rigorous) scientific process are conducted, I thought I would explain what went into publishing the two above-mentioned papers that have apparently generated controversy among lay readers (interestingly, there has been no controversy whatsoever in all the professional circles, which include audiologists, otolaryngologists, acousticians, engineers, and physicists). An experiment has to be carefully thought out and then submitted as a proposal to an Institutional Review Board (IRB) and approved by them before it can even begin. Then optimum equipment, methods, and a multitude of cross checks must be developed (my papers give some details to help appreciate what went into this). It takes about half a year to conduct each sequence of controlled blind tests. Consent forms (legally approved and certified by the IRB) must be signed.

The results, analysis, and conclusions are then carefully considered and discussed with colleagues who are experts in their related inter-disciplinary fields; for this I went in person to various universities and research institutes and met with people in departments of physics, engineering, psychology, neuroscience, music, communications sciences, physiology, and materials science. After that the results and conclusions were presented at conferences of the Acoustical Society of America (ASA), Association for Research in Otolaryngology (ARO), and American Physical Society (APS). Seminars were also given at numerous universities and research/industrial institutions (please see the list on my web site). After each presentation, the audience is free to tear apart the conclusions and ask all possible questions. Eminent people such as presidents of the above-mentioned societies and corporations were present at my presentations and engaged in the discussions. After passing through this grueling oral presentation process, written manuscripts were then submitted to journals. There, anonymous referees are free to attack the submission in any way they want. More than a dozen referees and editors have been involved in this journal refereeing process. Only after everyone is satisfied with the accuracy of the results and all statements made in the manuscript are the papers published in the journals. The entire process took around 5 years from initial concept to refereed publications. (Note that an article in a conference proceeding does not go through the rigorous refereeing process of a formal journal. Essentially anything submitted there gets accepted for publication. Contents of books are also not rigorously refereed. When possible, reference should always be made to an original journal article.)

I would like to add some other observations:

(1) One should be wary of drawing conclusions based on “intuitive feelings” or because something “makes sense”. This has its role in adding plausibility to the understanding but can sometimes be contrary to fact. Thus qualitative statements based on survival and evolution cannot lead to a quantitative estimate of temporal resolution. One has to gain a detailed understanding of the physiology of the ear followed by all the neural processing steps in the ascending pathways of the brain. This knowledge can take years to acquire. (I give some references below for further reading.) On the other hand, something that cannot be understood or explained (at the moment) isn’t necessarily false. It can be dangerous to dismiss claims just because they don’t make sense. Science should deal with properly authenticated facts.

(2) Listening tests can be notoriously unreliable unless properly designed. This is why the proposal and consent forms for tests on human subjects have to be approved by the IRB; otherwise no journal will consider an article for publication. The tests have not only to be blind but must also be free of extraneous cues (such as the switching transients discussed in my papers). I would therefore be wary of informal listening tests conducted at home – these can be useful in helping you decide which component works better in your system, but they are not rigorous enough to establish a scientific fact.
(3) There is an erroneous statement in one of the forum posts: “Such temporal resolution depends on the "coincidence detector" circuitry of the medial superior olive … mostly effective below 3kHz.” Actually, the bipolar cells in the MSO (medial superior olive) encode relative delays between the right and left ears, which are used in azimuthal localization (left-right location determination). This has nothing whatsoever to do with the monaural temporal resolution being discussed. Coincidences between different frequencies arriving at each ear are encoded by octopus cells (which act like synchronous AND gates with a huge number of inputs) located in the PVCN (postero-ventral cochlear nucleus). This slew-rate information from the octopus cells then feeds bushy cells in the VNLLv (ventral subdivision of the ventral nucleus of the lateral lemniscus), which contributes to elements of pattern recognition.

I hope this clarifies the meaning of temporal resolution in the context of sound reproduction systems. For further insight into psychoacoustics and the neurophysiology of hearing, I can recommend the following books:
(1) “The psychology of hearing” by Brian Moore
(2) “Integrative functions in the mammalian auditory pathways” by Oertel, Fay, and Popper
(3) “Neuroscience” by Purves et al.
I have personally met with and discussed my results with the authors of the first two books. All of these books are used as texts at universities. The last one is used in introductory neuroscience courses and is relatively easy to read.

Sincerely,
Milind Kunchur
*********************************************************************************
Milind N. Kunchur, Ph.D.
Professor of Physics
Department of Physics and Astronomy, University of South Carolina
Web: http://www.physics.sc.edu/kunchur