I congratulate you on arriving at an estimate of 30ms, inasmuch as
this is about the value that researchers at Haskins Laboratories
(Connecticut) have found critical for human speech-sound perception.
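As a quick sanity check on that number (my own back-of-the-envelope
sketch, not part of the original exchange): for two musicians in the
same room, the delay in question is just acoustic travel time, so a
~30ms budget can be converted to a physical separation, assuming sound
travels about 343 m/s in air at room temperature.

```python
# Hedged sketch: relate a ~30 ms simultaneity threshold to physical
# separation, assuming the speed of sound in air is ~343 m/s (20 C).
SPEED_OF_SOUND_M_PER_S = 343.0

def delay_for_distance(meters: float) -> float:
    """Acoustic travel time (seconds) across the given distance."""
    return meters / SPEED_OF_SOUND_M_PER_S

def max_distance_for_delay(seconds: float) -> float:
    """Farthest separation whose acoustic delay stays within `seconds`."""
    return seconds * SPEED_OF_SOUND_M_PER_S

# A 30 ms budget corresponds to roughly 10 m of separation.
print(round(max_distance_for_delay(0.030), 2))  # ~10.29 m
```

So on the ~30ms figure, two players more than about ten metres apart
would already be pushing the limit from acoustics alone, before any
network delay is added.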
It's difficult to describe without graphics, but imagine spectrograms
with 2 or 3 frequency bands changing across time: a "formant
transition" (i.e. the way in which one of these bands changes
frequency) taking place across about 30-40ms is the basis for our
discrimination of "stop consonants" (e.g. p, t, k, etc.).
Essentially, they map the vocal apparatus moving from its starting
position to the onset of the vowel sound, and the target position is
prefigured from the start, so there is no such thing as (e.g.) "P" in
isolation. (When the Haskins people presented these transitions in
isolation, they sounded like odd clicks or whatever, nothing like
speech.)
Paula Tallal has studied this for many years in relation to individual
differences in temporal resolution, and has related poor temporal
resolution to impaired language abilities underlying at least one form
of dyslexia. She can assess this ability by presenting clicks or pure
tones (or even visual stimuli) with very brief separations. Children
who need separations much longer than 40-50ms to tell the difference
between one and two clicks are at risk for dyslexia.
(It turns out that, contrary to prior suspicions about "poor sequencing
ability", their sequencing is OK if intervals are long enough for them
to distinguish two events.)
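The two-click paradigm is simple enough to sketch in code. The
following is my own minimal illustration (not Tallal's actual stimuli
or parameters), assuming numpy: two brief rectangular clicks separated
by a configurable silent inter-click interval.

```python
# Hedged sketch of a two-click temporal-resolution stimulus:
# click, silent gap of isi_ms milliseconds, second click.
import numpy as np

def two_click_stimulus(isi_ms: float, sr: int = 44100,
                       click_ms: float = 1.0) -> np.ndarray:
    """Return a mono signal: click, gap of isi_ms, second click."""
    click = np.ones(int(sr * click_ms / 1000.0))  # 1 ms rectangular click
    gap = np.zeros(int(sr * isi_ms / 1000.0))     # silent interval
    return np.concatenate([click, gap, click])

# At a 40 ms separation (near the at-risk threshold mentioned above),
# the whole stimulus lasts about 1 + 40 + 1 = 42 ms.
sig = two_click_stimulus(40.0)
print(len(sig) / 44100 * 1000)  # roughly 42 ms
```

Sweeping `isi_ms` downward until a listener reports hearing one click
instead of two is the basic shape of the threshold measurement.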
A lit search using "au: Tallal, Paula" should put you in touch with
her work.
Incidentally, someone using a remedial training technique which grew
out of this research will be one of the speakers at the New York
Neuropsychology Group's 20th annual conference, May 8, at New York
University Med Ctr: "Neuropsychology and Treatment: After Testing, ..."
F. Frank LeFever, Ph.D.
In <005801be7641$dc729080$091bfbd0 at default> rcb5 at MSN.COM ("Ron Blue") writes:
>From: John Segrave <segravej at R_E_M_O_V_E.tcd.ie>
>To: audiolog at net.bio.net <audiolog at net.bio.net>
>Date: Wednesday, March 24, 1999 12:08 PM
>Subject: How do Humans Perceive Simultaneous Sounds?
>>I am a final year computer science student in Ireland. My research
>>is to develop a program that lets musicians play music together over
>>a computer network (not the internet unfortunately!).
>>
>>One question I have been unable to answer is this (I hope this is in
>>the domain of audiology!):
>>What is the longest time delay you can have between two different
>>sounds starting, such that the ear still thinks they started
>>simultaneously?
>>
>>For example: If a guitar player and a drummer are sitting in a room
>>and they start playing a tune, then the ear should perceive them to
>>be in sync (even though there is a short time delay before the sound
>>waves from the guitar reach the drummer's ears, and vice-versa).
>>
>>However, if the drummer and the guitar player were very far away from
>>each other, that delay would be much longer. So how long would the
>>delay have to be before they can no longer play together? (because
>>the delay is messing up their ability to be in sync).
>>
>>I have tried to perform some simple tests at home, and have arrived
>>at a rough figure of about 30 milliseconds, but I am no audiologist!!
>>I was hoping someone in this newsgroup might know something about
>>this aspect of human hearing. Maybe someone could suggest a book that
>>deals with it.
>>
>>I have searched the web, but have found nothing so far. Any help you
>>can offer would be very much appreciated.
>>
>>segravej at tcd.ie