Speech and Chaos

Tom Holroyd tomh at BAMBI.CCS.FAU.EDU
Mon Feb 1 10:27:50 EST 1993


You should try modelling delayed auditory feedback (DAF).  In the
typical experiment, a person speaks into a microphone, the speech is
delayed by roughly 200 to 250 ms, and it is played back through
headphones the speaker is wearing.  The robust finding is that the
speaker's speech is disrupted.  There is a maximally disruptive delay
(around 200-250 ms) at which it is almost impossible to speak - the
auditory feedback interferes substantially with the production of
speech.  Speech *can* proceed
normally with no auditory feedback, much like a deafferented arm can
move to the correct target (i.e., using only feedforward processing).  But
normally there is feedback with an appropriate (short) delay.  DAF
experiments show that this feedback plays a role in speech production.
When there is no auditory feedback, the speaker undoubtedly uses an
anticipatory model to predict the feedback and does just fine.  But
real DAF screws up the production process.
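
If you want to simulate the apparatus itself, the delay between the
microphone and the headphones is just a ring-buffer delay line.  Here
is a minimal sketch in Python/NumPy; the 16 kHz sample rate, the 220 ms
delay, and the random stand-in for a recorded utterance are
illustrative assumptions, not taken from any particular experiment.

import numpy as np

def daf_delay(mic, delay_ms, fs=16000):
    # Ring-buffer delay line: output sample i is input sample i - d,
    # which is what the DAF rig plays back into the headphones.
    d = int(round(delay_ms * fs / 1000))   # 200-250 ms -> 3200-4000 samples
    buf = np.zeros(d)
    out = np.empty(len(mic))
    for i, sample in enumerate(mic):
        out[i] = buf[i % d]                # play what was said d samples ago
        buf[i % d] = sample                # store the current sample for later
    return out

mic = np.random.randn(16000)               # stand-in for one second of speech
headphones = daf_delay(mic, delay_ms=220)  # what the speaker hears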

This is not so deep as grammar, but one can certainly view speech
communication this way: the speaker's speech signal acts as a
perturbation of the listener's dynamical system.  This view is
supported in part by the relative lack of information in the speech
signal itself.  Speech can be digitized down to 1 bit at 16 kHz and
still be recognized and understood.  Noise can be added, and so on.
Much of the "information" transmitted is already in the listener's
head, and is simply activated by the speech signal.
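
To make the 1-bit point concrete: digitizing down to 1 bit at 16 kHz
just means keeping the sign of each sample (infinite peak clipping).
A sketch, with a synthetic tone standing in for real speech:

import numpy as np

fs = 16000                               # 16 kHz, as in the text
t = np.arange(fs) / fs                   # one second of samples
speech = np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * 660 * t)

# One bit per sample: only the sign survives.  Zero crossings are kept,
# and for real speech that is enough for it to remain intelligible.
one_bit = np.where(speech >= 0.0, 1.0, -1.0)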

So the question would be: how does a P-C process react when its output
is coupled back to its input with a delay?
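
One way to poke at that question numerically: a toy perceive-and-correct
loop (reading "P-C" that way is my assumption) that tracks a fixed
reference but only perceives its own output after a delay.  With a
one-step delay it settles on the reference; with a 20-step delay
(roughly 200 ms if a step is read as ~10 ms) the stale error makes it
overshoot and oscillate instead of settling.

import numpy as np

def perceive_correct(delay_steps, gain=0.5, steps=400, reference=1.0):
    out = np.zeros(steps)
    for t in range(1, steps):
        # The loop only "hears" its own output from delay_steps ago.
        perceived = out[t - delay_steps] if t >= delay_steps else 0.0
        error = reference - perceived          # compare perception to reference
        out[t] = out[t - 1] + gain * error     # correct using the stale error
    return out

short_delay = perceive_correct(delay_steps=1)   # converges smoothly
long_delay = perceive_correct(delay_steps=20)   # overshoots and oscillates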

Tom Holroyd
Center for Complex Systems and Brain Sciences
Florida Atlantic University, Boca Raton, FL 33431 USA
tomh at bambi.ccs.fau.edu




