I'm exploring the possibility that speech may be a Pecora-Carroll process
that synchronizes two perturbed quasi-periodic processes, one serving to
select the utterance (at the deep grammar level) and the other serving to
track the utterance.
A Pecora-Carroll process is used to synchronize chaotic processes. The
dynamics of the chaotic process are decomposed into two components: one is
transmitted from the driving system to the driven system, and the other is
duplicated locally at each system. It is necessary and sufficient for
synchronization that the conditional Lyapunov exponents for the variables
in the local components be negative. (This means that the transmitted
component must contain all of the periodic and chaotic variables in the
dynamics.) The originators of this concept are Lou Pecora and Tom Carroll
at NRL; their papers have appeared in a number of IEEE journals.
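A minimal numerical sketch of the decomposition described above, using the
Lorenz system as the standard illustration (the Lorenz example is my
choice here, not something claimed in this note): the drive transmits its
x variable, the response duplicates the (y, z) subsystem, and because the
conditional Lyapunov exponents of that subsystem are negative, the
response converges to the drive.

```python
# Pecora-Carroll synchronization sketch (Lorenz system, x-drive).
# Drive system: full Lorenz equations. Response: a duplicated (y, z)
# subsystem that receives only the transmitted variable x.

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0
DT, STEPS = 0.001, 50_000  # forward-Euler step and total steps (T = 50)

x, y, z = 1.0, 1.0, 1.0    # drive initial condition
yr, zr = -5.0, 20.0        # response starts far from the drive

for _ in range(STEPS):
    # Drive dynamics (full Lorenz system).
    dx = SIGMA * (y - x)
    dy = x * (RHO - z) - y
    dz = x * y - BETA * z
    # Response dynamics: same (y, z) equations, but using the
    # transmitted drive variable x in place of a local x.
    dyr = x * (RHO - zr) - yr
    dzr = x * yr - BETA * zr
    x, y, z = x + DT * dx, y + DT * dy, z + DT * dz
    yr, zr = yr + DT * dyr, zr + DT * dzr

# Negative conditional Lyapunov exponents => this error decays to zero.
error = abs(y - yr) + abs(z - zr)
print(f"final sync error: {error:.2e}")
```

The same construction with the x variable kept local instead (y-drive or
z-drive) behaves differently, since only some decompositions leave the
duplicated subsystem with all-negative conditional exponents.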
Steve Barry suggests some questions:
1. Has any work been done on localizing deep grammar processing, and is
the region the same for speech and for hearing?
2. In studies of shadowed speech, what are the limits on the time delay
between hearing an utterance and shadowing it? What about a paraphrase (or
simultaneous translation) of the utterance?
If speech is a P-C process, the driven process in the listener should have
negative conditional Lyapunov exponents. This could be explored using
methods similar to those of Walt Freeman.
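One standard way to test for negative conditional Lyapunov exponents from
a driving signal alone is the auxiliary-system method: drive two identical
copies of the response subsystem with the same signal from different
initial conditions, and check whether the copies converge. A sketch,
again using the Lorenz x-drive purely as a stand-in for whatever the
driven process in the listener turns out to be:

```python
# Auxiliary-system test: two identical response copies, same drive
# signal, different initial conditions. If the copies converge, the
# largest conditional Lyapunov exponent is negative.
import math

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0
DT, STEPS = 0.001, 10_000  # T = 10 time units

x, y, z = 1.0, 1.0, 1.0    # drive (full Lorenz system)
ya, za = 2.0, 2.0          # response copy A
yb, zb = -9.0, 30.0        # response copy B

d0 = math.hypot(ya - yb, za - zb)  # initial separation of the copies
for _ in range(STEPS):
    dx = SIGMA * (y - x)
    dy = x * (RHO - z) - y
    dz = x * y - BETA * z
    dya, dza = x * (RHO - za) - ya, x * ya - BETA * za
    dyb, dzb = x * (RHO - zb) - yb, x * yb - BETA * zb
    x, y, z = x + DT * dx, y + DT * dy, z + DT * dz
    ya, za = ya + DT * dya, za + DT * dza
    yb, zb = yb + DT * dyb, zb + DT * dzb

d1 = math.hypot(ya - yb, za - zb)  # final separation
# Average exponential rate of separation; negative => synchronization.
rate = math.log(d1 / d0) / (STEPS * DT)
print(f"estimated rate: {rate:.2f} per unit time")
```

Applied to speech data, the drive signal would be the heard utterance and
the response copies would be models of the listener's tracking process;
the estimated rate plays the role of the largest conditional Lyapunov
exponent.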
It would be interesting to put together a dynamic, multi-level model of
speech and hearing to see if a test model compatible with a quasi-periodic
implementation is feasible.
Internet: erwin at trwacs.fp.trw.com