I've been looking at the consequences of Walt Freeman's model of neural
system dynamics. His baseline model--the olfactory system--is
characterized by chaotic dynamics that a matching object pattern can pull
to a noisy limit cycle through a process of "winner take all." Continued
chaotic dynamics associated with an unmatched stimulus object eventually
result in an orientation response. The dynamics associated with no object
are also recognized as a "quiet" state. In the olfactory system, the
system is set to an initial state near a hyperbolic point at each breath.
The net can be loaded with object data supplied by the cortex, which
injects synthetic sensory data through a secondary network.
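As a loose analogy only (not Freeman's actual equations), a one-parameter
map shows how a small change in drive can move a system between a chaotic
regime and a periodic one, much as a matching stimulus is hypothesized to
pull the net from chaos to a limit cycle:

```python
# Toy analogy (not Freeman's model): the logistic map is chaotic at one
# parameter value and settles to a short cycle at another.

def logistic_orbit(r, x0=0.3, burn_in=500, n=8):
    """Iterate x -> r*x*(1-x), discard transients, return n samples."""
    x = x0
    for _ in range(burn_in):
        x = r * x * (1.0 - x)
    orbit = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        orbit.append(round(x, 4))
    return orbit

chaotic = logistic_orbit(3.9)   # aperiodic "unrecognized" regime
periodic = logistic_orbit(3.2)  # settles to a period-2 cycle
print("r=3.9:", chaotic)
print("r=3.2:", periodic)
```

The periodic orbit visits only two values; the chaotic one never repeats.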
Now, the chaos in Freeman's model involves the relative activation levels
of individual cells in multiple networks. In reality, those activation
levels are expressed by frequency modulation of underlying carrier
waveforms. Barry Richmond's work suggests that those carrier waveforms
are generated from characteristics of the individual sensory objects. In
the olfactory system, this phenomenon is not important, but in the auditory
system there is some evidence that the carrier waveform reflects the
narrow-band sound spectrum of the object in some manner. Spatial frequency
may be involved in keying visual objects.
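A minimal sketch of the frequency-modulation idea, where an activation
level shifts the instantaneous frequency of an object-keyed carrier; all
parameters (carrier frequency, modulation depth, the activation signal)
are purely hypothetical:

```python
import math

def fm_wave(carrier_hz, activation, mod_depth_hz, sample_rate=8000):
    """Frequency-modulate a carrier with an activation signal:
    instantaneous frequency = carrier_hz + mod_depth_hz * activation."""
    phase = 0.0
    out = []
    for a in activation:
        phase += 2 * math.pi * (carrier_hz + mod_depth_hz * a) / sample_rate
        out.append(math.sin(phase))
    return out

# Hypothetical activation level rising and falling over 100 samples.
activation = [math.sin(2 * math.pi * i / 100) for i in range(100)]
wave = fm_wave(carrier_hz=440.0, activation=activation, mod_depth_hz=40.0)
```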
So Freeman's chaos is seen in the time intervals between neural spikes.
Poincaré maps can be generated by mapping consecutive time intervals in
one spike train or corresponding time intervals in multiple spike trains.
If a spike train becomes (approximately) periodic, or if two become
synchronized, it is an indication that a recognizable object is "out
there." In some way, the keying carrier wave for that object is generated
by the subnet "recognizing" that object, and then the sensory data for
this object detection is used to frequency modulate that carrier wave.
(This indicates that the "downloading" process is complex, since it must
set up the subnet to generate the carrier wave in addition to setting up
the subnet to identify the pattern of sensory data of interest.)
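The interval-map construction above can be sketched directly: pair each
interspike interval with its successor, and call a train "approximately
periodic" when its intervals cluster tightly. The tolerance and the
example spike trains are invented for illustration:

```python
def return_map(spike_times):
    """Consecutive interspike intervals as (I_n, I_n+1) pairs --
    the Poincare-style map built from one spike train."""
    intervals = [t1 - t0 for t0, t1 in zip(spike_times, spike_times[1:])]
    return list(zip(intervals, intervals[1:]))

def is_roughly_periodic(spike_times, tol=0.05):
    """True if every interval lies within tol of the mean interval,
    i.e. the return-map points sit near one spot on the diagonal."""
    intervals = [t1 - t0 for t0, t1 in zip(spike_times, spike_times[1:])]
    mean = sum(intervals) / len(intervals)
    return all(abs(i - mean) <= tol * mean for i in intervals)

periodic_train = [0.0, 0.101, 0.199, 0.302, 0.400]  # ~100 ms intervals
irregular_train = [0.0, 0.050, 0.210, 0.260, 0.480]
print(is_roughly_periodic(periodic_train))   # True
print(is_roughly_periodic(irregular_train))  # False
```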
Now to speech and hearing.
A possible model for the generation of "meaning" from sound goes as
follows:
1. Various sound streams are generated, each keyed by the narrow-band
spectrum of the source. Direction data are generated by time difference
between the ears and more complexly by the various paths available in the
pinnae. These are merged and used to localize various sound "objects,"
each of which is tracked independently and subconsciously.
2. One or more of these streams is presented to an array that operates to
"chunk" the stream into meaning. The array performs pattern-matching using
a variant on Freeman's olfactory system processing. If a given "chunk"
matches the appropriate pattern, the system is pulled to a noisy limit
cycle that has meaning to the downstream components of the cortex.
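The interaural-time-difference part of step 1 can be sketched with a
brute-force cross-correlation, finding the lag at which the two ear
signals align best; the signals and lag range here are invented for
illustration:

```python
def best_lag(left, right, max_lag):
    """Estimate interaural delay (in samples) as the lag that
    maximizes the cross-correlation of the two ear signals."""
    def corr_at(lag):
        return sum(left[i] * right[i + lag]
                   for i in range(len(left))
                   if 0 <= i + lag < len(right))
    return max(range(-max_lag, max_lag + 1), key=corr_at)

# Hypothetical: the right ear hears the same click 3 samples later.
click = [0, 0, 1, 4, 1, 0, 0, 0, 0, 0, 0, 0]
left = click
right = [0] * 3 + click[:-3]
print(best_lag(left, right, max_lag=5))  # 3
```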
Some questions arise:
1. Is all of a language downloaded to this array?
2. Are there multiple initial states or just one? When is the system
initialized to that state?
3. Is the array single-level, or are there multiple levels of processing?
(I understand nouns are "understood" in different regions of the brain.)
4. If the array does not contain patterns for all of a language (or
multiple languages--I can mix three in a conversation), how are
words/patterns downloaded for recognition? Or is the active array a subset
of a much larger preinitialized array?
Note that the key information transmitted from module to module of the
brain is not individual activations or even pulse trains but collections
of pulse trains that together form "noisy" limit cycles.
Internet: erwin at trwacs.fp.trw.com