Norm Cook's Topographic Callosal Inhibition?
park at netcom.com
Thu Dec 9 21:23:19 EST 1993
I have just enjoyed reading
Norman D. Cook, _The Brain Code. Mechanisms of Information Transfer
and the Role of the Corpus Callosum_, London and New York,
Methuen & Co., ISBN 0-416-40840-0 (1986).
and was quite impressed with Cook's seven-year-old theory.
What do people think of Cook's ideas these days? Have they been
refuted, ignored, considered obvious, or accepted as a breakthrough
and elaborated upon? Surely the jury is not still out?
------- SUMMARY -------
Very briefly, Cook proposed that an important function of the corpus
callosum is to activate, on the cortex of the right hemisphere, those
concepts that constitute the "context" of (i.e., the concepts most
closely-associated with) any concept that is currently activated on
the left hemisphere. His explanation posits the presence of several
other general neural mechanisms to make this work, such as (1) local
lateral inhibition, (2) a diffuse bilaterally symmetric activation
from the brainstem as an arousal and attention-focusing mechanism and
a source of activation for any neurons whose inhibitory inputs may
become suppressed, (3) a symmetric, mirror-image arrangement of
concept representations ("engrams") on each hemisphere, (4)
topographic connections through the corpus callosum (i.e.,
corresponding points are connected on each side), and (5) a "semantic
network" on the right side, in which semantic associations are encoded
as lateral inhibitory surround connections. Note that he doesn't posit
"labeled" arcs in this semantic net, such as the "is-a" or "part-of"
relations used in many artificial intelligence experiments with
semantic networks; just raw associations.
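As a sanity check on my own reading, here is a toy sketch of how mechanisms (2)-(4) would combine. This is only my interpretation, and every name and number in it is invented for illustration; it is not Cook's simulation.

```python
# Toy sketch of topographic callosal inhibition as I read Cook.
# All parameter values are invented for illustration.

AROUSAL = 1.0    # (2) diffuse, bilaterally symmetric brainstem drive
CALLOSAL = 1.0   # (4) strength of topographic callosal inhibition

def right_activation(left):
    """Activation on the right cortex produced by a binary pattern on
    the left. By (3), engrams are mirror-symmetric, so column i on the
    left maps to column i on the right; by (4), the callosal fibre
    from an active left column inhibits that right column, while the
    brainstem drive activates every column whose inhibition is absent."""
    return [max(0.0, AROUSAL - CALLOSAL * x) for x in left]

# A concept active on the left yields the complementary ("context")
# pattern on the right:
print(right_activation([0, 0, 1, 1, 0, 0]))   # [1.0, 1.0, 0.0, 0.0, 1.0, 1.0]
```

With local lateral inhibition (1) added among the right-side columns, this raw complement would get sharpened further, which is what point 1 below is about.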
Among other predictions, Cook suggested that his model would help to
explain the extreme speed and accuracy with which people can
disambiguate language as they listen to speech or read: The right
hemisphere is working in parallel with the serial grammatical
processing that is going on in the left hemisphere. It helps the left
side (in some way Cook never explains) to assign the correct meaning
or part of speech to each word when it arrives by supplying the
context of the utterance to that point as implied by preceding words,
preceding sentences, the social situation, etc. The context guides
the choice among the candidate meanings.
The following are some questions that occurred to me about Cook's
model of "brain coding," and some ideas I had about how to extend it
along the direction in which Cook started. Perhaps these ideas will
be old news to experts in the field by now.
1) Cook describes the activation pattern that the corpus callosum
would create on the right hemisphere as the "negative" of the pattern
on the left. It seems to me that a more correct description would be
that it is composed of the (internal and external) "boundaries" of the
pattern in the left-hemisphere. That is to say, I think Cook's brain
coding model is not a "semantic inverter," but a "semantic edge
extractor." This is most clearly shown in his Figure 4.1, although it
is intended to illustrate abnormal, not normal, brain function. His
Figure 3.10, the result of a computer simulation of his theory, is the
real "meat" of the book. But, unfortunately, he merely mentions it
without giving it the thorough discussion it deserves!
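To convince myself of the "edge extractor" reading, I wrote the self-contained toy below (my own, not Cook's code; all numbers are invented). With diffuse arousal, topographic callosal inhibition, and nearest-neighbour lateral inhibition, the right-side columns just bordering the left-active region end up more active than columns deep inside the "negative."

```python
# Toy demonstration that callosal inhibition plus lateral inhibition
# behaves like a "semantic edge extractor" rather than a plain
# "semantic inverter." All parameter values are invented.

def right_boundary(left, arousal=1.0, callosal=1.0, lateral=0.3):
    # Stage 1: topographic callosal inhibition gives the plain "negative".
    drive = [max(0.0, arousal - callosal * x) for x in left]
    # Stage 2: each right-side column is laterally inhibited by the
    # drive of its immediate neighbours.
    n = len(left)
    out = []
    for i in range(n):
        inh = sum(drive[j] for j in (i - 1, i + 1) if 0 <= j < n)
        out.append(max(0.0, drive[i] - lateral * inh))
    return out

r = right_boundary([0, 0, 0, 1, 1, 1, 0, 0, 0])
# Columns bordering the left-active region (indices 2 and 6) come out
# stronger than columns deep in the "negative" (indices 1 and 7), so
# the result traces the boundary of the left-side pattern. (The
# array's own ends also come out strong; that is just a finite-cortex
# edge artifact of this toy.)
```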
2) It seems to me that Cook must also require any two associated
concepts on a hemisphere to occupy physically close "cortical columns"
(or perhaps close clumps of columns) on the cortex. Only then could
associations be encoded via lateral inhibition, which acts only on
neighboring columns. That seems to place rather a strong restriction
on human thinking, such as a maximum number of associations with any
given concept, simply due to lack of space for more associated
concepts within the range of the lateral inhibition fibers. Or do I
misunderstand how lateral inhibition works in the brain?
3) Cook mentions several times in his book that he estimates that
there are two callosal neurons for each "cortical column" of 10-100
neurons (his "building block" of neural circuitry) in a hemisphere.
And his Figure 3.9 illustrating the organization of the brain model
implemented by his simulation programs shows one interhemispheric
connection in each direction between corresponding cortical columns in
each hemisphere. However, Cook never seems to explicitly state any
conclusions from that estimate, such as that the real cortical columns
are connected in this way. Perhaps in later work?
4) Cook also hints at some nonverbal context extraction functions that
may take place during other sorts of activities, such as producing
"body language." He notes that the context extraction in those cases
need not take place on the right side, but might develop on one side
or the other for different types of context due to developmental or
biological factors. Can anyone please describe some of these other
kinds of context extraction in more detail?
5) Cook never seems to get around to explaining the mechanism by which
the right hemisphere aids the left.
If, as he suggests, callosal connections run in both directions
between the hemispheres and are both topographic and inhibitory,
wouldn't each context concept activated by "first-order" associations
with the initial concept on the right side then, in turn, inhibit
its corresponding concept on the left side? I would think that
suppressing the concepts representing the context of a word or
sentence in the speech center would increase misunderstanding rather
than decrease it. On the other hand, maybe whatever gets activated in
the speech center gets spoken, so activating context concepts there
would only confuse the speech center about what it was supposed to
say. So suppressing them doesn't hurt.
Let's suppose that identical associations are encoded by lateral
inhibition paths in each hemisphere. Then suppressing the context
concepts on the left side would simply activate the original concept,
providing no additional information. I guess that would also
partially activate other concepts that share one or more of those
context concepts. Perhaps that explains some kinds of verbal mistakes
(e.g.,
saying "cat" for "dog" because they are both pets).
So, this doesn't seem to be the answer to how the brain makes use of
context to IMPROVE its understanding of language.
6) If, however, we consider the cumulative effects of multiple back-
and-forth cycles of this sort, we can see how the right hemisphere
could provide useful help to the speech center's parser in the left
hemisphere.
As an utterance is heard (or a sentence is read), each word activates
through the callosal connections a set of concepts on the right
hemisphere that represent possible contexts consistent with that word.
By the time most of the words in the sentence have been heard or read,
the correct context for the whole sentence will have been activated by
every significant word, while each erroneous context will have been
activated by only a few words (typically one). Therefore (with a slow
enough activation decay rate) the correct context will be activated
more strongly than the incorrect ones. Similarly, residual
activations from preceding sentences will also contribute more to
activation of the correct context for the current sentence--the
influence of discourse.
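A crude way to see the arithmetic: give each word a set of contexts it is consistent with, decay old activations slowly, and let each word add to its contexts. The word-to-context table below is entirely invented; the point is only that the shared context outgrows the spurious ones.

```python
# Crude illustration of cumulative context activation with slow decay.
# The word -> possible-contexts table is entirely invented.

CONTEXTS = {
    "bank":   {"finance", "river"},
    "loan":   {"finance"},
    "teller": {"finance"},
}

def accumulate(words, decay=0.9):
    act = {}
    for w in words:
        for c in act:                   # slow activation decay
            act[c] *= decay
        for c in CONTEXTS.get(w, ()):   # each word boosts its contexts
            act[c] = act.get(c, 0.0) + 1.0
    return act

act = accumulate(["bank", "loan", "teller"])
# "finance" was boosted by every word, "river" only by the ambiguous
# "bank", so the correct context ends up much more strongly activated.
```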
At each stage of sentence processing, the multiple contexts that have
been activated to different degrees on the right will be mapped back
across the corpus callosum to inhibit their corresponding copies on
the left side. The lateral inhibitions on the left that lead to
contextually consistent interpretations (meanings) of whatever the
next word may turn out to be would therefore be suppressed more than
those leading to inconsistent ones. This would, in turn, allow the
diffuse activation from the brain stem to activate (let's assume
partially) all the
meanings/words that would be appropriate next in the current context.
In this indirect way, the right hemisphere could provide guidance to
parsing processes in the left by biasing the parser towards choosing
the correct meaning for the next word when grammar and syntax alone
provide too little constraint.
In other words, context information from earlier words and sentences
would inhibit (on the left side) the "surrounds" of a set of possible
meanings for the next word (activating those meanings indirectly by
removing their lateral inhibitions) BEFORE that word is perceived.
Creating an expectation, as it were. We can often guess what word or
kind of word should come next in a sentence. In fact, the surprise of
hearing or reading something that does not fit our expectations can be
a source of humor.
At the same time, the context-free grammar and syntax knowledge in the
left side's speech center will be activating --directly in this case--
multiple possible meanings for the word currently being processed that
are consistent with grammar and syntax rules and word definitions.
The intersection of these two sets of meanings will be strongly
activated: They alone will receive not only less inhibition as a
result of parallel context extraction in the right hemisphere but also
more activation as a result of serial linguistic processing in the
left. If the left and right sides can "agree" on a meaning for the
word (i.e. if that intersection contains only a single meaning or
word), it will be accepted and the next word will be processed in the
same way.
Too little or too much overlap between the two sets of meanings would
result in commonly-experienced understanding problems. If too little,
the word wouldn't make sense at that point in the sentence; if too
much, the meaning of the word would still be ambiguous (two or more
meanings could be activated equally strongly and more strongly than
the others). Still, the process is robust enough to continue in the
face of some ambiguity. We might expect, though, that conflicting
alternative sentence parsings and meaning assignments would lead to
more diffuse context activation patterns on the right side, leading to
weaker activations of expected meanings on the left for the next word,
producing more uncertainty about subsequent parsings. If a word
should appear later that narrows down the possible contexts and
meanings sufficiently, the ambiguity will disappear.
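The agreement/too-little-overlap/too-much-overlap cases can be put in one line of set arithmetic. The word senses below are invented examples of my own, not anything from Cook.

```python
# Toy version of the proposed left/right "agreement" on a word meaning.
# The meaning sets are invented examples.

def select_meaning(context_expected, grammar_allowed):
    both = context_expected & grammar_allowed   # the intersection
    if len(both) == 1:
        return both.pop()       # the two sides agree: accept it
    if not both:
        return "nonsense"       # too little overlap
    return "ambiguous"          # too much overlap: wait for more words

context = {"bat(animal)", "cave", "echo"}   # dis-inhibited by the right side
grammar = {"bat(animal)", "bat(club)"}      # noun senses allowed by syntax
print(select_meaning(context, grammar))     # -> bat(animal)
```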
If a wrong parsing choice is made early in a sentence, and suddenly
becomes untenable as the rest of the sentence comes in, we have the
"garden path" sentence. The humor of puns and double-entendres also
depends on ambiguous parsings (which in the case of puns originate
from audible similarities between words). As the last words of a
sentence are read or heard, syntax constraints or an unexpected word
meaning may force a shift to what had been until then an unlikely,
weakly-activated context on the right side. That, in turn, could lead
to a reparsing of the whole sentence and sudden discovery of a
different overall meaning for the sentence or even a whole story.
This, of course, is what provides the surprise element in some jokes.
7) Free association might work like this: An initial concept on the
left side activates some associated concepts on the right by
suppressing their lateral inhibitions there. They, in turn, by suppressing
lateral inhibitions back on the left side, activate several other
concepts to varying degrees, one of which is chosen and spoken. That
word is fed back (e.g. by hearing one's own speech) to become the
initial concept for the next cycle. The left side's memory of
preceding words and the rule of the free-association game, "don't
repeat your words," allow free association to wander through the
semantic net without getting into cycles very often. At each cycle,
the left side picks the word with the SECOND highest activation for
output (the word just spoken would naturally have the highest, but you
know you're not supposed to say it again).
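Here is how I imagine that loop, with the memory rule generalized from "skip the word just spoken" (the second-highest rule) to "skip everything already said." The association table is invented, and so is every activation value.

```python
# Toy free-association loop (my reading of point 7, not Cook's code).
# The semantic association table is invented.

ASSOC = {
    "dog":  ["cat", "bone"],
    "cat":  ["dog", "milk"],
    "bone": ["dog", "skeleton"],
    "milk": ["cat", "cow"],
}

def free_associate(start, steps):
    chain = [start]
    for _ in range(steps):
        word = chain[-1]
        act = {word: 2.0}                 # the spoken word re-activates
        for a in ASSOC.get(word, []):     # itself most; its associates less
            act[a] = act.get(a, 0.0) + 1.0
        for prev in chain:                # left side's memory enforces
            act.pop(prev, None)           # "don't repeat your words"
        if not act:                       # no fresh associate left
            break
        # With only the just-spoken word removed, this is exactly the
        # "second highest activation" rule.
        chain.append(max(act, key=act.get))
    return chain

print(free_associate("dog", 10))   # ['dog', 'cat', 'milk', 'cow']
```

The memory of the whole chain is what keeps the walk from falling into the two-word cycle it would otherwise enter immediately (dog, cat, dog, cat, ...).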
8) This repeated mapping of concepts back and forth from one brain
hemisphere to the other --whether in reading, listening, or free
association-- reminds me of certain artificial neural net architectures
developed to implement Grossberg and Carpenter's Adaptive Resonance
theory (ART-1, ART-2).
The question of the stability of such an iterative process is
interesting. So that the reader can investigate it, Cook gives a
Microsoft BASIC program to simulate it and even to compute the
resulting artificial EEG signal produced by the simplified artificial
brain as it "thinks." Unfortunately, he doesn't show any results of
running this program, only refers to some earlier papers.
If such an iteration terminates, would it be because of accumulated
attenuation of the activation signals or because of semantic closure
of associations? One is reminded of certain dictionary studies
tracing circular definitions and definition clusters.
If it doesn't terminate, does that suggest a mechanism for the "monkey
chatter" of thoughts that Eastern meditation attempts to suppress?
I look forward to reading knowledgeable comments of readers of this
group on my rather lengthy layman's musings. Please include the
bionet.neuroscience newsgroup if you post follow-ups.
Grandpaw Bill's High Technology Consulting & Live Bait, Inc.