Dr. F. Frank LeFever on Fri.31.jul.1998 writes: [...]
In <35c077c6.0 at news.victoria.tc.ca> mentifex at scn.org (Mentifex) writes:
>>Stephen Wood <swood at papyrus.mhri.edu.au> on 30.jul.1998 from the
>>http://www.mhri.edu.au Mental Health Research Institute writes:
[...] Mentifex/Arthur responds to Stephen Wood:
>> The theory here appeals to logic. I do not know which of
>> your above cited varieties of neurons fits the bill for a long
>> fiber holding a concept. Nevertheless, it is in the nature of
>> a neuron to have a long axon with potentially up to 10K synapses.
>- - - - - - - -(snip) - - - - - - - - - - - - -
>> For example, the main departure point for this theory of mind is
>> the idea that logic dictates several things:
>> 1. Sensory perception MUST feed into linear memory channels.
>--I don't know whether it is the logic or the language that's
>--lacking here. Perhaps I don't know what you mean by "linear
>--memory channels". What's linear, the channel or the memory??
In theorizing about the brain-mind, we must introduce here
two important notions: logical equivalency, and modularity.
By "linear memory channels" I mean straightforwardly a linear,
chronologically continuous sequence of memory engrams laid down
in a series over a lifetime of experience. Now, before everybody
on the Internet objects with shouts of "floating memories" or
Pribramesque "holographic memory" or what-have-you, I explain
here that we will try to force ANY memory model into its
LOGICAL EQUIVALENT of a series of records of sensory experience.
Therefore, to answer Dr. LeFever, the channel is linear.
We also treat each sensory memory channel as a module of mind.
(If we get one channel wrong, it does not destroy the edifice.)
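A minimal sketch (my illustration, not Mentifex source code) of what a "linear memory channel" amounts to: engrams laid down in strict chronological series, with each sensory modality kept as its own module. The names `Engram`, `MemoryChannel`, and `record` are assumptions for the sake of the example.

```python
# Sketch of a linear memory channel: a chronologically continuous
# sequence of engrams, one independent channel (module) per modality.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Engram:
    time: int          # millisecond timestamp of the experience
    content: str       # whatever the sense reported at that instant

@dataclass
class MemoryChannel:
    modality: str                        # e.g. "visual", "auditory"
    engrams: List[Engram] = field(default_factory=list)

    def record(self, time: int, content: str) -> None:
        # Engrams are laid down strictly in series over a lifetime:
        # no record may be inserted earlier than the last one.
        assert not self.engrams or time >= self.engrams[-1].time
        self.engrams.append(Engram(time, content))

# Each channel is a module of mind: getting one channel wrong
# does not destroy the others.
mind = {m: MemoryChannel(m) for m in ("visual", "auditory")}
mind["visual"].record(0, "red ball")
mind["auditory"].record(5, "word 'ball'")
```

The point of the sketch is only logical equivalence: any "floating" or "holographic" memory model is here forced into its logical equivalent of a time-ordered series of sensory records.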
Dr. LeFever rightfully complained in recent posts that he could
not see any vestige of Hubel and Wiesel (feature extraction) in
the overarching Mentifex mind diagrams, but that lack of detail
was due simply to the depiction of vision as one entire module.
>--What we know of sensory perception is that the channels are
>--divergent, parallel, and only slightly redundant (i.e. the
>--channels carry different unique information based on the same
>--sensory input), en route to the hippocampus. Increasingly
>--"permanent" memory seems to be based on a return of this
>--information, in some form, to the region just traversed by these
>--inputs; whether to precisely the same fibers is not known yet.
>--Truth is stranger than fiction--sorry, I mean "logic".
>> 2. There is obviously intermodal communication among the channels.
>--Not necessarily directly between channels. Would you accept "via
>--structures such as hippocampus and/or amygdala"?
Certainly, because I will accept all empirical findings.
>> 3. Concepts MUST reside elsewhere than in the sensory channels.
>--Definition of "concept"? Not obvious why they MUST reside
>--elsewhere. Some current work suggests reactivation of the same
>--cells involved in original percepts.
In order to discuss the concept of concept, we need a diagram:
/^^^^^^^^^^^\ Brain-Mind and Robot Architecture /^^^^^^^^^^^\
/visual memory\ ________ semantic / auditory \
| /--------|-------\ / syntax \ memory |episodic memory|
| | recog-|nition | \________/------------|------------\ |
| ___|___ | | |flush-vector | ______ | |
| /image \ | ___V______V_ word-fetch | / \ | |
| / percept \---|---/ library of \--------------|--/ stored \| |
| \ engrams / | \ concepts / for thinking | \ words / |
| \_______/ | \__________/ in language | \______/ |
A nerve fiber can be either ON or OFF, and, yes, it can vary its
rate of firing. When a concept nerve fiber is ON, it is activating
its embodied concept by semi-activating all the associative tags
which collectively constitute that concept. Please notice a
major difference (and all objections are welcome) between a fiber
holding a concept, and a fiber holding engrams in a sensory
memory channel. Any synaptic node on a sensory memory fiber
records a particular element of a particular memory of a
particular millisecond of experience. On a concept fiber,
however, all synaptic nodes are logically equivalent. Concepts
are logically punctiform, but it would be physically impossible
for a punctiform concept cell to relate to the rest of the brain.
Most conveniently for the evolution of mind, neurons have long
axons and a tree-structure of dendrites, so that a logically
punctiform concept fiber can interact via associative tag with
hundreds if not thousands of fixed sensory engrams, and with
other concepts, and with linguistic control-hierarchies.
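The contrast drawn above can be sketched in code. This is my hypothetical illustration, not anything from the Mentifex model itself: the class name `ConceptFiber` and the method names are invented. The key property is that every associative tag on the fiber is logically equivalent, so firing the fiber semi-activates all of them at once, whereas each synapse on a sensory memory fiber holds one unique engram.

```python
# Sketch of a logically punctiform concept fiber: the fiber is ON or
# OFF as a whole, and all of its synaptic tags are interchangeable.

class ConceptFiber:
    def __init__(self, name: str):
        self.name = name
        self.tags = []        # links to engrams, other concepts, syntax
        self.active = False   # the fiber is either ON or OFF

    def tag(self, target: str) -> None:
        # All tags are logically equivalent: adding one anywhere on
        # the fiber has the same effect (unlike a sensory fiber, where
        # each synapse records one particular millisecond of memory).
        self.tags.append(target)

    def fire(self):
        self.active = True
        # Activating the concept semi-activates every associative tag
        # that collectively constitutes it.
        return [f"semi-activated: {t}" for t in self.tags]

ball = ConceptFiber("ball")
ball.tag("visual engram @ t=0")       # fixed sensory engram
ball.tag("auditory engram @ t=5")     # fixed sensory engram
ball.tag("concept: toy")              # another concept
result = ball.fire()
```

The long axon with thousands of synapses is what lets one logically point-like concept reach hundreds or thousands of fixed engrams, other concepts, and linguistic control-hierarchies.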
>> 4. THEREFORE, concepts must lie EITHER in the neurons linking
>> the separate sensory memory channels, OR in neurons MEDIATING
>> the linkage of sensory modalities. The second choice prevails.
>--Not clear what you mean by difference between neurons linking
>--them and neurons mediating the linkage. In either case, it is
>--a logical error to say the concepts "lie" or "reside" IN them.
"Linking" is a straight connection, and "mediating" is any form
of processing of information astride the links among channels.
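The linking/mediating distinction above reduces to a one-line contrast, sketched here with invented function names purely for illustration:

```python
# "Linking" passes a signal straight across between channels;
# "mediating" interposes some processing astride the link.

def link(signal: str) -> str:
    return signal                      # straight connection, unchanged

def mediate(signal: str, process) -> str:
    return process(signal)             # information is processed en route

signals = ["vision: ball", "touch: round"]
direct = [link(s) for s in signals]
processed = [mediate(s, str.upper) for s in signals]
```

On this view, concepts live not in the bare wires between channels but in the mediating neurons, the ones that process what crosses the links.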
>- - - - - - - -(snip) - - - - - - - - - - -
>> People think across a whole range of modalities; we are concerned
>> here with the generation of a COMMUNICABLE thought. If you are
>> attending a lecture and formulating in your mind a question to ask,
>> your auditory memory channel HEARS each surface-structure version
>> of the sentence that you are generating in the form of a question.
>> The auditory memory channel is your only SELF-PERCEIVING channel:
>> whatever we think verbally, we also experience verbally -- in a
>> creative loop of initial formulation and subsequent refinement.
>--Well, here we have the root of the problem: as with centuries
>--of fruitless speculation before development of scientific inquiry,
>--you have elevated your INTROSPECTION (not even an honest naive
>--introspection, but one which the Introspectionists would have said
>--involved The Stimulus Error, influenced by whatever your readings
>--led you to expect) to the level of a Self Obvious Truth.
The Mentifex mind-model is not an INTROSPECTIVE theory, because
it goes into far more detail than introspection could ever supply.
Instead, the model is a black-box approach to the
brain-mind: What buildable mechanisms would produce the same
observable behavior of thought and language in a thinking AI?
Arthur T. Murray (mentifex at scn.org) http://www.scn.org/~mentifex