Spreading Activation and Natural Language/Speech

Arthur T. Murray uj797 at victoria.tc.ca
Sat Apr 29 12:38:31 EST 2000


Artem Pyatakov, artemp at home.com, wrote on Sat, 29 Apr 2000:

AP:
> I admire what Arthur T. Murray is trying to do with the human
> mind and the world of AI.  However, I am still curious and have
> a couple of questions regarding spreading activation and natural
> language generation.  (For both Murray and others)
ATM:
Thank you.  The questions that you ask are very astute, and they
go right to the heart of what we are trying to do in these programs:
http://www.geocities.com/Athens/Agora/7256/mind4th.html Mind.Forth;
http://www.scn.org/~mentifex/mindrexx.html 26nov1994 Amiga Mind.Rexx;
http://www.virtualentity.com/mind/vb/ Mind.VB in Visual Basic.

Since the questions raised below deal with spreading activation,
I (Arthur) would like to state in advance that, as an independent
scholar in artificial intelligence, I arrived in May of 1979 at a
http://www.geocities.com/mentifex/theory5.html theory of mind
which happened to be based on spreading activation.  However, I
was not familiar with the term "spreading activation" as such
until ca. December 1992, when I came across a paper by Gary S. Dell,
"A spreading-activation theory of retrieval in sentence production,"
published in 1986 in Psychological Review, Volume 93, pp. 283 et seq.

AP:
> Is there any way spreading activation itself can account for
> the grammar/syntax of language? In other words, could it be
> that the subject just has the highest activation "number" in
> an NP-VP sentence and that's why we put it first. (If so then
> how do other non NP-VP languages work?)
ATM:
I can imagine that, as human language evolved, there could easily
have been a time when no syntactic superstructure had yet developed,
and the mechanism that you describe above could indeed operate.

However, I personally feel that there is strong evolutionary
pressure for a brain to develop not only concept-fibers for all
the parts of speech, but also a governing superstructure of syntax
to keep track of each part of speech as a class and to "flush out"
a candidate word from each part of speech in the correct sequence
for a sentence.
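
To make this concrete, here is a minimal sketch in Python (not
the actual Mind.Forth code; the words, activation numbers, and the
SVO template are invented for illustration) of a syntactic
superstructure "flushing out" the most active candidate from each
part-of-speech class in the correct sequence:

  # Hypothetical toy lexicon: (word, part of speech, activation).
  lexicon = [
      ("robot",  "noun", 48), ("cat",   "noun", 31),
      ("sees",   "verb", 52), ("fears", "verb", 17),
      ("mouse",  "noun", 40),
  ]

  def flush_out(pos, used):
      """Return the most active word of the given part of speech
      that has not already been spoken in this sentence."""
      pool = [(act, w) for (w, p, act) in lexicon
              if p == pos and w not in used]
      return max(pool)[1] if pool else None

  def generate(template=("noun", "verb", "noun")):
      """Walk a fixed SVO template, letting the syntax template
      harvest the winner that spreading activation has primed."""
      sentence, used = [], set()
      for pos in template:
          word = flush_out(pos, used)
          sentence.append(word)
          used.add(word)
      return " ".join(sentence)

  print(generate())   # -> "robot sees mouse"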

As an aside, I would like to invite people to imagine how easy it
must have been for nerve fibers, originally carrying raw sensory
data into sensory memory channels, to "break away" from the tract
of data-transmission fibers and become loose or "undedicated"
fibers.  Such fibers could naturally fall into the much more
sophisticated role of holding an abstract semantic concept over a
long (life-)span of time, rather than merely a snapshot engram of
a percept at a single moment.

To answer the previous question more fully:  a syntactic
superstructure really does help the mind both to "flush out" the
initial subject of an utterance and to distinguish between
subjects and direct objects, because the very search for both
items is, I suspect, mediated by the re-activation of verbs.
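
Here is a hedged sketch of that verb-mediated search, again with
invented concepts and weights:  the chosen verb re-activates the
nouns linked to it, but spreads more strongly along its "agent"
link than along its "patient" link, so subject and direct object
emerge with distinguishable activation levels:

  # Hypothetical links from a verb concept to noun concepts,
  # labeled by role; the link strengths are invented.
  links = {
      "sees": [("robot", "agent", 0.9), ("mouse", "patient", 0.7)],
  }

  activation = {"robot": 10.0, "mouse": 10.0}

  def reactivate(verb, boost=20.0):
      """Spread activation from the verb to its linked nouns,
      scaled by link strength, and note each noun's role."""
      roles = {}
      for noun, role, strength in links[verb]:
          activation[noun] += boost * strength
          roles[noun] = role
      return roles

  roles = reactivate("sees")
  # The stronger agent link wins the activation race, so the
  # most active noun surfaces as the subject of the sentence.
  subject = max(activation, key=activation.get)
  print(subject, roles[subject])   # -> robot agent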

As a further aside, Mind.Forth and its derivatives still need
fine-tuning of the "spreadact" process, and I hope to discipline
myself to post less on Usenet and code more AI in the near
future -- though I am glad to discuss the AI on Usenet.

AP:
> OR do you always have to apply some kind of Chomskyan "structure"
> to the spreading activation that just supplies the ideas? If so,
> how would you pick the right Chomskyan structure? (For example:
> Would you ask a question or make a statement?)
ATM:
The Chomskyan superstructure is really quite essential, for
several reasons.  (I think it distinguishes us from other animals
that almost but not quite use language.)  Something has to
"harvest" the most active lexical item among the nouns, the verbs,
and so on, and it might as well be an authoritative node on a
Chomskyan tree.  Furthermore, the presence of a group of
superstructures for, say, English predisposes a mind-design to
include superstructures for any (small) number of additional
languages, making the design polyglot.

In other words, Structure plus Vocabulary equals One Language.
The vocabulary can be shared or at least overlap among languages,
but the language-specific superstructures permit the mind of
man or machine to think entirely "within" multiple languages.
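
A minimal sketch of that "Structure plus Vocabulary" idea, with an
invented bilingual micro-lexicon:  one shared concept layer feeds
two language-specific word-order templates (English SVO versus a
Japanese-style SOV), so swapping the superstructure swaps the
language in which the same thought is expressed:

  # One shared concept layer with per-language surface words
  # (the words and language codes are invented for illustration).
  concepts = {
      "AGENT":  {"en": "robot", "ja": "robotto"},
      "ACTION": {"en": "sees",  "ja": "miru"},
      "OBJECT": {"en": "mouse", "ja": "nezumi"},
  }

  # Language-specific superstructures: word-order templates.
  syntax = {
      "en": ("AGENT", "ACTION", "OBJECT"),   # SVO
      "ja": ("AGENT", "OBJECT", "ACTION"),   # SOV
  }

  def think_in(lang):
      """Fill the chosen language's template from shared concepts."""
      return " ".join(concepts[slot][lang] for slot in syntax[lang])

  print(think_in("en"))   # -> "robot sees mouse"
  print(think_in("ja"))   # -> "robotto nezumi miru"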

As for how we would "pick the right Chomskyan structure," in
computer programming it would be easy to establish that, out of
all the structures present in the AI, the first structure, or the
sole structure, to get its nodes filled with satisfactory
"fillers" (i.e., chosen words) would be the winning structure and
would seize control of generation.
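
As a hedged sketch of that competition (the candidate words, the
two templates, and the activation threshold below are all
invented), the first structure whose every node finds a
satisfactory filler seizes control of generation:

  # Invented candidate pool: (word, part of speech, activation).
  candidates = [("sees", "verb", 52), ("robot", "noun", 48),
                ("mouse", "noun", 40), ("why", "wh-word", 5)]

  # Competing sentence structures: statement versus question.
  structures = {
      "statement": ("noun", "verb", "noun"),
      "question":  ("wh-word", "verb", "noun"),
  }

  THRESHOLD = 20   # a filler is satisfactory only above this level

  def try_fill(template):
      """Try to fill every node; return the words, or None on failure."""
      filled, used = [], set()
      for pos in template:
          pool = [(a, w) for (w, p, a) in candidates
                  if p == pos and a >= THRESHOLD and w not in used]
          if not pool:
              return None   # a node went unfilled; this structure loses
          word = max(pool)[1]
          filled.append(word)
          used.add(word)
      return filled

  # The first (or sole) structure to fill all of its nodes wins.
  for name, template in structures.items():
      result = try_fill(template)
      if result:
          print(name, "->", " ".join(result))
          break   # -> statement -> robot sees mouse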
               
AP:   
> I am really curious to know about the questions above: thank you
> for your time.         
ATM:
And thank you, Artem Pyatakov, for asking these excellent questions
not by e-mail but right here on Usenet where the real experts can
join in.  (Are you listening, Dr. Noam Chomsky?  We are taking your
work and building artificial minds above, below and beside it.)
   
> Artem Pyatakov

Arthur T. Murray



