Knowledge Base Or Bust

Arthur T. Murray uj797 at victoria.tc.ca
Sun Dec 16 12:30:56 EST 2001

"Owen Nieuwenhuyse" <nieuweo at ezysurf.co.nz> wrote on 16.DEC.2001:
> Arthur T. Murray <uj797 at victoria.tc.ca> wrote in message [...]

>> Human:  cats eat
>> Robot:  CATS EAT BUGS
>>         10   41  10
>> The AI gives the wrong answer in the fourth exchange because the
>> concept of "bugs" has retained too high an activation.  [...]

> ON:
> What do you use these "activation levels" for?
The activation-levels determine which concepts -- by dint of high
activation -- will be included in a sentence of thought generated
by the interaction of Chomskyan syntax and a conceptual mindcore.
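As a rough illustration of that selection step, the sketch below picks the most highly activated noun, verb, and noun to fill a subject-verb-object sentence frame. The concept records, activation numbers, and function name are assumptions for illustration only, not the actual Mind.Forth data structures:

```python
# Hypothetical sketch: concepts with high activation win inclusion
# in a generated SVO sentence.  All names and numbers are invented
# for illustration; this is not the Mind.Forth implementation.

def generate_svo(concepts):
    """Pick the most active noun, verb, and remaining noun as S-V-O."""
    nouns = [c for c in concepts if c["pos"] == "noun"]
    verbs = [c for c in concepts if c["pos"] == "verb"]
    subject = max(nouns, key=lambda c: c["act"])
    verb = max(verbs, key=lambda c: c["act"])
    # Exclude the chosen subject when selecting the object.
    obj = max((c for c in nouns if c is not subject),
              key=lambda c: c["act"])
    return subject["word"], verb["word"], obj["word"]

concepts = [
    {"word": "CATS", "pos": "noun", "act": 45},
    {"word": "EAT",  "pos": "verb", "act": 35},
    {"word": "BUGS", "pos": "noun", "act": 41},  # residually too high
    {"word": "FISH", "pos": "noun", "act": 10},
]
print(generate_svo(concepts))  # ('CATS', 'EAT', 'BUGS')
```

Because "BUGS" retains a higher activation (41) than "FISH" (10), the sketch reproduces the wrong-answer behavior quoted above.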

Albert van der Horst, in a rather witty post available on-line at
http://www.mailgate.org/comp/comp.lang.forth/msg18815.html (q.v.)
intimates that there has not been much improvement over the years
in the Mind.Forth sample dialogs (and the AI remains a Forth AI),
but actually during the outgoing HAL Memorial Year of 2001, a lot
of progress has been made in tightening down the AI algorithm of
http://mind.sourceforge.net/spredact.html "Spreading Activation".

There one sees a diagram of how activation-levels are supposed to
work as conceptual associations ripple across the artificial Mind.

As my slow and feeble mind zeroes in on the solution to the
daunting problem, I take heart by realizing that most of the mindgrid
may consist of passively inert memory traces governed by a small
superstructure of active logic that flushes or impels spikes of
activation through the associative memory in search of the most
logical or entrenched or emotionally appealing chains of thought.
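One rippling pass of such spreading activation might be sketched as below; the link table, decay factor, and activation values are assumptions for illustration, not figures taken from the Spreading Activation document:

```python
# Illustrative spreading-activation pass over a tiny associative
# network.  The decay factor of 0.5 and the link weights are
# assumptions, not values from the actual AI.

def spread(activation, links, decay=0.5):
    """Propagate each concept's activation to its neighbors, scaled by decay."""
    new = dict(activation)
    for src, level in activation.items():
        for dst in links.get(src, []):
            new[dst] = new.get(dst, 0) + level * decay
    return new

links = {"cats": ["eat"], "eat": ["bugs", "fish"]}
act = {"cats": 40, "eat": 0, "bugs": 0, "fish": 0}
act = spread(act, links)  # first ripple: cats -> eat
act = spread(act, links)  # second ripple: eat -> bugs, fish
```

After two passes the activation has rippled two associative links outward from "cats", leaving a spike on "eat" and weaker traces on "bugs" and "fish".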

> Do you have a simple explanation of how they relate
> to axiom processing?
What you mean by "axiom processing" is not clear to me but I will
assume that you are inquiring about the ability of the AI Mind to
reason logically, as for example with syllogisms.  Unfortunately,
more complex grammar structures, such as negation and logically
meaningful quantifier words like "all" or "no", must first be
introduced before the AI Mind will be able to reason logically.

The work must proceed in stages, after the initial knowledge base
works reliably.  My posting of these progress reports, of course,
allows any interested programmer to set up a separate website and
to display independent work on the same public-domain-AI problems.

> Do you have a detailed Q/A dialog structure?
Currently the AI Mind is only able to parse three-word sentences
of a subject-verb-object (SVO) structure, and therefore the only
way to ask the AI a question is to enter a statement of three or
fewer words, such as "cats eat what" or "cats eat" [RETURN].
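A minimal sketch of that three-word exchange loop is given below, assuming a toy knowledge base of (subject, verb, object) triples; the KB contents, function name, and padding of short inputs with "what" are illustrative assumptions, not the actual AI Mind internals:

```python
# Toy sketch of the three-word SVO exchange: a "what" object makes
# the input a question; otherwise the triple is learned as a fact.
# Assumes two- or three-word input; everything here is illustrative.

kb = [("cats", "eat", "fish")]  # invented fact, not the real KB

def respond(line):
    words = line.lower().split()
    if len(words) < 3:
        words.append("what")           # "cats eat" -> "cats eat what"
    subj, verb, obj = words[:3]
    if obj == "what":                  # question: look up the object
        for s, v, o in kb:
            if s == subj and v == verb:
                return f"{s} {v} {o}".upper()
        return "I DO NOT KNOW"
    kb.append((subj, verb, obj))       # statement: learn the new fact
    return f"{subj} {verb} {obj}".upper()

print(respond("cats eat what"))  # CATS EAT FISH
```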

Although any upcoming syntactic enhancements must be hand-coded
into the A.D. 2001 newborn AI, spiral learning structures should
eventually be able to learn any human language as a baby learns:
by adding and deleting nodes of syntactic elements on a spiral.
A loop or circle is too closed to learn anything, but a spiral
is a loop carried forward over time and thus capable of change.

Arthur T. Murray, http://www.scn.org/~mentifex/
http://www.nanomagazine.com/nanomagazine/01_10_24 - October 2001
http://www.frontiernet.net/~wcowart/murray_article "Building AI"
http://mind.sourceforge.net/theology.html - "The Theology of AI"

More information about the Neur-sci mailing list
