Mind.Forth Programming Journal: 17 August 1999

Arthur T. Murray uj797 at victoria.tc.ca
Tue Aug 17 21:51:20 EST 1999


CODING THE FINAL ALGORITHM

First we pretty up a few details such as replacing Amiga instances
of "ENDIF" with FPC-style THEN.

We would like to code a new SPREADACT ahead of HOLODYNE (Forth
requires a word to be defined before it is called), but the name
SPREADACT is already in use -- even though we plan to rip out the
old code quite soon.  Therefore let us code TIME-CLOUD.

OK, we have moved TABULARASA, IDEADAMP and FIBERDAMP from Amiga
Screen #6 up into Screen #5, and into Screen #6 we have stubbed
TIME-CLOUD.
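
The stub is just an empty colon definition, so that later screens
can already compile calls to the name; roughly:

     \ Screen #6 placeholder; the real spreading-activation
     \ code gets filled in below:
     : TIME-CLOUD ( -- ) ;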

Screen #9 HOLODYNE seems to differentiate between what to do
during the "attention" phase and what to do during the "reentry"
phase.

Now in the "attention" phase part of HOLODYNE we have inserted a
call to TIME-CLOUD, but somewhere we need to find the "pre" and
"seq" tags that will tell us where to spread the activation.

Wow!  I trapped "pre" and "seq" in HOLODYNE; then I displayed them
in TIME-CLOUD.  During the ensuing test-run, it is most impressive
to see the AI declaring the syntactically related concepts, because
you feel that you are witnessing the internal process of association
in the artificial mind.

We do not want to set up runaway spirals of spreading activation.
A one-off spread is sufficient, and may still become part of a
wider, more extended spiral.
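
One simple way to keep the spread one-off would be a latch that
the main loop clears once per cycle; the names here are
hypothetical:

     VARIABLE spread-done     \ hypothetical once-per-cycle latch

     : MAYBE-SPREAD ( -- )    \ hypothetical wrapper
       spread-done @ 0= IF
         TIME-CLOUD  1 spread-done !  THEN ;

     \ and at the top of each thought cycle:   0 spread-done !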

Aw, gee, about a paragraph ago I started thinking about posting
these deliberations to Usenet.  I could get away with such a post,
because many people would think, "More nonsense from Mentifex,"
and quickly redirect their attention, while the less intelligence-
quotiently challenged might find it rather fascinating to see how
this coding of the AI mind proceeds.

I really am on the home stretch to AI, you see.  The July 1999
releases of
http://www.geocities.com/Athens/Agora/7256/m-forth.html Amiga and
http://www.geocities.com/Athens/Agora/7256/mind-fpc.html IBM-clone
Mind.Forth-28 showed the various mechanisms of the artificial mind,
but I was still casting about for the proper sequences of the
activations of the concepts.

Then on Wed.11.Aug.1999 I did some review of 26.Nov.1994 Mind.rexx
and some theorizing on paper, and suddenly I saw very clearly how
associations would need to spread across the mindgrid.  The new
clarity meant that I would have a very definite algorithm,
expressible in many different programming languages, and now I am
in the midst of coding that final algorithm that should result in
an AI that most observers would regard as machine intelligence.

Now let's get back to coding.  TIME-CLOUD may need to imitate
HOLODYNE, because in Forth I do not see how HOLODYNE could call
itself, especially in a useful and non-spiraling fashion.
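
(The underlying Forth point: while a colon definition is being
compiled, its own name is not yet visible, so a word cannot simply
call itself by name; ANS Forth provides RECURSE for deliberate
self-calls, as in the toy word below, and an unguarded RECURSE is
exactly the sort of spiral to avoid.)

     \ toy example of explicit recursion via RECURSE:
     : COUNTDOWN ( n -- )
       DUP .  DUP 0> IF 1- RECURSE ELSE DROP THEN ;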

There!  It took me about five lines of Forth code in Screen #6
TIME-CLOUD to find and identify the associated concepts, make
sure they were non-zero, and increase their activation by the
arbitrary amount of twenty-five (25) units, which is above the
twenty-unit level that I will probably use as a threshold for
inclusion in a thought, and below the thirty-unit level that I
am already using in HOLODYNE.  Now let's run Mind.Forth and see
whether all Hades breaks loose.
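
But first, for the record, here is roughly what those five lines
amount to (a sketch in modern ANS style, reusing the pre and seq
variables from the sketch above; the activation table and the word
BOOST are placeholders rather than the literal Screen #6 source):

     CREATE activation 100 CELLS ALLOT    \ placeholder activation table
     activation 100 CELLS ERASE           \ start all activations at zero

     \ 25 units lands above the planned 20-unit inclusion threshold
     \ and below the 30-unit level already used in HOLODYNE.
     : BOOST ( concept# -- )
       DUP 0> IF  CELLS activation +  25 SWAP +!  ELSE DROP THEN ;

     : TIME-CLOUD ( -- )
       pre @ BOOST     \ spread activation back to the "pre" concept
       seq @ BOOST ;   \ and forward to the "seq" concept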

     "cold"  "1 load"  [ "loading..." ]  "MIND"

Huh?  What an anticlimax!  I can't tell if it's thinking or not.
I broke into its infinite loop and as the User I typed in,
"Horses eat hay."  Not yet being put to the test, Mind.Forth made
some random response that I can't even remember.  For variety, I
broke in and typed, "Cats like fish."  Once again Mind.Forth
showed its single-digit IQ.

Hey, I had better stop right now and save this version, because
it may be the first thinking version.  It is now 6:15 p.m., and
it had better be worth giving up my usual practice of watching
poor world-weary Peter Jennings on ABC News.  OK --
possible working AI saved to disk.

Having entered two facts into the knowledge base (KB) of the
Forthmind, I then played not God but Underwriters Laboratories
and I subjected Mind.Forth to a grueling test with the following
one-liner:  "Horses eat what?"

Thus spake Mind.Forth:  "HORSES EAT HAY"

But I haven't even ripped out the convoluted, if-in-doubt-fake-it
"mindcore pathways" code from July of 1999, and I was half expecting
that "junk-DNA" code to interfere with the AI.  Also, I should
perhaps not have halted the program before asking it what cats
like to eat.  Oh well, at least I can inspect the three arrays now.

Hmm.  The AI responses that I could not remember a moment ago
are actually recorded in the mindcore "psi" array.  After correctly
answering, "HORSES EAT HAY," Mind.Forth falsely went on to declare,
"I EAT FISH."  The "uk" and "ear" arrays show correct functioning,
but no fresh insights.  Perhaps the spurious statements will
disappear when threshold activation levels are enforced:  "If you
want to be a valid thought, you have to be a strong association,
not a weak one."
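
A threshold gate of that sort could be as simple as the sketch
below, again using the placeholder activation table from above:

     20 CONSTANT threshold    \ planned minimum for inclusion in a thought

     : STRONG-ENOUGH? ( concept# -- flag )
       CELLS activation + @  threshold > ;

     \ a candidate concept would enter the generated sentence
     \ only when STRONG-ENOUGH? returns true.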

Mind.Forth as a Cyc-style KB will have an even better grip on the
world -- or ontology -- when roboticists introduce parallel sensory
input channels beyond the currently isolated auditory memory channel.

Today http://www.scn.org/~mentifex/aisource.html Mind.Forth has
made one small step for a program but one giant step for robot-kind.


