Comments for Mind.forth AI BOOTSTRAP Scr #16

Mentifex mentifex at scn.org
Mon Dec 14 10:31:43 EST 1998


Jonathan Altfeld "jonathan at altfeld.com" in alt.psychology.nlp wrote:

> Hello, Mentifex.  Your posts on Mind.forth are interesting.
> I went to one of the source code sites and looked around.
> In particular I found this one page of note...

> http://www.geocities.com/Athens/Agora/7256/mindmap.html

> In this flowchart for Mind.rexx AI (30 Nov 94), there is
> mention of an as-yet uncoded function called Metempsychosis().
> The specification of the function indicates its purpose is
> to reload saved states of mind.

Mentifex/ATM-
Yes, you are right.  And the Greek root of "metempsychosis"
is that the "psyche" metastasizes (like cancer!), moving
in (Greek "en" or "em") to a new place or places from its origin.
In other words (now I will disgust scientists): soul travel.
But it's the Greek word I love, not the idea of soul travel.
(In January 1995, when I spammed several dozen writers at WIRED
Magazine with that Mindmap diagram, the uncoded Metempsychosis
function infuriated one of the self-righteous writers there.)

> If I'm properly interpreting what is meant by that description,
> then this spec parallels the NLP(tm) processes known as
>"setting & firing anchors," or simply put, "anchoring."  People
> reading alt.psychology.nlp do know something about that process.

Mentifex/ATM-  But I must confess ignorance about those terms.

> *  This function call was in Mind.rexx (Amiga only), not Mind.forth.
> *  Has Metempsychosis() been adopted into the Mind.forth model?
>    If so, has it been coded?

Mentifex/ATM-
The function Metempsychosis() has not yet been coded in either
Amiga Mind.rexx or Amiga Mind.forth, because it must necessarily
be an add-on, to be coded only after the basic AI is complete.

The appeal of the Metempsychosis() function is that, like a meme,
an AI mind could replicate itself anywhere and everywhere.
Especially with a language like TeleScript from General Magic,
a conscious AI could send its source code anywhere in the world,
then also send its experience files, and thus reconstitute itself
not only in one remote place but in thousands of places all at once.
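
Purely as an illustration, and emphatically not as code that
exists anywhere in Mind.rexx or Mind.forth, the whole maneuver
boils down to a four-step sequence.  The Forth word names below
are hypothetical placeholders that merely print what each step
would do:

   \ Hypothetical outline only; no such words exist in Mind.forth.
   : SAVE-STATE     ( -- )  ." save psi{ } uk{ } ear{ } to files" CR ;
   : SEND-CODE      ( -- )  ." transmit the AI source code" CR ;
   : SEND-STATE     ( -- )  ." transmit the saved experience files" CR ;
   : RESUME-MIND    ( -- )  ." remote host reloads code and arrays" CR ;
   : METEMPSYCHOSIS ( -- )  SAVE-STATE SEND-CODE SEND-STATE RESUME-MIND ;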

> I don't know the Forth language, & I wouldn't want to code
> the function, but I would contribute to a discussion of the
> processes involved in anchoring, if it would help someone
> working on/in the Mind.forth project to code Metempsychosis().

Mentifex/ATM-
Thank you for volunteering.  I am trying to bring Mind.forth AI
to the point where other people, especially the robot-makers and
the *real* programmers (as opposed to myself the amateur), will
step in and show us all the real way to code such a thing, not
only in Forth or REXX but in whatever language appeals to them.

> The Mind.forth project assumes a very small memory footprint
> if I read correctly, so you're dealing with a grossly simplified
> AI Mind.  It might be difficult to decide what to exclude from
> the basic process of anchoring.  Unless the project has advanced
> since the present specs were published, then you're still using
> only 1 primary sensory representational system (auditory), so
> that limits the manner in which anchors can be set/fired within
> the Mind.forth model, fortunately.

Yes, the model at present uses only audition because it is indeed
a *linguistic* mind-model.  Consequently, there is a danger that
the AI mind will make rational statements but not know what it is
talking about, in an experiential sense.  I do hope that machine-
vision specialists will step in and attach eyesight to Mind.forth
or its Hans-Moravecian mind children.  As for a possible nose{ }
channel, there are already companies marketing olfactory chips.

> The reloading of saved states of mind presupposes the initial
> saving of various states of mind.  Is this feature coded?

The 26nov1994 Amiga Mind.rexx did create savable output files
of what in current Mind.forth are the three levels of mindgrid:

mindcore array psi{ }   English array uk{ }   auditory array ear{ }

Here I (Arthur) must confess, though, that I have not yet figured
out how to create and save Mind.forth output files in the simple
Forth of Amiga Library Disk (Fred Fish) #977, Mountain View Press
Forth.  Also, I am hurrying to code the AI itself and am not yet
trying to save output.
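
For whoever does take up the task: in a modern ANS Forth with the
file-access word set (which the old Mountain View Press Forth on
Fish Disk #977 may well lack), a raw dump and reload of one array
might look roughly like the sketch below.  The names EAR and #EAR
and the size of 256 cells are stand-ins for illustration, not the
real dimensions of ear{ }:

   \ Sketch only: save and reload one memory array with ANS Forth
   \ file-access words.  EAR and #EAR are stand-ins for ear{ }.
   256 CONSTANT #EAR
   CREATE EAR   #EAR CELLS ALLOT
   VARIABLE FID

   : SAVE-EAR  ( -- )                  \ raw binary dump to ear.dat
      S" ear.dat" W/O CREATE-FILE THROW  FID !
      EAR #EAR CELLS  FID @ WRITE-FILE THROW
      FID @ CLOSE-FILE THROW ;

   : LOAD-EAR  ( -- )                  \ refill the array from ear.dat
      S" ear.dat" R/O OPEN-FILE THROW  FID !
      EAR #EAR CELLS  FID @ READ-FILE THROW DROP
      FID @ CLOSE-FILE THROW ;

The same pair of words, repeated for psi{ } and uk{ }, would give a
complete save and restore of all three mindgrid levels.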

> If not, then what triggers could cause the AI Mind to save
> various states of mind, or, what reasons would the AI Mind.forth
> code have to save present state?

Every imaginable trigger would cause the saving of states of mind.
First, let's make sure that we are talking about the same thing.
I am not talking about moods or attitudes or states of grace.
Rather, the software deals with the actual contents of the arrays
which gradually record the sensory and internal experience of the
AI mind as it interacts with the external world and as it reflects
internally.  I will probably get flamed, but to me it is axiomatic
that, in a computer program, all data can be caught, intercepted,
and saved so as to enable the re-creation of an identical program-
state not only at a later time but at other physical locations,
even multiple such locations, so that an expert AI mind-state
could be re-created at a thousand different points of light, George.
[Remember US Pres. George Bush and his "thousand points of light"?]

On a mundane level, it would be good to save outputs and array
contents for diagnostic purposes, but right now I am simply
displaying such data on-screen in the course of EXAMINE Scr #17.
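
(For the curious, a throwaway diagnostic word in that spirit,
emphatically not the actual Scr #17 code, could print any array
given its address and length in cells:

   \ Sketch only, not the real EXAMINE: show index and contents.
   : .ARRAY  ( addr len -- )           \ usage:  EAR #EAR .ARRAY
      0 DO  DUP I CELLS + @  I 4 .R  8 .R  CR  LOOP  DROP ;

where EAR and #EAR are the stand-in names from the sketch above.)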

> Shifting gears here, I noticed that an interesting flow shown
> in the flowchart (at the URL above) is the piping of externally
> spoken words back into the auditory channel, so as to allow
> the brain to hear itself think.  That's thorough.  I wonder
> though, if in a simple AI Mind, without the sensory input
> Deletion, Distortion, & Generalization processes described
> through NLP(tm)... might not this cyclic flow of spoken words
> back into the auditory input channel be a completely identical
> reflection of internal deep structure (i.e., redundant or
> potentially unnecessary)?

The cyclic re-entry of spoken (more precisely, *thought*) words
back into the auditory input channel is designed to be the very
moment at which you first *hear* the verbal thoughts forming in
your mind.  These verbal thoughts never even leave the auditory
channel; they arise *within* auditory memory and proceed to flood
the channel, flowing from their manifold points of engram origin
down to the most recent point (the advancing front of
consciousness) where new engrams of auditory memory are being
deposited and tagged associatively.
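
To make that concrete with a toy sketch (none of these names are
real Mind.forth words), imagine the auditory channel as a simple
character array with a time pointer marking the advancing front;
a generated word is deposited there, character by character,
exactly as if it had been heard:

   \ Sketch only: thought words re-enter auditory memory at time T.
   4096 CONSTANT #AUD
   CREATE AUD   #AUD CHARS ALLOT       \ stand-in auditory channel
   VARIABLE T   0 T !                  \ advancing front of consciousness

   : DEPOSIT  ( char -- )              \ lay down one engram at time T
      AUD T @ CHARS + C!   1 T +! ;

   : REENTER  ( addr len -- )          \ feed a generated word back in
      0 DO  DUP I CHARS + C@ DEPOSIT  LOOP  DROP
      BL DEPOSIT ;                     \ a blank ends the word

   \ Usage:  S" HELLO" REENTER   deposits H E L L O and a blank at T.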

> I didn't explore how you map phonemes
> to deep structure concepts in the Mind.forth code, so it may be
> that the flowchart loop I mention is not redundant.

Mentifex/ATM-
In order to avoid the difficult task of using actual phonemes
(but anyone else is welcome to try), in Mind.forth I am using
English spellings of words and pretending that each string of
characters is a string of phonemes.  Since we still have a coded
symbol, whether it is made up of letters or of phonemes, we get
recognition of those words/symbols and thus achieve the desired
activation of the underlying concepts in semantic memory.
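
A toy sketch of that recognition step (again with made-up names,
not actual Mind.forth code): compare an incoming character string
against a stored spelling and, on a match, raise the activation
of the concept behind it:

   \ Sketch only: a spelled word stands in for a phoneme string;
   \ recognizing it activates the underlying concept.
   CREATE DOG-SPELLING  3 C,  CHAR D C,  CHAR O C,  CHAR G C,
   VARIABLE DOG-CONCEPT   0 DOG-CONCEPT !   \ activation level

   : RECOGNIZE  ( addr len -- )        \ does the input spell DOG?
      DOG-SPELLING COUNT COMPARE 0=
      IF  8 DOG-CONCEPT +!  THEN ;     \ yes: activate the concept

   \ Usage:  S" DOG" RECOGNIZE   leaves DOG-CONCEPT holding 8.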

> Regards,

> Jonathan Altfeld   (jonathan at altfeld.com)  (877) LIVE-NLP Phone
> Mastery  InSight  Institute  of   NLP(tm)  (813) 960-8999 Phone
> http://www.altfeld.com/mastery/index.html  (813) 960-9852 Fax


