Mind.Forth Programming Journal: 17 August 1999

Arthur Ed LeBouthillier apendragn at earthlink.net.nospam
Thu Aug 19 08:20:39 EST 1999


On Thu, 19 Aug 1999 01:32:39 -0400, "John Passaniti"
<jpass at rochester.rr.com> wrote:

>4.  Someone with the ability to decode what it is you are trying
>to say manages to do what none of your text and ASCII diagrams have
>done so far-- clearly describe your AI methodology.  They'll use
>common terms instead of neologisms.  They'll document and describe
>it, giving the framework and theory of operation.  And people will
>credit *them* for the work, because they made it understandable.
>Bravo for them.

As best I can tell, he's trying to build some sort of agent
architecture. Of course, Mr. Murray is not the originator of
the idea of agents, nor is he the only one developing them.

I have not examined the full extent of Mr. Murray's work, but
I would agree with you that he is obfuscating the issue by using
arcane terminology to describe what he is doing. Additionally,
he has made few, if any, claims about its capabilities other
than that it is "AI" and can represent that "horses like hay."

It appears that Mr. Murray's agent has some kind of knowledge
base allowing it to assert statements like "cats like milk."
That's good. World modeling capabilities are vital for an agent.
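
For what it's worth, here is the sort of thing I mean -- a purely
hypothetical sketch in Python, not Mr. Murray's actual representation,
of a minimal triple store that could hold an assertion like "cats
like milk":

    # Minimal subject-predicate-object knowledge base (illustrative only).
    class KnowledgeBase:
        def __init__(self):
            self.facts = set()   # set of (subject, predicate, object) triples

        def assert_fact(self, subject, predicate, obj):
            self.facts.add((subject, predicate, obj))

        def holds(self, subject, predicate, obj):
            return (subject, predicate, obj) in self.facts

    kb = KnowledgeBase()
    kb.assert_fact("cats", "like", "milk")
    kb.assert_fact("horses", "like", "hay")
    print(kb.holds("cats", "like", "milk"))   # True

Something like this is trivially easy to build; the interesting
questions are what such a representation can and cannot express.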

Although he has used the word "ontology" and made references to Cyc,
he has not described the nature of his ontology or its completeness
(or limitations). He hasn't described the basis for his knowledge
representation or any limitations inherent in it, nor such aspects
as temporal representation or other important elements of a
general-purpose ontology.
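
Temporal representation, to take one example, usually means that each
assertion carries some indication of when it holds. A hypothetical
sketch, extending the triples above:

    # Illustrative only: a fact qualified by the interval over which it holds.
    fact = {
        "subject": "cats",
        "predicate": "like",
        "object": "milk",
        "valid_from": 1999,   # year the assertion began to hold
        "valid_to": None,     # None = still holds
    }

    def holds_at(fact, year):
        # True if the assertion is valid in the given year.
        started = fact["valid_from"] <= year
        not_ended = fact["valid_to"] is None or year <= fact["valid_to"]
        return started and not_ended

    print(holds_at(fact, 2000))   # True

Without some such qualifier, an agent cannot distinguish what was
true from what is true.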

He has yet to demonstrate any kind of inferencing capability, or to
describe comprehensively the components of such a system in the
manner in which these things are normally discussed (and
understood). He hasn't described the capabilities or limitations of
the reasoning system. Are there classes of knowledge that can't
be reached with the inferencing system?
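
To make the question concrete, here is the kind of thing I mean by an
inferencing capability -- a toy forward chainer over the same sort of
triples (again, an illustrative sketch, not a description of his
system):

    # Toy forward chaining (illustrative only).
    # One hard-coded rule: if X is_a Y and Y likes Z, then X likes Z.
    def forward_chain(facts):
        facts = set(facts)
        while True:
            new = set()
            for (x, p1, y) in facts:
                if p1 != "is_a":
                    continue
                for (y2, p2, z) in facts:
                    if y2 == y and p2 == "likes":
                        inferred = (x, "likes", z)
                        if inferred not in facts:
                            new.add(inferred)
            if not new:
                return facts
            facts |= new

    facts = {("felix", "is_a", "cat"), ("cat", "likes", "milk")}
    print(forward_chain(facts))
    # the result now also contains ("felix", "likes", "milk")

Even a toy like this has obvious limits (it derives only ground facts
reachable through its one rule); a serious description would spell
out the corresponding limits of his reasoner.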

One component that appears to be missing, based on the little of his
work that I have looked at, is some kind of teleological (goal)
representation. How does the system represent goals? How does a
programmer provide the goals? How does the system reason about
goals? Are there limitations on what can be inferred? How efficient
is the inference engine?
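
By a goal representation I mean, at minimum, an explicit structure
that the agent can inspect and reason over. A hypothetical sketch,
not Mr. Murray's design:

    # Illustrative goal structure (hypothetical).
    class Goal:
        def __init__(self, description, condition, priority=0):
            self.description = description
            self.condition = condition    # predicate over the fact set
            self.priority = priority

        def satisfied(self, facts):
            return self.condition(facts)

    # A goal that is satisfied once ("cat", "has", "milk") is a known fact.
    feed_cat = Goal("feed the cat",
                    lambda facts: ("cat", "has", "milk") in facts,
                    priority=1)
    print(feed_cat.satisfied({("cat", "has", "milk")}))   # True

If goals exist only implicitly in the code, the questions above have
no clear answers.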

I wish him well with his work, but I think that if he used some of
the standard techniques and terminology to describe it, he would
probably be better respected and his work might be taken more
seriously.

One suggestion might be for him to write a paper, using standard
terminology, that compares and contrasts the Mentifex agent
architecture with the structure and capabilities of other agent
architectures. That could be an interesting article; I know I'd
like to read it.

Sincerely,
Arthur Ed LeBouthillier





