Robotic Prosperity Engine

Arthur Murray mentifex at scn.org
Tue Apr 21 19:07:36 EST 1998


mentifex at scn.org (Arthur T. Murray) here in response to Andrew Lias:

Sorry, Andrew, I have to rush to a meeting with a psychiatrist, so
I can only comment briefly below.  He has no understanding at all
of my AI project (Project Mentifex); he says that I am a
"dilettante" and that I ought to "orchestrate" my life instead of
doing AI as an independent scholar.  Ah, well, please see below....

> You state

>>  This idea is obvious and unoriginal, but within Project Mentifex
>>  it is extremely important to declare from the outset now in 1998    
>>  that we intend to introduce a Cybernetic Economy both with the
>>  avowed intention of benefiting "the least of our brethren" and 
>>  with the acknowledged risk that some out-of-control corporation
>>  (Microsoft?) or the robots themselves may spurn the disadvantaged.

> I am glad that you are acknowledging the risk, but that still leaves
> unanswered the question of how you intend to address it.  I went to the
                                               ^^^^^^^^^^
> web sites you referenced, backtracking to the main Mentifex page, but I
> couldn't find any links that discussed this (there were, of course, a
> whole *lot* of links, as you know).  Could you please provide a specific
> reference?

The problem is NOT ADDRESSABLE, because Vernor Vinge in his Whole
Earth Review article on "The Singularity" (q.v.) has pointed out
VERY convincingly that the things we are discussing here are
UNAVOIDABLE -- we cannot stop them.  Therefore, as "Project Mentifex"
I wish to go on record as saying that in 1998 I was hoping for a
genteel, charitable, humanistic transition to the Cybernetic Economy.


> Too often I see new and potential technologies treated as panaceas
> without any due regard being given to the Law of Unintended Consequences.
                    a beautiful reference!  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> By the nature of the law, it's impossible to foresee every possible
> contingency, but I would like to know whether, at the least, all of the
> visible and pertinent concerns are being addressed.  Since we cannot
> take the presumed benevolence of these hypothetical hyper-sentients as
> a given, and since such a race of beings would be able to think circles
> around us, I would be very hesitant to put our future into their hands
> without considerable thought being given to preventative measures.

Bye for now.  Gotta go meet the shrink who does not believe that
a classics scholar like me (BA 1968) can create artificial minds.
(And he's my own father!)

> -- 
> Andrew Lias | anrwlias at wco.com | andrew.lias at lamrc.com | Siste viator
