Technological Singularity

Bill Moyer billm at cygnus.com
Thu Jun 4 19:32:01 EST 1998


In article <goCd1.854$On1.3424263 at ptah.visi.com> seebs at plethora.net (Peter Seebach) writes:
>In article <357011e7.0 at news.victoria.tc.ca>,
>Arthur T. Murray <uj797 at victoria.tc.ca> wrote:
>>Engineers of Mitsubishi, of Daewoo, and of the Weng Zhen economy,
>>please make a humanistic and Chardinesque way to the Singularity.
>
>Can anyone figure out what this means?

  Arthur is referring to what Vernor Vinge dubbed the "Technological
Singularity", which can be generally described as the point in time
at which a technological innovation either renders mankind incapable
of controlling its environment, or at least exerts an irresistible
force on humanity.

  Vinge wrote an article about the TS; my copy is at:
  http://www.shinma.org/ttk/vinge.html

  His article runs pretty heavy on the rhetoric.  I've
been meaning to write up my own take on the TS ever since I attended
one of his TS seminars a couple of years ago, but I haven't had the 
time.

  Some examples of scenarios which would constitute the Singularity:

  * The grey goo scenario -- microtechnology run amok, microscopic
    von Neumann machines pulling everything apart to make more von
    Neumann machines until the entire Earth's surface is converted
    (the back-of-the-envelope arithmetic appears after this list),

  * The outbreak scenario -- genetically engineered bacteria or
    viruses getting loose into the environment and killing everyone,

  * The Homo Superior scenario -- creating genetically engineered
    human beings with superior abilities against whom "normal"
    humankind cannot compete,

  * The Borg scenario -- mating human beings with cybernetic systems,
    creating an elite class of humanity against whom "normal"
    humankind cannot compete (note -- it can be argued that this has
    already happened to an extent; a college student without a home
    computer is at a disadvantage when competing against a college
    student with a home computer, and an engineer with access to a
    well-equipped workstation can max out any intelligence test),

  * The Frankenstein scenario -- creating a superintelligent AI 
    entity whose cognitive capabilities are as beyond ours as ours
    are beyond an animal's; this scenario usually assumes that the
    AI is capable of self-direction.
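
  To give a feel for why the grey goo scenario gets taken seriously,
here is the back-of-the-envelope arithmetic promised above, as a
small Python sketch.  The replicator mass and doubling time are
numbers I invented purely for illustration, not engineering
estimates:

  # Exponential self-replication; illustrative figures only.
  import math

  replicator_mass_kg = 1e-15   # invented mass of one tiny assembler
  doubling_hours = 1.0         # invented replication doubling time
  earth_mass_kg = 5.97e24      # mass of the Earth

  # Starting from one replicator, how many doublings until the swarm
  # outweighs the planet?
  doublings = math.log2(earth_mass_kg / replicator_mass_kg)
  days = doublings * doubling_hours / 24
  print(f"{doublings:.0f} doublings, about {days:.1f} days")
  # -> about 132 doublings, i.e. under a week at these made-up rates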

  In other words, the TS is presumably catastrophic, either destroying
humanity or placing it under the domination of a higher power (maybe
benign, maybe not, but either way taking humanity off the top of the
food chain for the first time in millennia).

  I gather that Arthur is referring to the Frankenstein scenario, 
and that he is asking that whoever makes the vital innovation would
do so in a way that assures that the superintelligent AI is benign
towards humanity.  I wish he'd done it in a more mature, less 
"cyberpunkish" way, though.  It deserves more serious thought than
most people give it.  Even if one doesn't believe in cataclysms on
the scale Vinge does, it is irrefutable that the advancement of 
technology will have an enormous impact on our society.  Most of us
are familiar with Moore's "Law" (more of a rule of thumb than a law) 
as it applies to computer technology, but less familiar with the 
similarly huge strides made in recent years in mechanical and
biological technologies.  Even by very conservative estimates, it is
likely that we will have the technology necessary for implementing 
any one of the scenarios I outlined here in no more than 20 years, 
possibly less.  (My own pet figure is 12 years, but that's more of 
a back-of-the-envelope number based on blind projection of Moore's
Law than anything supported with hard evidence.)
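
  For the curious, here is the shape of such a blind projection as a
quick Python sketch.  Both the late-1990s baseline transistor count
and the "brain-scale" target are assumptions of mine, and nudging
either by a couple of orders of magnitude moves the answer by a
decade or more -- which is exactly why such figures are
back-of-the-envelope at best:

  # Blind Moore's Law projection; illustrative figures only.
  import math

  transistors_now = 1e7     # assumed late-1990s CPU transistor count
  target_elements = 1e14    # assumed "brain-scale" element count
  doubling_years = 1.5      # Moore's rule of thumb: ~18 months

  doublings = math.log2(target_elements / transistors_now)
  print(f"{doublings:.1f} doublings -> about "
        f"{doublings * doubling_years:.0f} years")
  # -> ~23 doublings, roughly 35 years at this pace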

  The future might look back at our ideas of the TS and laugh,
as we laugh at Malthus, who observed that growth in farm yields
was linear while population growth was exponential, and predicted
massive famine in the moderate future.  Maybe not.  I hope the
future does laugh, though, and the TS never manifests.  Maybe if
Bill Gates takes over the world and all computers run Microsoft
software, no computer in existence will be stable enough to support
an AI for very long.  :-)
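
  Incidentally, the linear-versus-exponential arithmetic behind that
famous prediction is easy to reproduce.  The starting values and
growth rates below are invented for illustration, not his actual
figures:

  # Linear supply vs. exponential demand; illustrative figures only.
  food = 100.0        # supply, arbitrary units
  population = 100.0  # demand, arbitrary units

  for generation in range(1, 21):
      food += 50.0          # linear: fixed increment per generation
      population *= 1.25    # exponential: fixed ratio per generation
      if population > food:
          print(f"demand overtakes supply in generation {generation}")
          break
  # -> generation 7 at these rates; any exponential eventually wins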

  -- Bill Moyer



