Technological Singularity

George Herbert gherbert at crl3.crl.com
Fri Jun 5 21:06:25 EST 1998


In article <6l95ke$o89 at hawk-hcsc.hcsc.com>, Joe Korty <jak at ccur.com> wrote:
>If human intelligence already squeezes out most of the benefit possible
>from intelligence, then Vinge's Singularity would not be possible.

That conclusion probably doesn't hold up logically; let's break the question
down into two major areas of analysis: the capability to make complex
analyses, conclusions, decisions, etc. (capability); and the speed of
making them.  Even assuming that capability is capped out at what humans
can do for some reason, there's no reason to assume that improvements
in speed are not possible, and speed alone would confer significant
advantages in any case.

>If human intelligence doesn't even begin to tap what is possible, then
>the superintelligences which would follow us would quickly escalate
>their intelligence to the intrinsic limit.

Maybe.  It is hard to predict things you do not understand and
cannot model yet.  We might create computer AIs with an equivalent IQ
of, say, 300 that for some structural reason were never usefully
able to help us improve their own capability.  The key here
is that we really don't understand what might be coming.

>In either case, a plateau in technological progress would eventually be
>reached.  The only question is whether it's us or someone else enjoying
>the results.  In either case there is no Singularity .. just, at best, a
>period of rapid, accelerating growth until the limit is reached.

There's a lot about a possible Singularity that people keep insisting
must be true, or must not be true, or some combination thereof.
This is, fundamentally, completely missing the point.  [This is not to
blame you; I didn't get it that well either until Darrell Long dragged
Vinge to UCSC for a lecture on it and most of us spent a couple more
hours on the subject over beer afterwards...]

Vinge's primary point is that if the rate of advance of computing
power continues, we're going to reach the point where the
complexity and capability of systems exceeds that of any system
we have any practical or even good theoretical models for...
the human brain being the most complex example.  Once capability
exceeds our understanding, what will happen is unknowable
from where we are now.  The Singularity is primarily an
event bounding the end of predictable progress.  We could end
up with grey goo, a human/computer hive mind, AIs that ignore us,
AIs that rule us, AIs that help us, being wiped out, or we could
all disappear into the 11th dimension.  Many things can be
speculated about events past this point, and likely events
during the period leading up to it, but there's going to be
a predictability horizon.  The Singularity is that horizon.


-george william herbert
gherbert at crl.com



