Toward a Science of Consciousness 1998

Jim Balter jqb at sandpiper.com
Sat Apr 25 16:12:06 EST 1998


M.C.Harrison wrote:
> 
> Brian J Flanagan wrote:
> >
> > > >To the extent that that is true, computation is irrelevant to
> > > >cognition.
> > BJ: And you have determined this ... how?
> 
> Um, not an expert on this myself, but...
> 
> Isn't there a mathematical proof by someone that a syntactically derived
> system is capable only of certain things, and is inevitably stumped by a
> particular set of problems when confronted with them?
> 
> The Chinese room, thing.
> 
> You know, when an englishman is replying to chinese questions according
> to a set of books that define how to answer, as each question is put
> under the door, the englishman refers to a book which tells him the
> answer to give back. Can this englishman be said to understand chinese?

Would it matter, in terms of how you remember or write about this
in the future, if I were to point out that "someone" (Gödel)
proved something quite different from your description, and that
this is a quite different matter from Searle's Chinese Room?
I.e., your lack of expertise is in fact manifested by your being
wrong on every significant point?

Note that the claim that the English speaker (Searle is not an
"englishman") does not understand Chinese is not something that
is supposed to *follow* from some proof, but rather is a claim Searle
makes that he expects his readers to agree with, and most do, even if
only for the sake of argument.  What Searle holds *follows* from the
claim that the English speaker does not understand Chinese, is that
a computer cannot understand Chinese merely by virtue of following
instructions from a book.  (The standard "systems reply" to this
is that the Chinese Room, as a system, might "understand" Chinese,
whatever that unformalized notion might mean, even if the English
speaker does not.)  Searle does not make any claim that the computer
or Chinese Room is "capable only of certain things"; his position is
that, even though the computer's *behavior* is identical to a human's,
and thus it is not stumped by anything a human would not be stumped
by, the computer nonetheless lacks understanding.  This is quite
different from Roger Penrose's position -- he does think that
something is lacking, and that Gödel's theorems can be used to show
this, but his argument is anything but a "mathematical proof", and
his work has been strongly and ably refuted by numerous respected
theorists, including Penrose's own mathematical tutor, Solomon
Feferman (although these refutations seem to have little influence
on what people *believe* about what has or has not been shown).

> > > suggestions that a different (non-number crunching) computer
> > > architecture might still be able to be conscious.  That's false.  If any
> > > computer architecture can do the job, all of them can, in principle.
> > BJ: What principle are you invoking?
> 
> I'll take a stab at this. In principle, my elderly computer was
> perfectly capable of producing the same answer to a given question as my
> spanking new PII, except that it takes quite a lot longer to get to the
> answer.

There are at least two problems with this: one is that many problems
are defined in terms of real-world time constraints; a machine that is
too slow to respond before the next clock tick will follow a different
execution path (like printing "interrupt timeout" and stopping).
While this may be of no interest to certain sorts of isolated
mathematical theorists, it is of critical importance to the sciences
of computation, cognition, and everything else of any significance to
the topic at hand.  Another problem is that no real-world machine is
computationally equivalent to a universal computing machine, because
its memory is finite.  That finite resource puts severe limits on the
classes of problems it can solve.
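The timing point can be made concrete with a small sketch (Python;
the function name and the numbers are illustrative, not from the
original discussion): the very same deterministic procedure, run
against a real-world deadline, takes a different execution path on a
slower machine.

```python
import time

def answer_within(deadline_s, work_units, unit_cost_s):
    """Compute a result only if it finishes before the deadline.

    A slower machine (a larger unit_cost_s) follows a different
    execution path: it hits the timeout branch instead of answering.
    """
    start = time.monotonic()
    total = 0
    for i in range(work_units):
        total += i               # the "computation"
        time.sleep(unit_cost_s)  # simulated per-step cost
        if time.monotonic() - start > deadline_s:
            return "interrupt timeout"
    return total

# Fast machine: finishes in time and returns the answer (45).
print(answer_within(deadline_s=1.0, work_units=10, unit_cost_s=0.001))
# Slow machine: same program, same input, different behavior.
print(answer_within(deadline_s=0.05, work_units=10, unit_cost_s=0.02))
```

Same program, same input, but whether the answer ever appears depends
on the machine's speed relative to the deadline -- which is exactly
why "in principle, only slower" does not settle the matter.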

> This is not a completely satisfactory answer, because win95 checks to
> see what cpu I've got and won't run on an 8088,

That's a bit like saying that my Chinese book checks my nationality
and won't let an American read it -- that's not quite how it works.

> but the principle
> appears reasonable.

The principle Bill Modlin invoked was the Church-Turing thesis.
The principle you invoked was that your old Von Neumann machine
was "in principle" as powerful as your new Von Neumann machine,
only slower.  With such a standard of reason, anything is likely to
appear reasonable.
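The Church-Turing point can itself be illustrated with a sketch
(hypothetical Python; the simulator and the `invert` machine are my
own illustration, not anything from the thread): a conventional
"number-crunching" program can simulate an arbitrary Turing machine,
which is why, setting aside time and memory, any architecture capable
of universal computation can emulate any other.

```python
def run_turing_machine(program, tape, state="start", blank="_",
                       max_steps=10_000):
    """Simulate a one-tape Turing machine.

    program maps (state, symbol) -> (new_state, write_symbol, move),
    where move is -1 (left) or +1 (right).  Halts in state "halt".
    """
    tape = dict(enumerate(tape))  # sparse tape, position -> symbol
    pos = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, blank)
        state, tape[pos], move = program[(state, symbol)]
        pos += move
    cells = [tape[i] for i in sorted(tape)]
    return "".join(cells).strip(blank)

# A machine that inverts a binary string (0 <-> 1), halting on blank.
invert = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}
print(run_turing_machine(invert, "10110"))  # prints 01001
```

The interpreter is slower than dedicated hardware for the same
machine, but it computes the same function -- which is the substance
of "if any computer architecture can do the job, all of them can, in
principle", and no more than that.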

> > > And if a number-crunching computer can't do the job, then NO computer,
> > > regardless of architecture can do the job.  Period.
> > BJ: Wonderful finality, that--but perhaps it is only a question of what
> > one means by 'computer'.
> 
> Ones which use syntax and serial processing seem to be poor candidates
> for an AI computer. Imitating intelligence, maybe.
> 
> And if a rock can be sentient, it would be better to let the computer
> decide for itself what to think, rather than putting in a program which
> permits no thoughts except those written in stone and coerced using
> error correction. Else, what you see is what the programmer told it to
> say, not what it is saying. This changes but little if the program can
> evolve, it's still a program rather than AI.

It might be well to factor your own lack of expertise into your
faith in your analysis.

--
<J Q B>
