Toward a Science of Consciousness 1998

M.C.Harrison nospam at
Fri Apr 24 18:57:39 EST 1998

Brian J Flanagan wrote:
> > >To the extent that that is true, computation is irrelevant to
> > >cognition.
> BJ: And you have determined this ... how?

Um, not an expert on this myself, but...

Isn't there a mathematical proof (Godel's incompleteness theorem, perhaps?)
that a purely syntactic formal system is capable only of certain things,
and is inevitably stumped by a particular set of problems when confronted
with them?

And there's the Chinese room thing (Searle's thought experiment).

You know, the one where an Englishman replies to Chinese questions
according to a set of books that define how to answer: as each question
is put under the door, the Englishman refers to a book which tells him
the answer to give back. Can this Englishman be said to understand
Chinese?

> > suggestions that a different (non-number crunching) computer
> > architecture might still be able to be conscious.  That's false.  If any
> > computer architecture can do the job, all of them can, in principle.
> BJ: What principle are you invoking?

I'll take a stab at this. In principle, my elderly computer is
perfectly capable of producing the same answer to a given question as my
spanking-new PII; it just takes quite a lot longer to get there.

This is not a completely satisfactory answer, because Win95 checks
which CPU I've got and won't run on an 8088, but the principle
appears reasonable.
> > And if a number-crunching computer can't do the job, then NO computer,
> > regardless of architecture can do the job.  Period.
> BJ: Wonderful finality, that--but perhaps it is only a question of what
> one means by 'computer'.

Computers which rely on syntax and serial processing seem to be poor
candidates for AI. Imitating intelligence, maybe.

And if a rock can be sentient, it would be better to let the computer
decide for itself what to think, rather than installing a program which
permits no thoughts except those written in stone and enforced by
error correction. Otherwise, what you see is what the programmer told
it to say, not what it is saying. This changes but little if the
program can evolve; it's still a program rather than AI.

More information about the Neur-sci mailing list