Toward a Science of Consciousness 1998

Jim Balter jqb at
Mon Apr 27 19:41:20 EST 1998

Wim Van Dijck wrote:
> On Sat, 25 Apr 1998 14:22:21 -0700, Jim Balter <jqb at>
> wrote:
> >Wim Van Dijck wrote:
> >> I once heard a quite strong argument during some introductory AI
> >> classes: computer hardware (neural nets not included) works in
> >> algorithms. Conscious minds, such as ours, use procedures (or whatever
> >> you want to call it) that are not algorithm-based. Computers CAN only
> >> use algorithms (at least nowadays), so based on this principle, a
> >> computer will never gain consciousness, no matter how big or fast it
> >> is.
> >
> >If this is a strong argument, I hate to think what a weak one would be.
> >This "argument" fails to support the critical claim that conscious
> >minds are not algorithm-based and fails to show that algorithm-based
> >methods cannot achieve something achieved in some other way.
> >It even contains its own refutation: neural nets are commonly simulated.
> >
> I am aware of the fact that I didn't give background for this
> argument. I am indeed to blame for not looking this up.

So you believe that there is a strong argument but you can't give it?
Sounds like religion to me.

> I was mainly interested in how this argument would be responded to,
> since I am not very sure about all this myself yet.

Well, I pointed out that no argument was given that minds are
not algorithm-based, and that even if it were stipulated that they
aren't, the conclusion that digital computers will never gain
consciousness doesn't follow.  What more needs to be said, other
than digging out some references on rhetoric that might better prepare
people to distinguish strong arguments from weak ones?

> I have already read some interesting responses, though.
> BTW, I left out neural nets on purpose. I don't feel informed enough
> about them to start discussing that too, although it is my wish
> to be so in the future.

Since NN's are not considered to be algorithm-based, but are simulatable
algorithmically, they are a handy refutation for the argument (but not
necessarily the conclusion, which could well be true despite the poor
argument given to support it).
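To make the simulatability point concrete, here is a minimal sketch
(mine, not from the thread): a single artificial neuron computed by
ordinary sequential arithmetic.  The weights are made up for
illustration; they tune the neuron to approximate logical AND.  The
point is only that the "non-algorithmic" behavior of a neural net
reduces to a perfectly ordinary algorithm.

```python
import math

def sigmoid(x):
    # Standard logistic squashing function.
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    # Weighted sum followed by a nonlinearity: one simulated "unit",
    # computed by a plain sequence of arithmetic steps.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

# Illustrative weights chosen so the unit behaves like logical AND:
# output near 0 unless both inputs are 1.
out_00 = neuron([0, 0], [10.0, 10.0], -15.0)
out_11 = neuron([1, 1], [10.0, 10.0], -15.0)
```

Running both inputs through the same deterministic procedure is all the
"net" amounts to here, which is exactly why simulability undercuts the
algorithms-vs-nets distinction as stated.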

> >OTOH, "computer hardware" includes things like photosensors
> >that perform important functions that their simulations don't.
> >
> This is not very clear to me. Could you be a bit more specific?

An implementation of a moon rover algorithm cannot rove the moon with
only simulated inputs; it needs real transducers to do the real job.
(Some people argue that, analogously, no implementation of an algorithm
can be conscious without having real inputs, generally by studiously
avoiding the quality that David Chalmers refers to as "organizational
invariance" -- simulations of chess players play chess (organizationally
invariant), whereas simulations of paper shredders don't shred paper
(not organizationally invariant).)

<J Q B>

More information about the Neur-sci mailing list