Toward a Science of Consciousness 1998
jqb at sandpiper.com
Mon Apr 27 19:56:03 EST 1998
modlin at concentric.net wrote:
> In <6htajh$rq2 at ux.cs.niu.edu>, rickert at cs.niu.edu (Neil Rickert) writes:
> >You seem to have entirely missed a point I made. Namely, there might
> >be a completely different way of describing the internal operations
> >of a computer, such that under this different internal description the
> >computer is executing a completely different algorithm. If it is
> >the abstract computation that matters, then I am suggesting that the
> >abstract computation being performed is not determined by what happens
> >in the machine, in the sense that there are completely different
> >ways of assigning algorithmic descriptions to what happens physically.
> Let's focus on this point very closely.
> Consider a very simple computer. It's called an "OR gate". It has
> two external inputs, and one external output.
> The function it computes is "logical OR". If either of the inputs is
> active, the output is active. If both inputs are inactive, the output
> is inactive.
> I say that the function it computes is determined by whatever is inside
> the black box of the computer, and will remain the same no matter how
> you choose to describe it. The function will remain the same even if
> you can find no use for it, or if you think it is computing NOT(NOT A
> AND NOT B). It does what it does, regardless.
> If you agree, then I'll add more inputs, and complicate the function a
> little. I'll claim that nothing has changed, that the function being
> computed is still dependent on what is inside the box, not on your
> description. I'll claim that this holds regardless of how complex we
> make that internally-computed function.
> What do you say?
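The labeling point at issue can be made concrete with a short sketch (my own, not part of the original exchange): the very same OR gate, described with an active-low labeling of its signals, computes AND; and the formula NOT(NOT A AND NOT B) mentioned above is extensionally identical to OR. Same device, different computational descriptions.

```python
# Illustrative sketch: one physical gate, multiple computational descriptions.

def or_gate(a, b):
    """Physical behavior: output is active iff either input is active."""
    return a or b

# Description 1 (active-high: "active" = True): the gate computes OR.
assert all(or_gate(a, b) == (a or b)
           for a in (False, True) for b in (False, True))

# Description 2 (active-low: "active" = False): relabel every signal and
# the same device computes AND, by De Morgan's law.
def relabeled(a, b):
    return not or_gate(not a, not b)

assert all(relabeled(a, b) == (a and b)
           for a in (False, True) for b in (False, True))

# And the equivalent formula NOT(NOT A AND NOT B) from the post:
assert all(or_gate(a, b) == (not (not a and not b))
           for a in (False, True) for b in (False, True))
```

None of this changes what the box physically does; it shows only that which Boolean function we say it computes depends on how we map states to symbols.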
I quote from David Chalmers, who years ago considered and analyzed the
issues that you guys keep fumbling around with:
Can a given system implement more than one computation? Yes. Any system implementing
some complex computation will simultaneously be implementing many simpler computations - not just
1-state and 2-state FSAs, but computations of some complexity. This is no flaw in the current account; it
is precisely what we should expect. The system on my desk is currently implementing all kinds of
computations, from EMACS to a clock program, and various sub-computations of these. In general,
there is no canonical mapping from a physical object to "the" computation it is performing. We might
say that within every physical system, there are numerous computational systems. To this very limited
extent, the notion of implementation is "interest-relative". Once again, however, there is no threat of
vacuity. The question of whether a given system implements a given computation is still entirely
objective. What counts is that a given system does not implement every computation, or to put the
point differently, that most given computations are only implemented by a very limited class of physical
systems. This is what is required for a substantial foundation for AI and cognitive science, and it is what
the account I have given provides.
<J Q B>