Rickert on embedded computation (was re: science of consciousness.)

Neil Rickert rickert at cs.niu.edu
Tue Apr 28 15:52:50 EST 1998


andersw+ at pitt.edu (Anders N Weinstein) writes:
>In article <6i3maq$48o at ux.cs.niu.edu>, Neil Rickert <rickert at cs.niu.edu> wrote:

>>Now what if we took a completely different subset of the logic
>>inputs, and managed to design an algorithm which is properly
>>described by the particular logic inputs we have selected.  Could the
>>computer be equally said to be running this alternative algorithm?
>>If so, then there is no fact of the matter as to which program is
>>being executed.  However, if "computation" has to do with a series of

>OK. This looks like basically the "indeterminacy of computational 
>interpretation" challenge that has been floated in various forms by 
>Putnam and Searle. 

>>                 However, if "computation" has to do with a series of
>>causal operations, rather than a symbol manipulation game, the same
>>problem would not arise.

>But it seems to me the problem arises in any case, insofar as you
>are aiming to functionally explain something as a *computer* at all. 

Surely, that the device is called a *computer* is no more than a
matter of social convention.
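
To make the quoted scenario concrete, here is a small sketch of my own
(the names and the positive/negative-logic reading are my illustration,
not anything from the earlier posts).  The same physical gate behavior
reads as AND under one instantiation mapping and as OR under another,
so which "algorithm" it runs is a matter of how we choose to describe
it:

HIGH, LOW = "high", "low"

def gate(a, b):
    # Physical behavior: output is HIGH only when both inputs are HIGH.
    return HIGH if (a == HIGH and b == HIGH) else LOW

positive = {HIGH: 1, LOW: 0}   # "positive logic" instantiation mapping
negative = {HIGH: 0, LOW: 1}   # "negative logic" instantiation mapping

for a in (HIGH, LOW):
    for b in (HIGH, LOW):
        out = gate(a, b)
        # Under 'positive' the table reads as AND; under 'negative', as OR.
        print(positive[a], positive[b], "->", positive[out],
              " | ", negative[a], negative[b], "->", negative[out])

Neither description picks out anything intrinsic to the hardware; the
choice between them is as conventional as calling the device a
computer in the first place.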

>Of course one can describe any physical system at a physical level without
>bringing in any computational interpretation (i.e. an "instantiation
>mapping" that maps physical states and events onto those of an abstract
>symbol manipulation game of some sort). But if you want to explain it
>as a *computer*, you have to map it onto some states with syntax and
>semantics. 

No, I disagree.  In fact, this was the sort of thing that my
disagreement with Bill Modlin was about.  We can say that something is
a computation without having to map it onto the action of a formal
Turing machine.  From my perspective, Turing's theory is an idealized
mathematical model of computation.  It places no constraint on actual
computations; they are not required to conform to the idealized model.
We generally don't expect our idealized models to correspond exactly
to reality.  Rather, the expectation is that the model fits well
enough to be useful for theoretical analysis.
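
As a toy illustration of that last point (my own example, with invented
names): an idealized counter never overflows, while an 8-bit register
wraps around, so the idealization fits well enough for small counts
without exactly matching the device.

def ideal_increment(n):
    return n + 1            # the mathematical model: unbounded

def register_increment(n):
    return (n + 1) & 0xFF   # an 8-bit register: wraps at 256

n_model = n_reg = 0
for _ in range(300):
    n_model = ideal_increment(n_model)
    n_reg = register_increment(n_reg)

print(n_model, n_reg)       # 300 vs 44: the model and the device diverge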

>Remember, described solely as a physical object, a system can never
>*mal*function, it just does what it does with no right or wrong about
>it. In explaining it as a computer you explain it by reference to certain
>*norms*, that determine a difference between correct and improper
>operation, and between hardware and software failures.

Agreed.  But we do not need an abstract model of an automobile to be
able to say that a particular automobile has malfunctioned.
Similarly, we do not require a Turing machine model as a standard for
determining whether a computer has malfunctioned.  In both cases we
would be more concerned with whether the system behavior is in
accordance with the manufacturer's specifications, whether those
specifications are explicit or implied.
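
In code terms (a hypothetical example of my own), the verdict
"malfunction" comes from comparing observed behavior with a
specification, not with a formal machine model:

def spec_adder(a, b):
    return a + b                      # what the specification promises

def device_adder(a, b):
    return a + b if b != 7 else a     # imagine a faulty unit

def malfunctions(device, spec, cases):
    return [c for c in cases if device(*c) != spec(*c)]

print(malfunctions(device_adder, spec_adder, [(1, 2), (3, 7), (5, 5)]))
# -> [(3, 7)]: the failure is a normative verdict relative to the spec.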

>In the case you mention, it might be a kind of social fact that the
>computer was designed to perform one function and not another.
>Of course it is not an intrinsic physical fact about the system considered
>as a physical object, but not all facts are like that. It is a fact that
>I paid the April rent, but not an intrinsic fact about me 
>as a physical object.

I would say that it is not an intrinsic fact about anything that it
is a computation.



