Rickert on embedded computation (was re: science of consciousness.)
Anders N Weinstein
andersw+ at pitt.edu
Tue Apr 28 18:27:56 EST 1998
In article <6i5fj2$a7c at ux.cs.niu.edu>, Neil Rickert <rickert at cs.niu.edu> wrote:
>andersw+ at pitt.edu (Anders N Weinstein) writes:
>>But it seems to me the problem arises in any case, insofar as you
>>are aiming to functionally explain something as a *computer* at all.
>Surely, that the device is called a *computer* is no more than a
>matter of social convention.
But I didn't actually say anything about something being conventionally
*called* a computer. I was talking about a stance you can take towards it
in explaining its behavior, roughly a species of Dennett's "design stance".
In general, I would suggest there are two sorts of computers, artifactual
computers, whose computational description does depend on reference to social
and historical factors, and -- possibly -- natural computers, whose
computational description depends on natural teleology, independent
of any human conventions. (Of course if you do not believe in
natural teleology, e.g. you do not believe that the heart is a natural
pump that *mal*functions when it does not pump well, then you would
not believe in natural computation).
>No, I disagree. In fact this was the sort of thing that the
>disagreement between Bill Modlin and me was about. We can say that
>something is a computation without having to map it into the action
>of a formal Turing machine.
If you are including "analog" computation, that may be true. But if
you mean digital or symbolic computation, I don't see how. Of course
it need not be a *Turing machine* as opposed, say, to a Prolog machine
or production system. But it has to be something to which the
theory of computation applies, doesn't it?
> From my perspective, the Turing theory
>is that of an idealized mathematical model of computation.
Ah, so you are treating it as a kind of physical theory, an idealized
model of the actual (and counterfactual) state transitions in a
physical system. As if the theory of computation were a special branch of
physics. On this view I ought to be able to look at any physical system at
all, the tides or the goings-on in a stomach, and say: that's a computation, an
AND-gate, perhaps, and the rightness of what I say is to be judged
solely by its predictive power.
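The physical-theory reading can be sketched concretely. Under some chosen
mapping from physical magnitudes to bits (the mapping, thresholds, and
measurements below are all illustrative assumptions, not anyone's actual
proposal), a system counts as an AND-gate just in case its observed
transitions fit the AND truth table:

```python
# A minimal sketch of the "physical theory" view of computation:
# pick a mapping from physical magnitudes to bits, then test whether
# the system's observed transitions realize the AND truth table.
# All names, thresholds, and measurements here are illustrative.

def to_bit(level, threshold=2.5):
    """Interpret a physical magnitude (e.g. a voltage) as a bit."""
    return 1 if level >= threshold else 0

def realizes_and_gate(observations, threshold=2.5):
    """observations: list of (input_a, input_b, output) magnitudes.

    True iff every observed transition matches logical AND under
    the chosen physical-to-bit mapping.
    """
    return all(
        to_bit(out, threshold) == (to_bit(a, threshold) & to_bit(b, threshold))
        for a, b, out in observations
    )

# Hypothetical measurements from some physical system:
obs = [(0.1, 0.2, 0.0), (4.8, 0.3, 0.1), (0.2, 5.0, 0.3), (4.9, 5.1, 4.7)]
print(realizes_and_gate(obs))  # the mapping's "rightness" is just predictive fit
```

Note that on this reading a different threshold yields a different (perhaps
non-AND) description of the very same system, and nothing in the physics
privileges one mapping over another; that is the sense in which predictive
fit alone carries no normative force.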
That seems like an OK concept, but I don't think it matches the way
"computer" is actually used, either in everyday talk or in cognitive psychology.
I think the actual use is much more of a *normalizing* explanation,
closer in spirit to functional analysis of bodily organs, say.
The physical-theory way of thinking loses the normative force packed into
the concept of computation.
Of course, it may be that one wants to say that there are no norms
literally binding on natural objects, and so no natural computers, only
a metaphorical analogy between these systems and persons. Then the only
process that could *really* be a computation would be the sort of
things that persons do when they total a check or do long division, in
accordance with a rule-governed technique. In those cases, the norms
are binding on the person, in a way that has no parallel in natural
scientific description. For when we say someone is totalling a check,
we are not simply predicting what will happen next in the world as we
do with a physical system. We mainly determine what they *ought* to do
next if they are to do the addition *correctly*. A failure to see this
outcome does not automatically indicate a flaw in our "model", but
might betoken a flaw in the person doing the addition.
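The contrast can be put crudely in code (a toy illustration, with invented
names and figures, not a claim about how such norms are formalized): a
predictive model scores itself against what the person does, whereas the
arithmetical norm scores the person against what the rule requires.

```python
# Toy contrast between the predictive and the normative stance,
# for someone totalling a check. All values are illustrative.

def correct_total(amounts):
    """The arithmetical norm: what the person *ought* to arrive at."""
    return sum(amounts)

check_items = [1250, 375, 60]   # line items, in cents
persons_answer = 1585           # what the person actually wrote down

# Predictive stance: a mismatch would count against our model of the person.
# Normative stance: here the mismatch betokens a flaw in the *person's* addition.
error = persons_answer - correct_total(check_items)
print(error)  # 1250 + 375 + 60 = 1685, so the person is off by -100
```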
> It is not
>a constraint on any actual computation, that it is required to
>conform to the idealized model.
I am not sure what you mean by "not a constraint". Of course there are
no constraints on any object considered as a physical system. For
qua physical object there is no room for a notion of malfunctioning or error.
Yet for something taken as a computer, there is a difference between
proper and mal-functioning, and between physical-level interference
and programming errors.
>>Remember, described solely as a physical object, a system can never
>>*mal*function, it just does what it does with no right or wrong about
>>it. In explaining it as a computer you explain it by reference to certain
>>*norms*, that determine a difference between correct and improper
>>operation, and between hardware and software failures.
>Agreed. But we do not need an abstract model of an automobile to be
>able to say that a particular automobile has malfunctioned.
That does not seem true to me. You do not need an abstract
*computational* model, that is true. But you sure need something
quasi-abstract, a *design* (that which is given by a functional spec).
For everything we count as a malfunction is 100% in accord with the basic
laws of physics, and might be perfectly predictable.
>Similarly, we do not require a Turing machine model as a standard for
>determining whether a computer has malfunctioned. In both cases we
>would be more concerned with whether the system behavior is in
>accordance with the manufacturer's specifications, whether those
>specifications are explicit or implied.
I don't get the distinction. What is it that a specification conveys to you,
if not (what you consider) the abstraction the system is supposed to
instantiate? In the one case, it is a computation, in the other, not.
But can't we say that
abstract computation : physical computer :: design : automobile?
>I would say that it it is not an intrinsic fact about anything, that
>it is a computation.
Fine with me as long as it may be an objective fact nonetheless. It would
just be a relationally determined fact, like the objective facts about
my social and economic statuses.
More information about the Neur-sci mailing list