Toward a Science of Consciousness 1998
modlin at concentric.net
Fri Apr 24 09:32:21 EST 1998
In <Pine.SV18.104.22.1680424064603.647B-100000 at sleepy.giant.net>, Brian J Flanagan <bflanagn at sleepy.giant.net> writes:
>On 23 Apr 1998 modlin at concentric.net wrote:
>> Hardware design is important in a lot of practical ways. A design must
>> provide devices and channels for information to come into the system and
>> out of it... sensors and effectors, in biological or robotic terms.
>> But hardware design has absolutely nothing to do with the kinds of
>> things that can be computed. Architecture affects practical issues of
>> performance, but makes absolutely no difference to what is possible if
>> we provide enough capacity and don't care how long it takes.
>BJ: No, this is only a silly dogma spawned by AI types.
It's not an "AI" idea. It's one of those things that is basic to the
notion of computing. A guy called Church formalized it quite a while
ago (it's now known as the Church-Turing thesis), but it is pretty
obvious for discrete digital computing (once you've realized that no
discrete architecture can transcend a Turing Machine), and more subtly
obvious for all machines of any type.
>> The difference between any computing machine and any other computing
>> machine is only a matter of programming.
>BJ: More of same. The architecture has crucially to do with what kinds of
>sensory input can be operated upon.
I don't know how you mean this.
It could be the same point I made in the paragraph you quoted and agreed
with above... that hardware is needed to provide sensory input to (and
effector output from) a computer system. It's true in that sense, that
a computer can't operate on input that it doesn't have. But then
why post it as a disagreement?
It could be an assertion about practical performance issues. We
probably do in practice need some highly parallel architecture to keep
up with the sheer volume of data involved in a realistic modelling of a
world reported through a full sensory interface, and architectures which
implement primitives directly related to the functions we need are
faster than those which have to build up to them from a different set.
But if you really mean to say that the architecture used for the
computing itself makes a difference to what can be computed, given the
necessary input and ignoring performance... then I respectfully suggest
you reconsider.
"Computing" is generating functions as combinations of other functions
given as primitives. Any computing architecture capable of a few very
basic operations can compute the primitives of any other, and thus can
go on to compute any function computable by the other.
At least according to Church and me. <g>
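A small worked example of that bootstrapping step (mine, not from the post): take NAND as the only primitive an architecture provides. Every other Boolean function is a composition of it, so a NAND-only machine can rebuild the primitives of a richer gate set and go on from there. A sketch:

```python
# Sketch: NAND as the sole primitive; all other gates built by composition.

def nand(a, b):
    return 1 - (a & b)

# Each derived gate uses only nand (directly or through earlier derived gates).
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))

def xor_(a, b):
    # Standard four-NAND construction of exclusive-or.
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

# Check every derived gate against its truth table.
for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
        assert xor_(a, b) == (a ^ b)
print("all gates reproduced from NAND alone")
```

The same move, simulate the other machine's primitives, then reuse its programs, is what makes "the difference is only a matter of programming" more than a slogan.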