In <01bceec8$7cb77900$fd7a61ce at asdf> "ray scanlon" <rscanlon at wsg.net> writes:
>There is a fundamental split in the approaches to designing a machine
>that can think: shall I take the mind or the brain as my model?
>Artificial intelligence looks to the mind; its practitioners speak of
>symbol manipulation, and the predicate calculus is their tool.
The major problem is that too many of them see symbol manipulation as
their world, instead of it merely being one of their tools.
>Connectionists look to the brain; the net of neuromimes is their
>tool.
But the tools the connectionists have are too weak, given the
magnitude and complexity of the task.
>Artificial intelligence is philosophy, connectionism is science.
Artificial intelligence is science, but science presented with hype
that far exceeds its achievements. Connectionism is mainly wishful
thinking, with a touch of science.
> Can I
>combine the interior world of intentionalism with the exterior world of
>extensionalism under one rubric?
No, you can't.
> David Chalmers says I should try, he
>is an optimist.
David Chalmers can't either.
> Colin McGinn says no, I lack the needed machinery. He
>is a pessimist.
McGinn is more realistic.
>I can at least be honest with myself and not confuse mind with brain,
>soul with body. For instance, when the AI worker speaks of
>connectionism as sub-symbolic manipulation, he does just that.
The term sub-symbolic is used more by connectionists than by symbolic
AI workers.
> I would
>say it would be more fruitful to view the neural net as a passive
>filter that does not manipulate or process anything. There is no
>information present, no data packages, no labeled lines, no affective
>computation, no agent.
In that case the neural net does nothing, unless there is something
at the other end of the passive filter. What is that something?
Surely, if you are right, then connectionism is of only minor
importance, and what must be studied is that something at the other
end of the passive filter.
> If I must anthropomorphize, then I will see
>things from the point of view of the neuron: pulses come in and pulses
>go out; the neuron lives in a 'Chinese Room.'
If the neural net is a passive filter, then the neuron should have no
point of view.