In <01bcefc9$33b20160$e67a61ce at asdf> "ray scanlon" <rscanlon at wsg.net> writes:
>Neil Rickert <rickert at cs.niu.edu> wrote in article
><64a8qu$abu at ux.cs.niu.edu>...
>> In <01bceec8$7cb77900$fd7a61ce at asdf> "ray scanlon" <rscanlon at wsg.net> writes:
>> >Connectionists look to the brain, the net of neuromimes is their
>> But the tools the connectionists have are too weak, given the
>> magnitude and complexity of the task.
>Not in my opinion.
Go at it, and prove me wrong.
>Are we looking to explain awareness?
I assume so.
> Is it our goal to construct a
>brain that has awareness inherent in its architecture and thus explain
>the world of intension in extensional words? You said that you don't
>believe this possible so let me just put it aside.
I don't recall saying any such thing. I believe it is possible in
principle, but the practical difficulties will be so large that I
doubt they will be overcome.
> What remains? Are we
>confusing a multitude of neurons with complexity?
When the problem is posed in terms of the detailed structure of the
neural net, the complexity is enormous. From a top down view --
analyze the problem, rather than the biological hardware -- it does
not look as complex. But you are insisting on the bottom up approach.
>What is needed is the ability to generalize, the ability to clear off
>some trees so we may have a clear view of the forest.
But I think you have made it very difficult to generalize by
insisting on a bottom up neural net methodology.
> Those jellyfish
>that have advanced to the stage where they have interneurons between
>the sensory neurons and the motor neurons do not seem to pose much of a
>problem to being simulated with a neural net. Yet the whole story is
It is not at all clear that the jellyfish neural net is comparable to
the vertebrate neural net. The jellyfish has to solve very different
kinds of problems.
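For what it's worth, the architecture being discussed -- sensory neurons
feeding a layer of interneurons, which in turn drive motor neurons -- is
simple enough to sketch directly. The following is a minimal, purely
illustrative toy (not anything either poster proposed); the weights and
thresholds are arbitrary values chosen only to show the signal flow:

```python
# Toy sensory -> interneuron -> motor net, in the spirit of the
# jellyfish example. All numbers are illustrative, not biological.

def layer(inputs, weights, threshold=0.5):
    """Each output neuron fires (1) iff its weighted input sum exceeds threshold."""
    return [1 if sum(w * x for w, x in zip(ws, inputs)) > threshold else 0
            for ws in weights]

# 3 sensory neurons -> 2 interneurons -> 2 motor neurons
sensory = [1, 0, 1]                      # two receptors active
inter_w = [[0.4, 0.2, 0.4],              # interneuron 1 pools all receptors
           [0.0, 0.9, 0.0]]              # interneuron 2 watches one receptor
motor_w = [[1.0, 0.0],                   # motor neuron 1 driven by interneuron 1
           [0.0, 1.0]]                   # motor neuron 2 driven by interneuron 2

interneurons = layer(sensory, inter_w)   # -> [1, 0]
motors = layer(interneurons, motor_w)    # -> [1, 0]
print(motors)
```

The point of the sketch is only that the jellyfish-level wiring is a few
lines of code; whether such a net is comparable to the vertebrate case is
exactly what is in dispute above.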