No subject


Sun Apr 10 21:52:51 EST 2005


is poisonous monsters, interested in nothing but maximization
of the rate of sucking.

Future? What future?

>  Yes, it would
>be "odd" to say that the fellow in that room "understood" the questions
>or answers, but it is dishonest to neglect to say that this is a
>metaphor for a primitive concept of brain function: somewhere in
>there, there is a little man who "uses" all the information being piped
>in to him, i.e., who does the "thinking", who is "conscious".

No, my humble bio-robot manufacturer.
This is not some little donkey coca cola can with a label,
which you call a "little man".

It is BIG. Real big.
And it is multi-dimensional.
And you have not even begun to comprehend what and who you are.

And again, hell will freeze over
before you can prove that there is no essence to your being.

You know what you are trying to prove here?
You are trying to prove that this is a good time to commit suicide.
You see, there is nobody home.
You are just a machine.

Here, for yer royal records.

Bio-robot:
Biological entity,
programmed to behave according to a limited set of instructions,
based on morality [definitions of "good" and "bad"],
created by the priest
to manipulate its fear and guilt
in order to collect a sin tax.

>Obviously, the ROOM (with the little man's help) understands Chinese
>very well.

Sure, and now we are on the level of complete obscenity,
aren't we?

Zo...

What DO we "know" about what?

>Searle, of course, will simply say "if it's not a little man, it can't
>be conscious, and it can't really understand, no matter HOW much it
>seems to".  Well, maybe so, maybe no; but there is no a priori reason
>(other than unexamined mental habit) to believe so, nor any empirical
>reason to do so.

>F. Frank LeFever, Ph.D.
>New York Neuropsychology Group
>

>In <37846544.15051A6D at zedat.fu-berlin.de> Wolfgang Schwarz
><wschwarz at zedat.fu-berlin.de> writes: 

>>salut,

>>"F. Frank LeFever" wrote:

>>> Let me pluck this one thing from the romantic rhapsody: I think I see
>>> something like the scandalously dishonest use of an implicit (never
>>> explicitly defined) concept of "consciousness" (maybe not even a
>>> concept, maybe just a sentimental bias) pervading that new growth
>>> industry, symposia on "brain and consciousness" (Searle, et al.)
>>[...]
>>> He seems to be saying, "you can't analyse or duplicate intelligence,
>>> because you just can't.  I don't care how intelligent it seems to be,
>>> if it's not natural intelligence, it really isn't!"  And then he goes
>>> on about how wonderful it all is.

>>*lol*
>>I somehow agree with you on that point. But certainly it's not that
>>easy. 
>>Searle has some famous arguments on his side, e.g. the Chinese room
>>argument [1]:
>>Briefly, imagine someone who understands no Chinese being confined in
>>a room with a set of rules for systematically transforming strings of
>>symbols to yield other strings of symbols. As it turns out, the input
>>strings are Chinese questions, and the output strings Chinese answers.
>>Nevertheless it would be odd to say that the person in the room
>>understood any of the questions or answers.
>>Therefore, rule-governed syntactic manipulation of symbols is not
>>sufficient for understanding.

>>Anyway, I think the most plausible definitions of intelligence are
>>functional definitions, and there is no a priori reason to doubt that
>>some machine could perform the necessary functions. After all, the
>>Chinese room (including the person in it) is an intelligent system,
>>whatever else is missing.

>>As for consciousness, it seems that one just begs the question if one
>>seeks a functional definition. The difference between a conscious
>>system and a system that lacks consciousness is in the first place not
>>that the former can perform actions which the latter cannot, but that
>>"there is anything it is like to be" the former system, but not the
>>latter [2].
>>This is of course far from a definition.

>>cu,

>>Wolfgang.

>>[1] John Searle: "Minds, Brains, and Programs", Behavioral and Brain 
>>    Sciences 3 (1980): 417-457
>>[2] Thomas Nagel: "What Is It Like to Be a Bat?", Philosophical 
>>    Review 83 (1974): 435-450
>>    Ned Block: "On a Confusion about a Function of Consciousness", 
>>    Behavioral and Brain Sciences 18 (1995): 227-247
>>    David Chalmers: "Facing Up to the Problem of Consciousness", 
>>    Journal of Consciousness Studies 2 (1995): 200-219

>>-- 
>>homepage: http://www.wald.org/wolfgang
>>"Wo kaemen wir hin, wenn jeder sagte: 'wo kaemen wir hin?' und keiner
>>ginge, um zu sehen, wohin wir kaemen, wenn wir gingen?"



