death of the mind.

David Longley David at longley.demon.co.uk
Tue Jul 20 16:36:33 EST 2004


In article <40fd8510_7 at news.athenanews.com>, Sergio Navega 
<snavega at intelliwise.com> writes
>"David Longley" <David at longley.demon.co.uk> escreveu na mensagem
>news:C$r7w1IXxV$AFwwZ at longley.demon.co.uk...
>>
>> You haven't *really* addressed my question. You suggested that this
>> technology should unsettle Glen. But people like Glen and I have spent
>> time "peeking" at the brains of animals by inserting cannulae or
>> electrodes like many other behavioural neuroscientists and recording
>> animals' operant behaviour (neurosurgeons have done it quite directly
>> for decades). What's new about fMRI etc is that it makes some of this
>> easier, especially for those who don't know enough about brain and
>> behaviour to look at it with circumspection. It misleads a lot of
>> people, especially those cognitivists who are undisciplined mentalists
>> or intensionalists in my view.  The technology is just a new technology
>> in a long line of technologies and one of my tacit points was that when
>> others here have said things similar to what you said in your post, it
>> has just betrayed their (all too common) naive conception of radical or
>> evidential behaviourism. When a person makes a verbal or other response
>> when someone is looking at an fMRI image etc, how does that
>> fundamentally differ from when someone records what a field electrode
>> etc picks up in a freely moving animal on some schedule or other
>> 'behavioural assay'? It doesn't, unless one is a closet mentalist
>> looking for "meaning" in what the subject says or does. This is what
>> almost all "cognitivists", including "cognitive neuroscientists", do.
>> People (including themselves) are often just seduced by their
>> proximity to physical "brain talk"!
>
>My comment that follows will perhaps use an often repeated argument,
>but I see no alternative at the moment. An AI researcher may
>eventually be interested in understanding human behavior, but
>what is essential to such a person is to comprehend the *mechanisms*
>behind such behavior, because his/her task is to devise an
>artificial mechanism capable of performing (behaving) similarly.
>Thus, this leads to what I think is an overexposed argument: the
>hardware/software question. For the sake of developing the idea,
>let's consider a group of aliens who steal my computer. They
>will do anything to understand how it works, because they want
>to build their own.
>
>The "hardware aliens" will analyze the boards, chips, cables
>and connectors, trying to look for essential principles of
>operation. The behaviorist aliens will annotate all reactions
>of the computer to given stimuli. The "abstract aliens" will
>notice patterns of behavior of that machine. But instead of
>just annotating these patterns of behavior, the abstract
>aliens will, after some time, start to hypothesize that the
>windows showing in the monitor of the computer seem to be
>individual instances of programs (which are, by themselves,
>abstractions). From this simple hypothesis (that was developed
>because of experimental observation) they will *deduct* that
>it is necessary to have a central program which coordinates
>how much processor time is spent in each of the windows.
>The real value of such a deductive model is to be capable of
>giving us some *predictions*. One prediction of this abstract
>model is that if one opens too many windows, the speed of
>the program in each window will be reduced (because of
>the shared processor hypothesis). One can experimentally
>check this prediction, and if this doesn't correspond to
>what is measured, then the model is *wrong* and should be
>rejected.
>
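
A purely illustrative sketch (the round-robin assumption and the
numbers are hypothetical, not anything measured) of the "shared
processor" prediction above: if the hypothesised central program
divides processor time equally among open windows, each window's
share is 1/N, so every program slows as more windows are opened.

    # Toy equal time-slicing: each window's share of processor time
    # shrinks as more windows are opened.
    def share_per_window(n_windows, total_cpu=1.0):
        """Under equal time-slicing, each window gets total/N."""
        return total_cpu / n_windows

    for n in (1, 2, 4, 8, 16):
        print(f"{n:2d} windows -> {share_per_window(n):.4f} of the processor each")

    # If measured speeds did not fall roughly as 1/N, the
    # shared-processor model would be wrong and should be rejected.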

They may well *deduce* that if they're radical or evidential
behaviourists (there's a good chance that they will be if they've
managed to get this far). I'd guess they'd never have to consider such
notions in the first place, being extensionalists.

>The behaviorist aliens (a funny idea indeed...) will reject
>the notions of "operating systems", "time slices",
>"protected data spaces", "interrupt service routines",
>"high level languages", and a lot of other abstract concepts
>created by the cognitive aliens. They dismiss these ideas
>based solely on the fact that such things don't show directly
>as behavior, being just figments of a cognitive alien mind
>(sorry, brain).  However, these concepts are essential for
>anyone doing creative development in software engineering.
>Without entertaining such notions, one would hardly get a
>deep view of all one can do with software. In other words,
>if Bill Gates were a behaviorist, he would be selling orange
>juice on Fifth Avenue.

How do you know? If he said he was, would it make any difference?

>
>Although the software/hardware distinction is a bad analogy
>for the mind/brain distinction, the idea is to have different
>levels of analysis, provided that one obeys basic scientific
>practices at all these levels.
>
>Sergio Navega.
>
>
>P.S: As an aside, let me mention a passage that I find hilarious,
>although it is a bit against my prior argument. I read this as
>an introductory quote to a paper about the use of metaphors and
>analogical reasoning (and, of course, its misuse). This is the
>situation: a physicist was invited to give a speech to a group
>of simple farmers interested in improving the milk yield of
>their cows. Here's how the physicist started his speech:
>"Let's start by considering a spherical cow..."
>
>
>
I do understand your posts (although I clearly don't agree with what you 
write). It doesn't look like you've understood mine (I've edited the end 
of the one above to correct the sloppy writing in the last two 
sentences, but I suspect that won't make much difference). You don't 
*appear* to have understood what you've read by Skinner or Baum either. Do
you appreciate what's radical about radical (or evidential) 
behaviourism?

(PS. Do you appreciate that you are Ozkural's favourite net poster?)
-- 
David Longley


