"casey" <jgkjcasey from yahoo.com.au> wrote in message
news:c9a82cfa-ef63-4420-96a7-8ef5d9132bfd from d21g2000prf.googlegroups.com...
On Mar 8, 8:13 am, "Glen M. Sizemore" <gmsizemo... from yahoo.com> wrote:
> Below is a link to a very cool paper. Whether the simple network on
> page 334 is "correct" or not, the flavor of the paper foreshadows, I
> think, the future of psychology, neuroscience, and artificial
> intelligence, all rolled into one. The fields are mutually
> complementary and, I think, there is no other way.
>>http://www.pubmedcentral.nih.gov/picrender.fcgi?artid=1284800&blobtyp...
JC: The other way might be inventing a machine that behaves
intelligently, just as we invented flying machines without
the need for flapping wings or feathers.
GS: Be my guest.
JC: When I tried to talk about simple networks you dismissed them,
saying, in essence, that you weren't interested because they
didn't cover conditioning in all its complexity.
GS: The difference is that your description had no complexity. You were
going on about simulating only the definitional properties of conditioning.
When I was playing around with neural nets I did that in the first 3
minutes. Donahoe's group has gotten the network to do many things, and now
they have simulated revaluation.
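
For concreteness, here is a minimal Python sketch (an illustration only,
not Donahoe's selection network) of what "simulating only the definitional
properties of conditioning" amounts to: a single delta-rule unit in which
pairing a CS with a US drives associative strength toward an asymptote.

# Minimal sketch only (not Donahoe's selection network): bare acquisition,
# i.e. the definitional property that pairing a CS with a US strengthens
# the conditioned response, modeled as a single delta-rule unit.

def acquire(pairings=20, learning_rate=0.3, asymptote=1.0):
    strength = 0.0                      # associative strength of the CS
    history = []
    for _ in range(pairings):
        error = asymptote - strength    # prediction error on this pairing
        strength += learning_rate * error
        history.append(strength)
    return history

for trial, v in enumerate(acquire(), start=1):
    print("trial %2d: CS strength = %.3f" % (trial, v))
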
JC: Taken from the above cool paper.
"An artificial neural network need not incorporate all
potentially relevant information from neuroscience, only
the minimally necessary constraining and enabling features
to accommodate the behavioral relations being simulated."
"... artificial neural networks are constrained by a subset
of the relevant biobehavioral principles, precisely the
subset that permits the phenomena of interest to be simulated."
GS: The key term is "phenomena of interest." Who would be interested in a
network that merely simulated the definitional properties of conditioning?
Does it show faster reacquisition than acquisition? Blocking? Overshadowing?
Sensory preconditioning? Higher-order conditioning? Generalization
gradients? Peak shift? Fading? Spontaneous recovery? There are still a lot
of things that Donahoe's people haven't shown. I would like to see them take
on schedule effects. But, still, they have accomplished a lot. And, of
course, they have carried forth the notion that the facts uncovered by
behavior analysis are what must be accounted for. They, however, at least
in their papers concerning the network, have not made the case that
behavioral principles explain all of the complex behavioral phenomena we
see in humans; that case has already been made elsewhere.
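
To make one item on that list concrete, here is a rough Python sketch of
blocking using the textbook Rescorla-Wagner rule (again an illustration,
not Donahoe's network): cue A is reinforced alone first, and when the AB
compound is then reinforced, B gains little strength because A already
predicts the US, whereas a control given only the compound splits the
strength between the two cues.

# Blocking illustrated with the Rescorla-Wagner rule (illustration only).
def rescorla_wagner(trials, strengths, alpha=0.3, asymptote=1.0):
    # Each trial is the set of cues present; all present cues share the
    # prediction error left after their combined prediction of the US.
    for cues in trials:
        error = asymptote - sum(strengths[c] for c in cues)
        for c in cues:
            strengths[c] += alpha * error
    return strengths

# Blocking group: A+ pretraining, then AB+ compound trials.
blocked = rescorla_wagner([{"A"}] * 20 + [{"A", "B"}] * 20,
                          {"A": 0.0, "B": 0.0})
# Control group: AB+ compound trials only.
control = rescorla_wagner([{"A", "B"}] * 20, {"A": 0.0, "B": 0.0})

print("B after A pretraining (blocked): %.3f" % blocked["B"])
print("B with no pretraining (control): %.3f" % control["B"])
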
JC: It seemed to be about conditioned aversion, the so-called
Garcia effect, which I have read about in an easier-to-read
and less boring style than the "cool paper" you refer to.
GS: Conditioned taste aversion was used to show that one could weaken an
operant response by operations that involve only the reinforcer. CTAs are
not an essential part of revaluation; they are merely what was used in an
experiment.
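
A toy sketch of that experimental logic in Python (again, not the paper's
network): assume the operant response is driven by a learned
response-outcome association multiplied by the outcome's current value.
Devaluing the outcome alone, as a taste-aversion pairing does, then
weakens responding without any further training of the response itself.

# Toy sketch of reinforcer devaluation (illustration only, not the paper's model).
association = 0.9        # response->outcome association learned during training
outcome_value = 1.0      # current value of the reinforcer

print("response strength before devaluation: %.2f" % (association * outcome_value))

outcome_value = 0.1      # operation on the reinforcer only (e.g. a CTA pairing)
print("response strength after devaluation:  %.2f" % (association * outcome_value))
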
JC: Maybe you can explain it, in less technical terms than the
paper, so that the views and mechanisms of this neural network
model might find a wider audience?
GS: Seems reasonably straightforward to me.