evolutionary and computational theories of mind

yan king yin (dont spam) y.k.y at lycos.com
Mon Jan 21 03:29:05 EST 2002


"C.J.L. Wolf" <C.J.L.Wolf at ncl.ac.uk>

> > Why should there be computational circuits in the brain? Computers have
> > only been around for 50 years; might nature not have given us
> > something slightly more honed? The computational theory makes the
> > very big supposition that the outputs we get from our minds are the
> > result of logical operations upon the sensory stuff that went in.
> > However, the output only seems *right* because everyone's is generally
> > the same or similar. If everyone behaved totally differently then
> > that would be regarded as the *right* output too.
> >
> I think this is rather a misconception of what computational neuroscience
> actually is. Few if any computational theorists would argue that the brain
> operates in the binary, intensely logical way that a computer does; rather,
> computational neuroscience recognises that brains essentially perform
> operations on information - and tries to work out what those operations
> must be.
>
> For example:
>
> If you are in a darkened room, looking at a blue object, does
> it appear blue because it reflects more short wavelength light than long /
> medium wavelength light? Or could it be that the light shining on it is
> blue?
>
> Computational neuroscience first defines what information is available to
> your brain as it tries to resolve the ambiguity. For example, it may be
> that there is a little mirror-like reflection on the object (a
> specularity). There are other cues too - but I'm just going to talk about
> this one.
>
> You then describe the algorithm: look at the reflection and check that it
> is a specularity (it should be bright and have fuzzy edges, and if you
> focus on it, the rest of the object will be out of focus). If it is blue,
> then the lighting is blue, and the object should probably appear greyer
> than first assumed. But if the reflection is white, then the object should
> be blue.
>
> The final stage is to implement the algorithm. You could easily do it on a
> computer; that certainly isn't to imply your brain would implement it in
> the same way, if indeed it implements it at all. But you could look at how
> well the computer algorithm performs and compare that to a human
> observer's performance - this is more the realm of psychology. At the
> very least, if we find that the algorithm doesn't work in its computer
> implementation, this tells us that we got the algorithm wrong, or at least
> that it is incomplete.
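
To make the quoted algorithm concrete, here is a rough sketch in Python of
how the specularity cue might be coded up. The toy RGB representation, the
thresholds, and the function names are all my own assumptions for
illustration, not anything taken from Marr or from a real model of colour
vision; the step of verifying that the highlight really is a specularity
(bright, fuzzy-edged, out of the object's focal plane) is skipped here.

    # Toy illustration of the specularity cue: use the colour of a
    # specular highlight as an estimate of the illuminant, then decide
    # whether a blue-looking object is really blue.
    # Colours are (R, G, B) tuples in [0, 1]; thresholds are arbitrary.

    def is_bluish(rgb, margin=0.1):
        """True if the blue channel clearly dominates red and green."""
        r, g, b = rgb
        return b > r + margin and b > g + margin

    def is_neutral(rgb, tolerance=0.1):
        """True if all three channels are roughly equal (white/grey)."""
        r, g, b = rgb
        return max(r, g, b) - min(r, g, b) < tolerance

    def interpret_object_colour(object_rgb, specular_rgb):
        """Attribute a blue appearance to the surface or the illuminant."""
        if is_neutral(specular_rgb):
            # White highlight: the illuminant is roughly white, so the
            # blue appearance belongs to the surface itself.
            return "object is blue"
        if is_bluish(specular_rgb):
            # Blue highlight: the illuminant is blue, so the surface is
            # probably greyer than it first appears.
            return "lighting is blue; object is probably greyish"
        return "ambiguous - would need the other cues"

    # Example: the object looks blue, but so does its highlight, so the
    # blue tint is attributed to the lighting.
    print(interpret_object_colour((0.3, 0.35, 0.7), (0.5, 0.55, 0.9)))

The point is only that, once stated this precisely, the algorithm can be run
and its failures compared with human judgements, exactly as described above.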

I think another important thing about computational neuroscience is that it
should stick close to biology. It's no use just inventing arbitrary neural
networks that are not based on anatomy. It would also be helpful to do
experiments on the brain directly, in order to rule out some possibilities
before modeling.

> For a good introduction, try David Marr's book, 'Vision'. I believe that
> he also discussed evolutionary reasons why modularity is desirable. The
> argument centered on being able to make changes module by module - without
> the degree of finesse that this bestows, it would be very difficult to
> change one system (e.g. vision) without also changing other systems (e.g.
> the auditory system), which could very well be counterproductive.
>
> As an example, I heard that there is a sort of sheep that was bred in
> Yorkshire to have short legs, to save work in building high stone walls to
> keep them in. If different genes governed the development of front legs
> and back legs (or even worse - left and right legs - though this is the
> case for haggis, of course), then some rather awkward animals could have
> resulted, so it's just as well that leg development is pleiotropic. On the
> other hand, if one also wanted this sheep to have long ears so it could
> hear predators from far away, and if ear length were governed by the genes
> for leg length, then it would be very difficult to breed sheep with short
> legs and long ears, even though in the environment of a Yorkshire dale
> this was desirable.
>
> KW

Your argument is nice, but there might be exceptions to its conclusion. In
general it is extremely difficult to make arguments from evolution, because
of its many subtleties.

Do you think there is a specific circuit for language? If so, what does it
encode? I think language is just the result of some basic computational
mechanisms -- see my other post.

If the brain has too many specific circuits, then its behavior will become
very inflexible. Flexibility is one advantage of a generic mechanism.





