
machine brains

Michael Edelman mje at mich.com
Fri Feb 12 08:54:04 EST 1999



Joe Kilner wrote:

> ...
> >Hm. Well, one function of a model would be to try to understand how
> >consciousness arises. But then, would we be able to recognize it?
> >
>
> No, but if we make a perfect replica of a human brain (and it worked as we
> expected it to) then our inference that it was conscious would be every bit
> as (un?)justified as our inference that other humans are conscious.

I'm not sure I accept that, for the reasons we've been discussing:

> >Wittgenstein says that you can't have language without a community of
> >speakers exchanging symbols about a common experience - so if you have one
> >smart machine, how would it tell you it was?
> >
>
> Exactly the point I am making elsewhere in this thread.  Without a common
> experience we are talking about a "private language", and as no one can
> check that you are using this language correctly it is, technically,
> meaningless.

I think we have a lot of agreement here, which leads me to ask:

Even if we build an atom-for-atom replica of a human brain, how do we know it's
conscious unless we give it a human sensory system, let it hang around for a
dozen years, and then ask it if it's conscious? Does the program arise
automatically from the hardware?

And we also have the question of whether private languages exist.

> >>  We can replicate human brains without necessarily understanding
> >> *how* they work, and through this replication may gain more insight
> >> than we ever could through direct study.
> >
> >How do you suppose we do that? We can't even replicate a human heart, a
> >much simpler device, despite 50 years and countless millions of dollars.
> >You have to understand all the functions of the brain to replicate it.
> >We're just now learning about the function of nitric oxide as a wide-area
> >neurotransmitter. There may be other communication modes in the brain we
> >have yet to understand.
> >
>
> I am talking in terms of possibilities here, not practicalities.  I am saying
> that you do not have to understand all the functions of a brain to replicate
> it - just the physical ones.  My implication here is that there are
> functions arising due to the network aspects of the brain which we do not
> need to understand in order to replicate (in the way that you do not need to
> understand C++ code to copy it out).

While you can, at one level, replicate a program without understanding it, you
*still* need to know what's salient and needs to be copied. If I hand a naive
alien a FORTRAN listing from 1971, he may still wonder whether the green bars
on the paper are part of the program, or the tractor holes punched in the
margins, etc.

Copying that C++ program still assumes the copier knows a lot about the
representation: a naive copier may "correct" the punctuation based on their
notions of what constitutes correctness (see the sketch below), just as
someone copying a brain doesn't know, for example, whether they're copying
salient features, or whether they're copying standard structure or pathology.
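
To make that concrete, here's a minimal sketch (a hypothetical fragment of my
own, not code from anywhere in this thread) of how a copier's well-meaning
"fix" can silently change what a program does:

    /* The assignment inside the condition below is deliberate. A naive
       copier who assumes it's a typo and "corrects" it to a comparison
       produces a copy that compiles just as cleanly but behaves
       differently. */

    #include <iostream>

    int main() {
        int x = 5;
        if ((x = 3))            // deliberate: x becomes 3, the branch runs
            std::cout << x;     // prints 3
        // The "corrected" copy, if ((x == 3)), leaves x at 5 and prints
        // nothing at all.
        return 0;
    }

Both versions look equally plausible on the page; only someone who already
knows which features are salient can tell which copy is the faithful one.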

In the case of the artificial heart, the general approach has always been to
model the heart as a pump, but the last round of human trials showed that
being a pump isn't enough: there are all sorts of regulatory mechanisms
embedded as well. We thought we had a complete functional analysis of the
heart, but we were really very far from that point.

Or, going back to the computer comparison: if we hand a transistor or a TTL
chip to a 1920s EE, he may analyze the device in terms of terminal-to-terminal
inductance, resistance, and capacitance without ever realizing there's another
level of functionality embedded in there.

> I agree - without understanding the
> physical aspects of the brain we cannot replicate it - but it is possible
> to understand the brain completely at a physical level (maybe not in the
> near future) and so it is possible to replicate the brain, potentially
> without a full understanding of the "software".

As you see, I'm arguing that what we might conceive of as naive copying of the
hardware still involves assumptions about function.

--
Michael Edelman     http://www.mich.com/~mje




