
machine brains

Michael Edelman mje at mich.com
Mon Feb 22 09:17:09 EST 1999



Eugene Leitl wrote:

>  > Your model may work for the mechanistic model of mind you propose, one that has
>  > no place for consciousness, but it may not be complete enough to model aspects of
>
> 'Consciousness' is a high-level description of a large number of
> low-level phenomena. There is nobody else at home.

This is a common thread in this debate, i.e., that apparent higher-level phenomena are
simply the result of summing lower-level physiological phenomena. As an argument it
would be more convincing if someone had actually produced a mechanistic theory of human
language production, which hasn't happened. It asserts that we can forget about
high-level phenomena, since once we have a solid understanding of the low-level
phenomena the high-level phenomena will be clear. That argument needs better motivation
than simple assertion.

There's something else that goes on when you interconnect a large number of simple
elements. Using another example of yours, you can't predict a tornado by looking at a
few molecules of O2, N2, CO2 and other gases. And you cannot predict the behavior of
the brain by looking at a single brain cell, or even a dozen of them. Taking another
well-worn example, large colonies of simple organisms like ants or slime mold display
what appears to be purposive behavior that is not contained within any one of the
organisms. Does that mean that a large enough ant colony has some of the properties I
associate with mind? Perhaps.

>  > mind that many of us think are central to the brain's purpose.
>
> A brain has no purpose. It is an evolved, not a designed structure.

Purpose does not imply a role assigned by some outside agent. We could say the brain's
"role". But in saying purpose I am making clear that the brain has a specific reason
for existing, whether determined by evolution or whatever. The brain is not just a
piece of tissue that is accidentally connected to the rest of the organism. It has a
functional role, and the interdependence of the body and the brain is in that sense
purposeful. In that sense, the evolution of consciousness is an important factor in the
brain's evolution.

>  > Why not? You've got perhaps 2x10^11 cells in the brain, give or take a factor of
>  > 2x10^2, which is certainly not a number inconceivable of realizing in hardware,
>  > given that today's chips have something like 6x10^6 discrete devices, and
>
> 'Discrete devices' as in 'transistors'. Ludicrous comparison. How many
> million transistors do you need to represent a single neuron?
>
>  > fine-grained processors are being designed with 10^4 or 10^ processing elements,
>  > each of which can model another 10^6 or more virtual elements. So we're not that
>  > far off.
>
> Far enough so that you can't implement it in semiconductor photolitho.

Assume I'm talking theory, not trying to raise investment capital. ;-) I was responding
to a remark of Ray's, who termed the idea of building a brain nonsensical. I was
pointing out that if the brain is simply a machine in the sense that he assumes, we
shouldn't be that far off from building one. Give it another couple of decades.

>  > What I'm objecting to here is your conception of the brain as a device with a
>  > very predictable, top-down sort of structure. Of course my central issue here is
>  > your rejection of mind, putting you solidly in the behaviorist/positivist

> The word 'mind' is not very meaningful. It smacks too strongly of 'soul'.

Well, we have a problem here in that I'm sort of a mentalist, and you and Ray et al are
firmly in the materialist school, and you keep accusing me of being a dualist ;-) The
physicalists keep arguing for mind as either a collection of behaviors or just a
meaningless term, and I argue that "mind" is the only interesting thing about the
brain.

I started out in physiological psych as a positivist and moved into cognitive because I
saw the fallacies of positivism (ref. Ayer's "Language, Truth and Logic") and, more
importantly, because I was convinced that mind was not only something worth studying, it
was the point of psychology. In fact, "mind" was indeed what psychologists studied prior
to the behaviorist invasion ;-)

>  > school. You're seeking to build (metaphorically, lest you think I'm tying this
>  > to hardware) a brain that to me is just an automaton, with a rather large parts
>  > count. What do you hope to explain with such a model? What is the purpose of
>
> The number of parts alone is not very meaningful (though states^number
> grows very large very quickly), but then there is the complex energy
> landscape (system Hamiltonian). Taken together, I wouldn't call this a
> simple automaton.

That's an important point- that the number of possible states does indeed grow
exponentially, and most importantly (I would argue) the number of connections and
pathways grows exponentially. Comparing it to a favorite model for complexity, if you
choose to model the brain on the physical level, you have the same problems involved in
modeling the weather; unless you model every particle on a one-to-one basis, you can't
encapsulate all the possible states.
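
To make the scale of that problem concrete, here is a toy back-of-the-envelope sketch
in Python (my own illustration, nothing from the physiology): even if each element is
reduced to a bare on/off unit, a drastic oversimplification of a neuron, the joint
state count is 2^n, and merely writing that number down becomes unmanageable long
before n reaches the cell counts quoted above.

  import math

  def pairwise_connections(n: int) -> int:
      """Number of possible undirected pairwise connections among n elements."""
      return n * (n - 1) // 2

  def digits_in_joint_states(n: int) -> int:
      """Decimal digits needed just to write down 2**n, the joint state count
      of n binary (on/off) elements."""
      return math.floor(n * math.log10(2)) + 1

  for n in (10, 1_000, 200_000_000_000):   # last value ~ 2x10^11, as quoted above
      print(f"n = {n:>15,}: connections = {pairwise_connections(n):,}, "
            f"digits in 2**n = {digits_in_joint_states(n):,}")

The weather analogy survives the arithmetic: the table of states is not merely large,
it is unenumerable in practice.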

>  > your model, and how does it differ from an ordinary computer, apart from the
>  > size?
>
> There is no such a thing as an ordinary computer, nowadays.

By "ordinary computer" I meant any common instatiation of a deterministic,
state-variable machine, whether single-threaded or otherwise.

> Computers
> control military systems, recognize faces, drive a car from coast to
> coast. With evolvable hardware, there's no telling what they are going
> to be able to do tomorrow.

Fallacy of extension. (Is that a real fallacy? ;-) Indeed there's no telling, until
someone does it, but you're listing problems that have describable, iterative
solutions. And that's where the heart of the debate seems to be. I would certainly
think that behavior I would term intelligent is not beyond the ability of some future
computer (and perhaps some present ones), but I think the approach taken by some of the
materialists just defines the problem down to an uninteresting one.

>  > We seem to have an irreconcilable division here. Every time I argue for the
>  > brain as something more than a deterministic automaton, or argue for the
>  > consideration of commonly accepted phenomena such as self-awareness, you accuse
>  > me of being a luddite or making an appeal to metaphysics.
>  >
>  > That's silly. We're all self-aware. You aren't an automaton. Who am I debating
>  > with? What are dreams?
>
> This is silly. Self-awareness and dreams are high-order phenomena. You
> don't see turbulence in the MD liquid equations, yet they emerge
> spontaneously. One could argue there's nothing but neurons spiking,
> everything else is devil's handiwork.

But self-awareness is the heart of mind. Some people in this debate seem to want to
stop at some level of neural activity and call that mind, without accepting that
patterns of neural activity are anything more than patterns.

>  > To equate soul and mind is to claim questions about mind are metaphysical ones-
>  > but they're not. One can investigate the nature of consciousness through
>  > controlled and repeatable experiments. You can't do that with the soul. *That's*
>  > metaphysics. Unless you accept the reality and reliability of self-report, you
>  > can't even do the kind of research program you're talking about.
>  >
>  > What's your data? Suppose you've got a ton of really good single-unit records
>  > for every neuron in the brain, and you can trace activity all the way from retina
>  > to cerebral cortex and every path along the way. What have you got? Nothing,
>  > unless there's a correlation to subjective experience. You have a machine, and
>  > machines, as far as we know, don't ask questions about the nature of other
>  > machines. And that means you're not a machine, either.
>
> What you're practising here has got a long tradition in the Western
> school. It is called sophism.

That is rather beneath you, I think. There's a critical question to be answered in the
above, and one I'll return to later. It's central to Wittgenstein's argument about
meaning and language.

>  > You'll never explain brain without explaining mind. Can you describe the
>
> Uh, isn't this the other way round?

No. Can you describe the function of a muscle without first understanding what a muscle
does? Unless you understand that the "purpose" of a particular muscle is to move a
limb, you can only describe the muscle in terms of physiology. But that tells us
nothing about why the muscle exists - and again, "why" requires nothing more than
evolutionary mechanisms to justify it. No need to drag in a creator.

Similarly, how can you describe the design of the brain unless you first have a
functional description? The medievals thought that the mind resided in the heart. That
strongly influenced their understanding of the structure and function of both the heart
and the brain. If you believe that mind is epiphenomenal, then that is going to dictate
the kinds of experiments and theory you construct around the brain.

>  > function of a computer in the absence of the existence of any software?
>
> Yes, but the description would not be very meaningful. However, in
> theory I could specify the entire state transition, which would
> describe the system exhaustively. In theory.

The question is then whether a table of all possible states and transitions of the
brain would be a description of the brain. As a materialist you'd have to say it was,
and I'd have to say it wasn't, for it fails to encapsulate the meaning associated with
the various states.

>  > Here's a little gedanken experiment:  Suppose no software exists, but computers
>  > do. (Never mind why this would be the case). We have all these computers, and
>  > they're all fully described, with every possible state mapped and documented.
>  > Have you explained the computer? No.
>
> You've become trapped in Searle's Chinese room. You _can't_ document
> every possible state and transition of a meek desktop PC, there are
> simply not enough atoms in all alternative universes to encode it.
> >>2^10^9 states is an absurdly large number, as you know.

I think this is a different problem than the Chinese room - assume that you *could* map
a computer; have you explained it? Yes. Anyways, it's possible to build simple
computers for which every state and transition *can* be mapped.
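
For contrast, here is a minimal sketch in Python of the kind of simple computer I mean
(a hypothetical three-bit machine of my own invention, not anything anyone has built):
its complete state/transition table fits in a few lines.

  from itertools import product

  STATES = range(8)                 # a 3-bit machine: 8 possible states
  INPUTS = ("inc", "reset")         # its entire input alphabet

  def step(state: int, inp: str) -> int:
      """Transition function: increment modulo 8, or reset to zero."""
      return 0 if inp == "reset" else (state + 1) % 8

  # Exhaustively enumerate every (state, input) -> next_state transition.
  table = {(s, i): step(s, i) for s, i in product(STATES, INPUTS)}
  for (s, i), nxt in sorted(table.items()):
      print(f"state {s}, on {i!r} -> state {nxt}")

  print(len(table), "transitions: the whole machine, fully mapped")

The open question is whether that table, scaled up, counts as an explanation of the
machine or merely as a description of it.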

> In this gedanken, you would have covered the space of all possible
> programs, quite a feat.
>
>  > If you could come up with a complete diagram of one brain, and a table of all
>  > possible states of that brain, do you have an explanation of that brain? No.
>
> Yes, of course. With that information, you could predict anything
> about that particular brain. Which should be 'explanation' enough.

Is predicting states the same as understanding the meaning associated with those
states? Let me give one example, the problem of qualia, and relate it to this issue.

Suppose I'm blind, and I ask someone "what color is the sky?" and I get the answer "the
sky is blue". Every time I ask this person the question, I get the same answer. Do I
understand "blue"?

--
Michael Edelman     http://www.mich.com/~mje
Telescope guide:    http://www.mich.com/~mje/scope.html
Folding Kayaks:     http://www.mich.com/~mje/kayak.html
