Michael Edelman writes:
> > Yes! And you can add ephaptic and hormonal effects. I say we must work with
> > what we have. If you insist on the life history of every molecule you are a
> > latter day Luddite.
> That's rather a straw man argument. I do not insist on the role of every
> molecule- but I insist on a model that takes into account the function of every
> salient feature.
While an MD-level simulation of the brain is no small beer even with
hitherto hypothetical molecular circuitry, one should point out that
(short) billion-atom MD simulations were already feasible in 1995. That's
not too far from a cubic micron. However, femtoseconds are not milliseconds...
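The femtosecond/millisecond gap is easy to quantify with a back-of-envelope sketch (the figures below are order-of-magnitude assumptions, not taken from any particular simulation):

```python
# Rough scale estimate: how many MD integration steps does one
# millisecond of simulated time require at femtosecond resolution?
fs_per_step = 1e-15   # assumed MD timestep, ~1 femtosecond
target_s = 1e-3       # one millisecond of simulated time
steps = target_s / fs_per_step
print(f"steps needed: {steps:.0e}")   # -> steps needed: 1e+12
```

A trillion timesteps per simulated millisecond, before even counting the per-step cost of a billion-atom system, is why the timescale gap matters more than the atom count.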
> Earlier I cited the failure of artificial hearts built on models of the heart
> as a pump, that failed to take into account the feedback mechanisms that
> regulate heart function. An incomplete model of the brain will not yield a
> complete model of its function.
> Your model may work for the mechanistic model of mind you propose, one that has
> no place for consciousness, but it may not be complete enough to model aspects of
'Consciousness' is a high-level description of a large number of
low-level phenomena. There is nobody else at home.
> mind that many of us think are central to the brain's purpose.
A brain has no purpose. It is an evolved, not a designed structure.
> Why not? You've got perhaps 2x10^11 cells in the brain, give or take a factor of
> 2x10^2, which is certainly not an inconceivable number to realize in hardware,
> given that today's chips have something like 6x10^6 discrete devices, and
'Discrete devices' as in 'transistors'. Ludicrous comparison. How many
million transistors do you need to represent a single neuron?
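To put numbers on why the comparison is ludicrous, here is a rough sketch using the figures from the thread (2x10^11 neurons, 6x10^6 transistors per chip) plus an assumed, purely illustrative cost of 10^6 transistors to emulate one neuron:

```python
# Order-of-magnitude comparison; the transistors-per-neuron figure
# is an assumption for illustration, not an established number.
neurons = 2e11
transistors_per_chip = 6e6
transistors_per_neuron = 1e6   # assumed emulation cost
chips_needed = neurons * transistors_per_neuron / transistors_per_chip
print(f"chips needed: {chips_needed:.0e}")   # -> chips needed: 3e+10
```

Tens of billions of chips, under these assumptions, is a long way from "not that far off."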
> fine-grained processors are being designed with 10^4 or 10^ processing elements,
> each of which can model another 10^6 or more virtual elements. So we're not that
> far off.
Far enough so that you can't implement it in semiconductor photolithography.
> What I'm objecting to here is your conception of the brain as a device with a
> very predictable, top-down sort of structure. Of course my central issue here is
> your rejection of mind, putting you solidly in the behaviorist/positivist
The word 'mind' is not very meaningful. It smacks too strongly of 'soul'.
> school. You're seeking to build (metaphorically, lest you think I'm tying this
> to hardware) a brain that to me is just an automaton, with a rather large parts
> count. What do you hope to explain with such a model? What is the purpose of
The number of parts alone is not very meaningful (though states^number
grows very large very quickly), but then there is the complex energy
landscape (the system Hamiltonian). Taken together, I wouldn't call this a mere automaton.
> your model, and how does it differ from an ordinary computer, apart from the
There is no such thing as an ordinary computer, nowadays. Computers
control military systems, recognize faces, drive a car from coast to
coast. With evolvable hardware, there's no telling what they are going
to be able to do tomorrow.
> We seem to have an irreconcilable division here. Every time I argue for the
> brain as something more than a deterministic automaton, or argue for the
> consideration of commonly accepted phenomena such as self-awareness, you accuse
> me of being a Luddite or making an appeal to metaphysics.
> That's silly. We're all self-aware. You aren't an automaton. Who am I debating
> with? What are dreams?
This is silly. Self-awareness and dreams are high-order phenomena. You
don't see turbulence in the equations of an MD-simulated liquid, yet it
emerges spontaneously. One could argue there's nothing but neurons spiking,
and everything else is devil's handiwork.
> To equate soul and mind is to claim questions about mind are metaphysical ones-
> but they're not. One can investigate the nature of consciousness through
> controlled and repeatable experiments. You can't do that with the soul. *That's*
> metaphysics. Unless you accept the reality and reliability of self-report, you
> can't even do the kind of research program you're talking about.
> What's your data? Suppose you've got a ton of really good single-unit records
> for every neuron in the brain, and you can trace activity all the way from retina
> to cerebral cortex and every path along the way. What have you got? Nothing,
> unless there's a correlation to subjective experience. You have a machine, and
> machines, as far as we know, don't ask questions about the nature of other
> machines. And that means you're not a machine, either.
What you're practising here has a long tradition in the Western
school. It is called sophism.
> You'll never explain brain without explaining mind. Can you describe the
Uh, isn't this the other way round?
> function of a computer in the absence of the existence of any software?
Yes, but the description would not be very meaningful. However, in
theory I could specify the entire state transition, which would
describe the system exhaustively. In theory.
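The "exhaustive state transition" idea can be illustrated with a toy, entirely hypothetical machine (the state and event names are made up); a real computer's table would of course be astronomically larger:

```python
# A toy machine described exhaustively by its state-transition table.
# Illustrative only: three states' worth of behavior, fully specified.
transitions = {
    ("idle", "start"): "running",
    ("running", "tick"): "running",
    ("running", "halt"): "idle",
}

def step(state, event):
    """Look up the next state; the table IS the machine's description."""
    return transitions[(state, event)]

state = "idle"
for event in ("start", "tick", "halt"):
    state = step(state, event)
print(state)   # -> idle
```

For this toy, the table really does describe the system exhaustively; the catch, as below, is that the table's size explodes with the machine's state.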
> Here's a little gedanken experiment: Suppose no software exists, but computers
> do. (Never mind why this would be the case). We have all these computers, and
> they're all fully described, with every possible state mapped and documented.
> Have you explained the computer? No.
You've become trapped in Searle's Chinese room. You _can't_ document
every possible state and transition of a meek desktop PC, there are
simply not enough atoms in all alternative universes to encode it.
> > 2^(10^9) states is an absurdly large number, as you know.
In this gedanken, you would have covered the space of all possible
programs, quite a feat.
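The arithmetic behind that claim, sketched with the standard rough estimate of ~10^80 atoms in the observable universe:

```python
# Compare, in log10, the configurations of a machine with 1e9 bits of
# state against the atom count of the observable universe (~1e80,
# a standard order-of-magnitude estimate).
import math

bits = 1e9
log10_states = bits * math.log10(2)   # ~3.0e8 decimal DIGITS
log10_atoms = 80                      # the atom count itself is 10^80
print(f"states ~ 10^{log10_states:.2e}, atoms ~ 10^{log10_atoms}")
```

Writing out the state count would take roughly 3x10^8 decimal digits; the universe's atom count has 81. Even one atom per *digit* of the table, let alone per entry, is hopeless.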
> If you could come up with a complete diagram of one brain, and a table of all
> possible states of that brain, do you have an explanation of that brain? No.
Yes, of course. With that information, you could predict anything
about that particular brain. Which should be 'explanation' enough.