human vs machine
ui22204 at sunmail.lrz-muenchen.de
Thu Aug 10 12:25:45 EST 1995
On 9 Aug 1995 jkinner at PROBLEM_WITH_INEWS_DOMAIN_FILE wrote:
> M. Ferber (Ferber at zoology.uni-frankfurt.de) wrote:
> : tgmk at aol.com (TGMk) wrote:
> : >Any fool can see that with enough elements one could make a brain
> : >out of non-biological materials. But how many would you need? That
Though you are right, I would not say "any fool can see". E.g. the
number of elements, the volume they occupy, and the power they would
dissipate if implemented in current or mid-future semiconductor
technology can render such implementations impossible, since the
amount of resources required is so large. (I am talking about
megawafers and several hundred megawatts of power dissipation here,
and a room-sized fluid-cooled hardware box.)
> : >is, how many neurons are in the (typical) human brain? We have an old
> : >biology book that says *at least* 100 billion but I'll bet there are >more.
Most books give figures between 10 and 15 billion.
> : >And how many connections do they each have? Of course, this is basic
I think the average divergence factor is about 10k; convergence values
of up to 100k are known. Or is my data obsolete?
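A quick back-of-envelope check of these figures (taking the 10-15 billion
neuron range and the average fan-out of 10k discussed in this thread; both
are the thread's numbers, not measurements):

```python
# Rough synapse-count estimate from the figures discussed in this thread.
# Assumed (from the post, not authoritative): 1e10 to 1.5e10 neurons,
# average divergence (fan-out) of about 1e4 synapses per neuron.
neurons_low, neurons_high = 10e9, 15e9
fan_out = 10_000

synapses_low = neurons_low * fan_out    # 1e14
synapses_high = neurons_high * fan_out  # 1.5e14

print(f"estimated synapses: {synapses_low:.1e} to {synapses_high:.1e}")
```

Note this lands around 10^14 synapses, an order of magnitude below the
10^15 figure quoted later in the thread; the inputs simply disagree.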
> : >AI stuff but I'm too lazy to go to the library. Pls send reply to
> : >TGMK at aol.com. Thanks!
> [Description of inset neurology deleted]
> : The problem mentioned by tgmk (whoever he or she is) is not simply
> : related to the number of neurones. The fact is that the connections
> : between the elements of any nervous system are specific. Furthermore each
> : element (neuron) within a nervous system is connected to many other
> : neurones. There are no simple 1 to 1 connections. For the human brain an
The elementary computational element of the brain is the synapse, not
the neuron body.
> : average of 1000 connections per neuron has been suggested. This together
> : with the 10 exp 12 (1.000.000.000.000) neurones makes 10 exp 15
> : connections or synapses. Is it really possible to put such a number of
> : elements together into a synthetic brain? I think it is not.
The mere number of computational elements is not the problem. E.g. the
U.S. telephone system, the largest machine mankind ever built, is bigger.
The problem is a relatively small hardware box, dissipating a modest
amount of power, containing that many switching elements. I'd say this
can't be done with semiconductors; it demands molecular circuitry.
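To put rough numbers on the packaging argument (using the 10^12-neuron
figure quoted above and the 0.25-0.5 Mtransistor-per-neuron budget this
post assumes further down; both are assumptions, not measurements):

```python
# Transistor budget for a naive semiconductor implementation.
# Assumed figures (from this post): 1e12 neurons, 0.25e6 to 0.5e6
# transistors to model each neuron.
neurons = 1e12
transistors_low = neurons * 0.25e6   # 2.5e17
transistors_high = neurons * 0.5e6   # 5e17

# For scale: a mid-1990s CPU had on the order of 1e7 transistors.
chips_needed = transistors_high / 1e7
print(f"{transistors_low:.1e}-{transistors_high:.1e} transistors, "
      f"~{chips_needed:.0e} chips of 1e7 transistors each")
```

Tens of billions of chips is why the post argues the bottleneck is
packaging and power, not the raw element count.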
> : Regards
> : Michael Ferber
> This is an interesting question. I think that, in theory, few people
> disagree that it would be possible to model the neurochemical system
> (in the worst case) that a brain uses to go about its business. This is
In fact, _most people_ disagree, even (neuro)scientists.
> NOT to say that we have the capabilities for this scale of modelling
> available at this time. Instead, I would suggest that some day such
> a model might be plausible.
You are absolutely right. The largest massively parallel supercomputers
currently do not exceed the computational equivalent of a Drosophila m.
> On the other hand, most people seem to forget that digital computers
> are a vastly different medium than wetware. Digital computers just
> aren't good at analyzing and simulating the analog signals involved
> in psychophysiology. Plenty of people may argue that "neurons only
If by "computers" you mean currently built algorithmic, strongly
von Neumann flavoured machines, you are completely right. However, this
is a brittle and weak way of modeling things, and this paradigm is currently
in the process of being abandoned (connectionist AI, evolvable hardware,
cellular automata machines, genetic algorithms, etc.).
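As a toy illustration of the connectionist style mentioned here: a single
hard-threshold perceptron learning the OR function from examples rather
than being explicitly programmed (the learning rate and epoch count are
arbitrary choices, not anything from the post):

```python
# Minimal perceptron: weights are adjusted from examples rather than
# programmed explicitly -- the non-von-Neumann flavour referred to above.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

OR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(OR)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in OR])  # learns OR: [0, 1, 1, 1]
```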
> give off spikes, which can be represented digitally." That's all
> well and good, but what about the other factors? What about cell
> physiology and neuromodulators and hormones that affect the spike
> rate? How are we to measure these parameters in a digital world?
I do think this can be reduced to a modest vector of few-bit
elements: e.g. spike travel delays, cell connectivity (a list
of edges), synapse sign and value, baseline modulation, etc.
If these things are known in detail, we will be able to model them,
even if it takes some resources (e.g. 0.25-0.5 Mtransistors for one neuron).
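A sketch of the kind of compact per-neuron record this paragraph has in
mind (field names and value ranges are illustrative assumptions, not an
established format):

```python
from dataclasses import dataclass, field

# Illustrative compact state for one modeled neuron, following the
# parameters listed above: spike travel delay, connectivity (edge list),
# synapse sign/value, and a baseline modulation term.
@dataclass
class NeuronState:
    spike_delay_ticks: int                        # few-bit axonal delay
    edges: list = field(default_factory=list)     # indices of target neurons
    weights: list = field(default_factory=list)   # signed few-bit synapse values
    baseline: int = 0                             # slow, hormone-like offset

n = NeuronState(spike_delay_ticks=3, edges=[17, 42], weights=[+2, -1])
print(n.spike_delay_ticks, n.edges, n.weights, n.baseline)
```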
> I would answer that we shouldn't. If machines will think, it will be
> it their own unique way. There is no way to sequence the DNA of a
Your gene set and your CNS connectivity are vastly different from mine.
Yet I do not think you are thinking in some way mysteriously different
from my own. Do not fall prey to the old mind-body fallacy. The
mind does not know anything about the hardware it runs on.
> computer, so why should we assume that they will think the same way
> that we do? Hell, they can even do MATH better than we can. :)
An idiot savant can perform drastically better than the average human.
The ability to twiddle numbers is a coding and processing matter,
far removed from the implementation level. We were not evolutionarily
tailored to do numerics; that is why we have such trouble doing it.
> And remember, a good algorithm is worth an infinite amount of space.
> Can anybody remember all the whole numbers? Doubtful. But I'm sure
> most of you know the tried-and-true algorithm x_n+1 = x_n + 1.
> Some problems may not be reducible in this nice way, but who knows
> for sure?
Paper-and-pencil maths is a model, not the reality. Reality sometimes
does things in untidy, dirty ways. Math would be (almost) worthless
were there no computers grinding out numerical optimization.
The most brilliant mathematicians of the past spent decades of their
lives as computers in the original sense of the word: persons
doing calculations. Now we have Maple and Mathematica, doing
in minutes what once took centuries.
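The successor rule quoted above (x_n+1 = x_n + 1) really does trade
infinite storage for a constant-size algorithm; a minimal sketch:

```python
from itertools import count, islice

# The quoted rule x_n+1 = x_n + 1: no table of "all the whole numbers"
# is stored; a constant-size algorithm generates as many as needed.
def whole_numbers():
    return count(0)

first_five = list(islice(whole_numbers(), 5))
print(first_five)  # [0, 1, 2, 3, 4]
```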
> -Jason Kinner
> (jkinner at omni.voicenet.com)