Capacity of the brain

TTK Ciar ttk at for_mail_remove_this_and_fruit.larva.apple.shinma.org
Tue Aug 31 00:54:46 EST 1999


In article <37ca6624 at redeye.it.ki.se>,
Claes Frisk <Claes.Frisk at psyk.ks.se> wrote:
>Are there any reasonable estimations of the capacity(speed, memory,...) of
>the brain in computer terms?
>I realize that the two can't be directly compared, but I guess that it's
>been done a number of times anyway.

  This is all "as far as I know", and I'm a computer professional, not a 
neurologist (though I've read several books on the subject), so please take
what I say with a grain of salt.

  Additionally, it is possible that my information is out of date, and 
it is my hope that if I use outdated information in this post, I will be 
corrected by a "real" professional neurologist.  As such, it should be 
understood that I do not so much present this information as authoritative, 
but rather as the best information that I personally possess.  It is not my 
intention to spread misinformation, and I would welcome corrections from 
those who know more about it than I do.

  That being said, there are things complicating treatment of the question:

  * Some low-level details of the neuron and synapse necessary for knowing 
the answer to the question are not currently known to medical science.

  * The capabilities of an information system are not necessarily equivalent 
to the sum of its parts, nor even necessarily readily ascertainable by 
examination of its parts.

  * The terminology used to describe the capabilities of a computer system 
implies ambiguous relationships, both between the informational complexity 
of that system and its theoretical capability, and between its theoretical 
capability and its expected capability.

  * What we think we do know about how neurons and synapses work is not 
readily comparable to how modern computer systems work, though it might be 
possible to compare them strictly on the basis of informational complexity.

  * Not least, there are very few individuals authoritative on both computer
science and neuroscience.  Thus we have to rely on information scientists 
who do not have professional understanding of neuroscience and on 
neuroscientists who do not present their findings in terms readily treated 
by the techniques of formal information science.


  It is *believed* that synapses convey data in binary format, ie they can 
fire or not fire, and the only information a neuron possesses about its 
inputs is whether they are firing or not.  It is also believed that 
neurons function as threshold functions, firing sets of synapses when the 
sum of firing synapses from another set exceeds a particular level.  This 
is the simplest (in terms of information complexity) model of the neuron 
held to be true today, being all that has been demonstrated to be true.
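
  If it helps to make that model concrete, here is a toy sketch in Python 
(really just pseudocode); the synapse counts and thresholds are arbitrary 
illustrative values of my own, not measured ones:

    # Simplest-case neuron: binary inputs, output synapses grouped into
    # sets, each set firing when the sum of firing inputs exceeds its
    # threshold.
    def neuron_step(inputs, output_sets):
        # inputs: list of 0/1 values (input synapses firing / not firing)
        # output_sets: list of (threshold, n_synapses) pairs
        total = sum(inputs)
        outputs = []
        for threshold, n_synapses in output_sets:
            fire = 1 if total > threshold else 0
            outputs.extend([fire] * n_synapses)
        return outputs

    # Example: 6 of 10 inputs firing; an output set of 3 synapses with
    # threshold 5 fires, another of 2 synapses with threshold 8 does not.
    print(neuron_step([1,1,1,1,1,1,0,0,0,0], [(5, 3), (8, 2)]))
    # -> [1, 1, 1, 0, 0]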

  Possible complicating factors include: the pattern of neurotransmitter 
concentrations between synapses (which changes over time) and their impact 
on the firing of synapses, the use of functions within the neuron more 
complex than the threshold function, and non-binary use of the signal 
generated by firing synapses (ie, as waveforms: the level of energy 
released by a synapse peaks at the moment of discharge and then slowly 
falls back to the nonfiring state, and this rate of change differs from 
synapse to synapse).  None of these factors has been demonstrated or ruled 
out.  Any of them would tremendously impact the informational complexity 
of the neuron.
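
  As a purely hypothetical illustration of that last factor: if the 
post-firing signal decays at a synapse-specific rate rather than being a 
clean on/off, a downstream neuron sensitive to the current level could in 
principle extract more than one bit per firing.  The exponential decay here 
is my own assumption, chosen only to show the idea:

    import math

    # Hypothetical decaying synapse signal: peaks at 1.0 on discharge,
    # falls back toward the non-firing state at a synapse-specific rate.
    def synapse_level(time_since_firing, decay_constant):
        return math.exp(-decay_constant * time_since_firing)

    # Quantizing the observed level into 16 bins would carry up to 4 bits
    # per firing -- the figure used for the "high end" model below.
    def quantize(level, bins=16):
        return min(int(level * bins), bins - 1)

    print(quantize(synapse_level(0.002, 100.0)))   # some value in 0..15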

  On the extreme end, Penrose (who is, in my opinion, completely off the 
wall) has hypothesized that individual neurons are quantum computers of 
enormous power.  In my own opinion, it is reasonable to construct a 
hypothetical "low end" model of the neuron as the simple threshold device 
which is influenced by gradual changes in neurotransmitter levels not 
directly affected by neural computation, and a hypothetical "high end" 
model of the neuron as a simple arithmetic and/or logical function, with 
each firing synapse contributing up to 4 bits of information via waveform 
variation.

  Under the "low-end" model, we might represent the neuron as a function 
of I binary inputs and J outputs collected into sets associated with a 
threshold, fired if their thresholds are exceeded by the sum of the I 
binary inputs, where I + J = approx 1000 synapses.  Converting this to 
terminology applicable to computer systems is difficult, but we can call it 
an adder and a 500-entry 10-bit lookup table.
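
  Reading "I + J = approx 1000" as a roughly even split (my assumption -- 
the real ratio of inputs to outputs surely varies from neuron to neuron), 
the "adder plus lookup table" description works out to something like:

    # Low-end model: ~500 binary inputs summed by the "adder" (a count
    # that fits in 10 bits), compared against one 10-bit threshold per
    # output synapse -- the "500-entry 10-bit lookup table".
    NUM_INPUTS  = 500
    NUM_OUTPUTS = 500

    def low_end_neuron(inputs, thresholds):
        # inputs: ~500 values of 0/1; thresholds: one per output synapse
        total = sum(inputs)                                 # the adder
        return [1 if total > t else 0 for t in thresholds]  # table compare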

  Under the "high-end" model, we might represent the neuron as any of a 
number of relatively simple functions of any number of inputs comprising 
2600 bits of information and generating about 1300 bits of output, or 
equivalent to around 40 canonical (32x32->32-bit) RISC instructions, or a 
6,760,000-entry 1300-bit lookup table.
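
  For what it's worth, the "around 40 RISC instructions" figure can be 
reproduced by dividing the 1300 output bits by the 32 bits each canonical 
instruction produces; the split of synapses into inputs and outputs implied 
by those bit counts is my own reading, not something from the literature:

    # Back-of-the-envelope arithmetic behind the high-end figures.
    BITS_PER_SYNAPSE = 4       # from the waveform-variation assumption above
    INPUT_BITS       = 2600    # ~650 input synapses x 4 bits (assumed split)
    OUTPUT_BITS      = 1300    # ~325 output synapses x 4 bits (assumed split)
    WORD_BITS        = 32      # a canonical 32x32->32-bit RISC instruction

    print(OUTPUT_BITS / float(WORD_BITS))   # ~40.6 -> "around 40" instructions
    print(INPUT_BITS / float(WORD_BITS))    # ~81 words of input per firing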

  The average rate at which any given neuron fires some of its synapses is 
60Hz, but the actual rate depends on surrounding activity.  Rates exceeding 
120Hz (?) (I think this is correct, but I'm working from memory -- might be 
higher) have been measured.
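
  Multiplying the average firing rate by the per-firing estimates above 
gives a crude per-neuron output bandwidth (and nothing more than that); 
the low-end figure again assumes a roughly even input/output split:

    FIRING_RATE_HZ = 60
    LOW_END_BITS   = 500     # ~500 binary output synapses (assumed split)
    HIGH_END_BITS  = 1300    # high-end output estimate from above

    print(FIRING_RATE_HZ * LOW_END_BITS)    # 30,000 bits/sec per neuron
    print(FIRING_RATE_HZ * HIGH_END_BITS)   # 78,000 bits/sec per neuron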

  Unfortunately, finding the complexity of the system is not simply a matter 
of multiplying our guesstimates of individual neural complexity by the 
number of neurons in the brain.  On one hand, the geometry in which the 
neurons are connected and the position of a neuron in the brain probably 
add semantic information value to the data it generates.  That might not 
make much sense -- the neuron doesn't "know" where it is in the brain -- but 
consider that in software, two linked lists of N integers each can be much 
more useful than one linked list of 2N integers, because the software which 
retrieves data from each of them "knows" to treat the data differently.  
Van Jacobson header compression (used in the CSLIP and PPP protocols) uses 
similar "knowledge of context" to let one bit of transmitted information 
carry (in extreme cases, where checksums are discarded) the same "meaning" 
as the 320 bits of a complete TCP/IP header.  On the other hand, the brain 
is a massively parallel computing system, and Amdahl's Law implies that the 
effective power of a parallel system is limited by the rate at which its 
mutually dependent operations can proceed -- in the brain's case, about 
120Hz.
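
  For the curious, the usual statement of Amdahl's Law: if a fraction s of 
the work is serial (mutually dependent) and the rest parallelizes perfectly 
across N processing elements, the best possible speedup is 
1 / (s + (1-s)/N), which can never exceed 1/s no matter how large N grows.  
A quick illustration:

    def amdahl_speedup(serial_fraction, n_processors):
        return 1.0 / (serial_fraction +
                      (1.0 - serial_fraction) / n_processors)

    # Even with an effectively unlimited number of processing elements,
    # a 1% serial fraction caps the achievable speedup at about 100x.
    print(amdahl_speedup(0.01, 10**9))    # ~100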

  In my opinion, this last bit is the clincher.  Just how much does Amdahl's
Law limit the power of the human brain?  Computer architects have learned a 
great deal about the importance of latency in determining the performance of 
a computer system.  High latency is VERY VERY BAD for performance, and it 
sharply limits the practical application of computers with extremely high 
aggregate computing capabilities, because every software solution to any 
interesting real-life problem depends on chains of successive, mutually 
dependent operations during at least some part of its execution.

  On one hand, the human brain seems to mostly deal with problems which we 
have learned to deal with extremely well with massively parallel systems, 
but on the other hand we do not know whether there are critical aspects of
our thought process which are terribly bottlenecked on dependent operations 
that must be performed at the comparatively slow rate of communication 
between neurons (120Hz, 1KHz, or whatnot -- certainly much, much slower than 
the 800MHz that current computer technology, the Alpha 21264a, is capable 
of).  
Bottlenecks must be taken into account before the practical performance of 
a computer system can be accurately evaluated, and we have no way of 
detecting or accounting for any such serial dependency bottlenecks in the 
human brain. 
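
  To put numbers on that gap, using the figures above: a chain of mutually 
dependent steps can proceed no faster than one step per "cycle" of the 
underlying hardware, so the ratio of step rates is roughly the ratio of 
times needed to finish such a chain:

    # Time to complete a chain of N mutually dependent steps, assuming
    # one step per cycle.  The chain length is arbitrary; the 120Hz and
    # 800MHz figures are the ones discussed above.
    def chain_time_seconds(n_steps, rate_hz):
        return n_steps / float(rate_hz)

    N = 1000
    print(chain_time_seconds(N, 120))     # ~8.3 seconds at 120Hz
    print(chain_time_seconds(N, 800e6))   # ~1.25 microseconds at 800MHz
    print(800e6 / 120)                    # a gap of roughly 6.7 million to one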

  So we're left with a huge "it depends".  It depends on what you're trying 
to actually do with the information you have asked for -- are you trying to 
determine how well a hypothetical "reprogrammed" human brain would perform 
tasks currently performed by computer systems?  Are you trying to determine
how many computer systems it would take to simulate the workings of a human
brain?  Are you trying to determine how many computer systems it would take
to simulate the workings of a human mind?  Simulating the brain and 
simulating the mind are very, very different things.  Presumably simulating 
the brain would subsume simulating the mind, but it precludes using the 
computer systems for anything other than simulating threshold devices 
working in parallel, which would not take advantage of the computer 
systems' lower latency, while simulating the mind might use completely 
different low-level implementations (ones which make the best use of the 
hardware's lower latency) to achieve the same high-level effect.  It 
might take ten million Alphas to match the power of the 20% of the brain 
that recognizes images, but only one Alpha to do with the recognized image 
what the remaining 80% of the brain does.  (These numbers are completely 
bogus, of 
course, chosen to emphasize the point.)  It depends on too many things that 
we know nothing about, and depends on the implementation details of devices 
which we do not even have the basic theory to begin constructing.

  Sorry for such a downbeat conclusion.  :-/ 

  -- TTK



