wnelson at eng.sun.com (Will Nelson) writes:
> One of the fascinating features of the human brain is that there
> is no apparent maximum amount of information that can be stored.
> People learn throughout their entire lifetimes, and that learning
> keeps getting stored in some fashion. There are retrieval problems,
> but it's all in there.
In a neural network, information is stored in a distributed fashion --
that is, each item is distributed across a large number of widely
scattered storage sites. One of the interesting properties of this
form of storage is that there is no hard upper limit to the number of
items that can be stored. Rather than abruptly hitting a threshold
where it is impossible to store one more item, what happens is that
each additional item causes a little bit of "blurring" in the memory
traces of all the previous items. Eventually the net loss of
information due to blurring becomes as large as the gain of
information due
to storage of an additional item, and at this point the information
capacity of the network is maxed out. Thus the information capacity
of the network is finite even though there is no limit to the number
of items that can be stored in it.
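The graceful "blurring" can be seen in a toy Hopfield-style
associative network (a minimal sketch of my own; the network size,
pattern counts, and synchronous update rule are illustrative choices,
not taken from the references below):

```python
import random

random.seed(0)
N = 64  # number of neurons

def train(patterns):
    # Hebbian outer-product rule: every stored item touches every
    # weight, so memories are distributed rather than kept in slots.
    W = [[0.0] * N for _ in range(N)]
    for p in patterns:
        for i in range(N):
            for j in range(N):
                if i != j:
                    W[i][j] += p[i] * p[j] / N
    return W

def recall(W, state, steps=5):
    # Synchronous sign-threshold updates, starting from a cue.
    for _ in range(steps):
        state = [1 if sum(W[i][j] * state[j] for j in range(N)) >= 0
                 else -1
                 for i in range(N)]
    return state

def recall_quality(num_items):
    # Fraction of bits recovered when cueing with a stored pattern.
    patterns = [[random.choice((-1, 1)) for _ in range(N)]
                for _ in range(num_items)]
    W = train(patterns)
    out = recall(W, patterns[0])
    return sum(a == b for a, b in zip(out, patterns[0])) / N

few = recall_quality(3)    # well under capacity: clean recall
many = recall_quality(48)  # far past ~0.14*N items: badly blurred
```

Nothing stops you from training on the 49th or 50th pattern -- the
weights just absorb it -- but the crosstalk among the traces grows
until recall of any one item degrades.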
For some simple neural network models, it is possible to calculate
analytically the information capacity and the number of items that can
be stored before blurring takes over. For binary synapses (which have
only two possible states, weak and strong), the situation is
particularly simple: the stored information is maximal when exactly
half of
the synapses are strong.
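The "half strong" optimum is just the maximum-entropy point for a
two-state variable; a quick check (my own illustration, not a
calculation from the papers below):

```python
import math

def bits_per_binary_synapse(p_strong):
    # Shannon entropy of a two-state synapse,
    # where P(strong) = p_strong and P(weak) = 1 - p_strong.
    if p_strong in (0.0, 1.0):
        return 0.0
    q = 1.0 - p_strong
    return -(p_strong * math.log2(p_strong) + q * math.log2(q))
```

Skewing the distribution either way reduces the number of bits each
synapse can carry, with the peak of exactly one bit per synapse at
p_strong = 0.5.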
A couple of relevant references:
D. J. Amit, "Modelling Brain Function", Cambridge University Press,
New York, 1989.
A. Treves, E. T. Rolls, "What determines the capacity of
autoassociative memories in the brain?", Network 2:371-397, 1991.
-- Bill Skaggs