# Afferent data rates

Wed Aug 14 11:58:49 EST 1991

In article <HUNTER.91Aug14085152 at work.nlm.nih.gov>, hunter at work.nlm.nih.gov (Larry Hunter) writes:
> I am interested in finding an empirically justified estimate of the
> rate of information flow into the nervous system.  One way of
> estimating this might be to estimate the information content of the
> output of all sensory neurons.

Ok, let's suppose it were possible to record the output of all sensory
neurons at the same time.

The classical information-theoretic approach to the problem would be to
consider the output of all neurons taken together as a data source and
to compute the information entropy of that source.
The following is just a _simple_ example, disregarding noise, etc.

(A problem here already is that the process is very likely not stationary,
i.e. you can't take means, etc., because the input fluctuates wildly; but
let's suppose it were possible.)

You would record at instant t the output of all the neurons and map
it to a point in N-dimensional space: you'd have one axis per neuron.
The output x(i,t) of neuron i at time t would be the coordinate of that
point on the i-axis, the output x(j,t) its coordinate on the j-axis,
and so on.

Since we are presumably dealing with spike trains, it would be
reasonable to take the instantaneous spiking frequency as the output value
of each neuron.

Also, since you are discretizing/quantizing that output (we are presumably
using analog/digital converters and a digital computer),
instead of taking points you'd have to divide the space into little boxes,
and count for each box how often a set of values falls within it.

Do that for many samples: increase the count for a box every time your set of values falls into that box.

You will obtain a cloud in that N-space, i.e. a density function over the
points of that space.
Dividing the count at each box by the total number of samples you've
taken gives you a fraction between zero and one, which you can consider
a probability; e.g. the probability at box x would be that of obtaining the
set of values x in the future, if the process were stationary.

Now all you have to do is multiply the probability at each box of
the space by its own logarithm and sum the resulting values over the whole
space; you'll obtain the negative of the information entropy of the source.
If you use ld (the logarithm to base 2), you'll get that entropy
in bits.
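The box-counting procedure described above can be sketched in a few lines of Python; everything here (the box size, the number of "neurons", and the uniform toy data) is made up purely for illustration:

```python
import math
import random

def entropy_bits(samples, box_size):
    """Box-counting entropy estimate (in bits) of a vector-valued source:
    quantize each N-dimensional sample into a box, count hits per box,
    normalize the counts to probabilities, and sum -p * ld(p)."""
    counts = {}
    for x in samples:
        # Index of the box this N-dimensional sample falls into
        box = tuple(int(v // box_size) for v in x)
        counts[box] = counts.get(box, 0) + 1
    total = len(samples)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Toy source: 3 "neurons" whose firing rates are uniform on [0, 100) Hz
random.seed(0)
samples = [[random.uniform(0, 100) for _ in range(3)] for _ in range(10_000)]
h = entropy_bits(samples, box_size=10.0)
# With 10 boxes per axis, h can never exceed ld(10^3), about 9.97 bits
```

Note that this joint estimate automatically accounts for any correlations among the axes, which is exactly why it differs from summing per-neuron entropies.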

How do we get from that the information transfer rate?

Well, in my opinion, this is not quite trivial:
in information theory you assume you can determine when one message/state
ends and when the next begins, but here, I assume, you can't.

You could take the integration interval used to determine your instantaneous
spiking frequency as the unit interval, and thus assume that one
message takes that long.

Then you'd have to divide the information entropy above
by that interval, and you should get a decent estimate of the information
transfer rate. To be on the safe side, better choose that interval longer,
so that the estimate errs on the conservative side.

This information measure already takes all correlations in the source into
account; in our case, all the correlations among the output values of the
sensory neurons.

Computing the information entropy of each output separately and then just
summing them up won't be accurate, because the data obtained from different
neurons will be correlated.

However, you could get the maximal information rate such a system could
transfer at all by _assuming_ the output of each neuron is independent of
the other neurons, and assuming an equal probability for each output value
within the possible range of output values:
this would maximize the information entropy of each neuron.

Of course, this says _nothing_ about the computational ability of such a system;
it says only how fast information could flow at all through such a system.

That maximal information transfer rate would simply be:

I = (total number of sensory neurons)
    * ld(number of different states the output of a neuron can have)
    / (time interval)

Example:

1000 neurons, each with 1024 different states which occur with equal
probability;

0.1 sec per message, i.e. per integration interval of the spike train.

Information entropy:

I = 1000 * ld 1024 = 1000 * 10 = 10 000 bits

Information transfer rate = I / time interval = 10 000 bits / 0.1 sec = 100 000 bits/sec
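The arithmetic of this example can be checked directly; the figures below are simply the ones assumed above:

```python
import math

n_neurons = 1000   # total number of sensory neurons (assumed)
n_states = 1024    # distinguishable output states per neuron (assumed)
interval = 0.1     # seconds per message / integration interval

h_max = n_neurons * math.log2(n_states)  # maximal entropy per message, in bits
rate = h_max / interval                  # maximal transfer rate, in bits/sec
# h_max = 10000.0 bits, rate = 100000.0 bits/sec
```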

Now this assumes that the outputs of the neurons are discrete and noise-free,
neither of which is true. You'll have to take the finest quantization you
assume the system _on the receiving end_, i.e. higher-order neurons, can
resolve: the finer those neurons can discriminate between individual states,
the higher the information transfer rate; obviously, noise comes into play
here again.

If you know the probability distribution of the states of a single neuron,
you can sharpen your estimate by computing the information entropy of that
neuron using that distribution;

then, again assuming that the data among neurons are uncorrelated, which is
definitely _not_ true, you can get a better, i.e. lower, estimate of the
maximal information transfer rate.
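For a single neuron with a known state distribution, that per-neuron entropy is computed as follows (the 4-state distribution here is hypothetical); summing it over all neurons, under the independence assumption, gives the sharper upper bound:

```python
import math

def neuron_entropy_bits(p):
    """Entropy in bits of one neuron's output, given its state probabilities."""
    assert abs(sum(p) - 1.0) < 1e-9  # probabilities must sum to 1
    return -sum(q * math.log2(q) for q in p if q > 0)

# Hypothetical non-uniform distribution over 4 output states:
h = neuron_entropy_bits([0.5, 0.25, 0.125, 0.125])
# h = 1.75 bits, below the uniform maximum of ld(4) = 2 bits
```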

Please take the descriptions above with a grain of salt, since
my knowledge of information theory is limited; but in my opinion, the
statements hold.

Konrad Weigl               Tel. (France) 93 65 78 63
Projet Pastis              Fax  (France) 93 65 78 58
INRIA-Sophia Antipolis     email Weigl at sophia.inria.fr
2004 Route des Lucioles
B.P. 109
06561 Valbonne Cedex
France
