Definitions for the following terms?

Matt Jones jonesmat at physiology.wisc.edu
Tue Jul 10 19:57:48 EST 2001


"Isidore" <isidore at mailandnews.com> wrote in message news:<PoJ27.2$ln3.148 at typhoon.nyu.edu>...
> Hi everyone,
> 
>     I'm a high school student trying to read a neuroscience paper and
> understand it well. There are some keywords listed at the top of the page
> that I'm not exactly clear on. They aren't explicitly alluded to in my
> textbook (although perhaps they are by another name).  If someone could help
> clarify these for me, I'd appreciate it.
> 
> renewal process: Is this just referring to the process the neuron has to go
> through before it fires another action potential (absolute refractory
> period, relative refractory period, etc.?)
> 
> integrate-and-fire: Is this just referring to the neuron firing when
> threshold is reached? Why do they call it INTEGRATE-and-fire? Is there any
> alternative to integrate-and-fire? What is a leaky integrate-and-fire
> neuron?
> 
> interval distribution: Is this just the lapses between the action
> potentials?
> 
>         Thanks for your time,
> 
>             Isidore






Howdy,

As usual, Richard Norman has given a very good response already (Hi
Richard!). But I've been studying up on spiketrain analysis a lot
lately, and thought I might give another response, since it's still
fresh in my mind.


(I'm a bit shaky on this first one, but here's what I think the answer
is:)

A renewal process is a kind of "counting" process. "Counting process"
means that the signal generated by the process comes in discrete
units, like ticks of a clock, rather than as a continuous flow, like
water from a tap. So a counting process is characterized by things
like the frequency of ticks (or spikes, in this case) or the number of
ticks in a certain time window. A renewal process produces ticks or
spikes in such a way that each interval between spikes is drawn
independently of every other interval. That is, once a spike occurs,
the process "renews": the time until the next spike depends only on
the interval distribution, not on when any earlier spikes occurred. In
more technical language, the interspike intervals of a renewal process
are independent and identically distributed, so successive intervals
are uncorrelated.
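If it helps to make that concrete, here's a little Python sketch (my
own toy example, not from any paper; it assumes you have numpy, and
the gamma distribution and its parameters are arbitrary choices). It
builds a renewal spiketrain by drawing every interval independently
from the same distribution, then checks that successive intervals are
uncorrelated:

import numpy as np

rng = np.random.default_rng(0)

# Draw 10000 interspike intervals, each one independently, from a
# single fixed distribution (a gamma here, but any interval
# distribution gives a renewal process).
intervals = rng.gamma(shape=2.0, scale=5.0, size=10000)  # ms

# The spike times are just the running sum of the intervals.
spike_times = np.cumsum(intervals)

# Renewal property: knowing one interval tells you nothing about the
# next, so the serial correlation should come out near zero.
r = np.corrcoef(intervals[:-1], intervals[1:])[0, 1]
print(f"serial correlation of successive intervals: {r:.4f}")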


Integrate-and-fire is a possible model for how neurons examine their
synaptic inputs and then decide when to fire a spike. In this model,
they literally integrate (i.e., as in integration, from calculus)
their inputs over time. Integration is pretty much the same thing as
just keeping a running sum. So suppose the inputs (e.g., synaptic
potentials) came in the following order with the following sizes:

2, 3, 2, 5, 1 ...

then an integrate-and-fire neuron would integrate these as follows
(setting aside, for the moment, what happens when it fires):

2, 5, 7, 12, 13 .... (i.e., add each number to the previous sum)

If the spike firing threshold was set at a value of, say, 10, then the
neuron would have fired a spike on the 4th input, because the running
sum of 12 is above threshold. A critical feature of an
integrate-and-fire neuron is that it -resets- itself to zero every
time it fires a spike, and starts the sum all over again. So with the
reset included, the 5th input would bring the sum to 1, not 13.
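The whole model fits in a few lines of code. Here's a rough Python
sketch of the running-sum-with-reset idea (the numbers are just the
made-up example above, not data from a real neuron):

# Synaptic inputs arriving one after another, and a firing threshold.
inputs = [2, 3, 2, 5, 1]
threshold = 10

total = 0
for i, x in enumerate(inputs, start=1):
    total += x  # integrate: add each input to the running sum
    print(f"after input {i}: sum = {total}")
    if total >= threshold:
        print("  spike!  (resetting the sum to zero)")
        total = 0  # the critical reset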

A leaky integrate-and-fire neuron is an integrate-and-fire neuron that
has trouble remembering how high it has counted, but in a very
particular way. It doesn't just forget; it loses a fixed -fraction- of
the count at each new moment in time, so the count decays
exponentially. A neuron like this will need to receive several inputs
close together in time in order to reach threshold at all. The time it
takes for the count to decay to 1/e (roughly a third) of its value is
called the "integration time constant", and tells you about how close
together in time the inputs have to be in order to add up to a value
that will reach threshold. A really leaky integrator will require
multiple nearly simultaneous inputs to reach threshold, and can
therefore be considered a "coincidence detector".
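Here's the same sketch with a leak added (again a toy Python example;
the time constant, input sizes, and input times are all made up). The
running sum now decays exponentially between inputs, and tau is the
integration time constant described above:

import math

tau = 10.0        # integration time constant, ms (made-up value)
threshold = 10.0

# (time in ms, input size) pairs: three inputs close together,
# then two spaced far apart.
events = [(0, 4.5), (2, 4.5), (4, 4.5), (30, 4.5), (60, 4.5)]

v = 0.0
last_t = 0
for t, x in events:
    # Between inputs, the sum decays by a factor exp(-elapsed/tau).
    v *= math.exp(-(t - last_t) / tau)
    last_t = t
    v += x  # integrate the new input
    print(f"t = {t:2d} ms: v = {v:.2f}")
    if v >= threshold:
        print("  spike!  (resetting to zero)")
        v = 0.0

# The three closely spaced inputs reach threshold; the two widely
# spaced ones decay away before they can add up.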


By the way, in my opinion, real neurons are not integrate-and-fire
devices, leaky or otherwise.

Finally, an interval distribution is -not- the time between spikes
(that's the interspike interval), but is the -distribution- of
interspike intervals.  In this case, "distribution" means essentially
a histogram of the probability of seeing a certain interval. To make
one, you create a list of all the interspike intervals, sort them by
duration, and group them together into "bins" of a certain size (e.g.,
if the binwidth is 10 milliseconds, then any interval from 0 up to 10
ms falls in the first bin, from 10 up to 20 ms in the second, etc).
Then you count how many intervals fell into each bin, and make a graph
of the count versus the time at the middle of each bin (5, 15, 25 ms,
etc). Often, you divide the count in each bin by the total number of
counts before making the graph. This converts the counts into
probabilities.
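That whole recipe is only a few lines of Python (a sketch assuming
numpy; the spike times here are fabricated just to have something to
bin):

import numpy as np

rng = np.random.default_rng(1)

# Fake spike times (ms), just so there's something to histogram.
spike_times = np.sort(rng.uniform(0, 10000, size=500))

# 1. List all the interspike intervals.
isis = np.diff(spike_times)

# 2. Sort them into bins 10 ms wide: edges at 0, 10, 20, ... ms.
binwidth = 10.0
edges = np.arange(0, isis.max() + binwidth, binwidth)
counts, edges = np.histogram(isis, bins=edges)

# 3. Divide by the total count to turn counts into probabilities.
probs = counts / counts.sum()

# 4. Graph (or here, just print) probability against the middle of
#    each bin (5, 15, 25, ... ms).
centers = edges[:-1] + binwidth / 2
for c, p in zip(centers[:5], probs[:5]):
    print(f"{c:5.0f} ms: {p:.3f}")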

The interspike interval (ISI) distribution is a summary of the
spiketrain: it tells you the mean firing rate, the variability or
spread around that mean, and gives an indication of whether spikes
occur in bursts or clusters, etc.
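For instance (a standalone Python sketch with fabricated intervals),
those summary numbers come straight from the list of ISIs:

import numpy as np

rng = np.random.default_rng(2)
isis = rng.exponential(scale=20.0, size=1000)  # fake ISIs, in ms

mean_isi = isis.mean()          # ms
mean_rate = 1000.0 / mean_isi   # spikes per second
cv = isis.std() / mean_isi      # coefficient of variation of the ISIs:
                                # ~1 for a Poisson process, <1 for
                                # clock-like firing, >1 suggests bursts

print(f"mean rate: {mean_rate:.1f} Hz, CV of ISIs: {cv:.2f}")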

People often make a big deal about Poisson spike processes, which have
an exponential interspike interval distribution. This is because the
Poisson process is what you would expect to see if neurons encoded
information solely through the mean rate of spiking, where each spike
occurs at a random time (imagine rolling dice at each moment to decide
whether to fire a spike or not - that would produce a Poisson
process). This is a nice simple way of thinking about neurons because
it doesn't call for understanding anything about them except their
spike rate. More complicated schemes would encode information in the
exact timing of spikes, and those would probably (no, definitely) be a
lot harder to figure out. So people often draw exponential curves over
their ISI
distributions and conclude that they are "Poisson-like", I think
because it makes trying to understand the brain seem like it might
actually be possible instead of hopelessly complicated.
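The dice-rolling picture translates directly into code. In this toy
Python sketch (the 1 ms time step and the firing probability are
arbitrary choices of mine), the neuron fires at every moment with some
small fixed probability, independent of everything else:

import numpy as np

rng = np.random.default_rng(3)

p_fire = 0.02    # probability of a spike in each 1 ms step
n_steps = 200000

# "Roll the dice" at every moment: spike with probability p_fire.
spikes = rng.random(n_steps) < p_fire
spike_times = np.flatnonzero(spikes)  # ms

isis = np.diff(spike_times)
print(f"mean ISI: {isis.mean():.1f} ms (expect about {1/p_fire:.0f} ms)")

# The ISI histogram of this train decays exponentially (strictly, it's
# geometric in discrete time, which approaches an exponential as
# p_fire gets small): short intervals are the most common, and long
# ones get exponentially rarer.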


There are some problems with this:

1) If you look at most of them, the exponential curves don't really
fit the ISI distributions very well.

2) They -shouldn't- fit very well, because we know neurons have
refractory periods, which means they can't generate enough brief ISIs
to really be Poisson (see the sketch after this list).

3) Regardless of how well they fit, any highly efficient code, no
matter how complex and precise, will always tend toward a Poisson ISI
distribution (there is a mathematical proof of that statement by the
late, great Claude Shannon, published in 1949).
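To see point 2 concretely, here's the same dice-rolling sketch as
above, with a made-up 5 ms absolute refractory period added:

import numpy as np

rng = np.random.default_rng(4)

p_fire = 0.02    # per-ms firing probability (arbitrary)
refractory = 5   # absolute refractory period, ms (made-up value)
n_steps = 200000

spike_times = []
t = 0
while t < n_steps:
    if rng.random() < p_fire:
        spike_times.append(t)
        t += refractory  # no spikes allowed during the refractory period
    t += 1

isis = np.diff(spike_times)
# A true exponential ISI distribution has its mode at zero, but here
# no interval can be shorter than the refractory period, so the
# histogram is empty below 5 ms and can't really be exponential.
print(f"shortest ISI: {isis.min()} ms (never below the refractory period)")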

So noisy rate codes and excruciatingly complex, precise timing codes
-both- predict a Poisson ISI distribution, which makes worrying about
whether ISIs are exponential or not seem pretty uninformative for
figuring out how neurons encode information.

Out of curiosity, what paper are you reading?

Cheers,

Matt



