cumulative probability analysis

Matt Jones jonesmat at
Wed Aug 18 12:44:34 EST 2004

BilZ0r <BilZ0r at> wrote in message news:<Xns95499AF2E278FBilZ0rhotmailcom at>...
> So in the paper I'm reading, they're doing intracellular voltage-clamp 
> recording from a single CA1 pyramidal cell. The papers are looking at IPSCs 
> induced by serotonin, and they have this "cumulative probability analysis" 
> graph... well, two of them. They both have "Cumulative Probability" on the Y 
> axis, and one has "amplitude" on the X, the other has "interevent 
> interval".
> I figure that amplitude refers to the amplitude of the IPSC, but what are 
> the cumulative probability and the interevent interval referring to? The 
> cumulative probability of there being an IPSC of that amplitude? And what 
> are the events that the interevent interval is an interval of?

To get the cumulative probability here's what you do:

1) Make a standard histogram of interevent intervals (IEI),
amplitudes, rise times or whatever. That is, for the parameter of
interest (say, IEI), create a number of bins spanning the range of
values you observed (say, from 0 to 1000ms in steps of 5 ms). Then for
each bin, count the number of events that had that value. For
example, when looking at interevent intervals of a Poisson process,
one should get a histogram that decays exponentially. If looking at
amplitudes of mEPSCs at the neuromuscular junction, one would get an
amplitude histogram shaped like a Gaussian (but not usually at central
synapses, where the distribution is skewed). These are actually
"frequency histograms", since you are looking at the frequency of
observing particular events.

2) Divide each binned value by the sum of all the values. This makes
the bin heights sum to 1, so now the height in each bin is
approximately the probability of observing that class of events. This
is now the "probability distribution".

3) To get the cumulative probability distribution, make a new
histogram using the same bin spacing, but now fill each bin with the
SUM of all the bin heights from 2) leading up to and including the
current bin. Now, the height of each new bin tells the probability of
observing an event less than or equal to the current value. This
distribution (obviously) starts at zero, curves upward approximately
sigmoidally, and asymptotes toward 1 (i.e., after examining all
events, the probability is 1 that you will have observed events less
than or equal to the largest event).
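The three steps above can be sketched in Python with NumPy. This is a minimal sketch: the simulated interevent intervals, the 0-1000 ms range, and the 5 ms bin width are just the illustrative numbers from step 1, not anything from the paper.

```python
import numpy as np

# Simulated interevent intervals from a Poisson-like process
# (illustrative data -- real values would come from your recording).
rng = np.random.default_rng(0)
iei_ms = rng.exponential(scale=100.0, size=2000)  # intervals in ms

# 1) Frequency histogram: bins spanning 0 to 1000 ms in 5 ms steps.
bins = np.arange(0, 1005, 5)
counts, edges = np.histogram(iei_ms, bins=bins)

# 2) Probability distribution: divide each bin count by the total
#    number of counted events, so the bin heights sum to 1.
prob = counts / counts.sum()

# 3) Cumulative probability distribution: running sum of the bin
#    probabilities.  Starts near 0 and rises toward 1.
cum_prob = np.cumsum(prob)
```

Plotting `cum_prob` against the bin edges gives exactly the kind of curve on those two graphs: for the IEI data here it rises steeply at first and flattens out, the cumulative counterpart of the exponentially decaying histogram.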

The usefulness of the cumulative distribution is that it is 
1) smoother than the raw distribution (because the summation smooths
out fluctuations between bins like a running average). This also means
that you can place two similar CDFs on top of each other and it's
easier to see whether they're different or not. This is hard to do
with the raw histograms because they're usually all lumpy.

2) Easy to tell whether the parent distribution was symmetric or skewed.

3) Certain parameters of interest can be read right off the graph. For
example, the point on the x-axis where the graph goes through 0.5 on
the y-axis is the median of the parent distribution, the point where
it goes through 0.95 is the 95th percentile, etc.
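Reading those percentiles off numerically works the same way as reading them off the graph: find the first bin where the cumulative curve reaches the target probability. A sketch, again with made-up exponential IEI data rather than real recordings:

```python
import numpy as np

# Illustrative interevent intervals (exponential, mean 100 ms).
rng = np.random.default_rng(1)
iei_ms = rng.exponential(scale=100.0, size=5000)

bins = np.arange(0, 1005, 5)
counts, edges = np.histogram(iei_ms, bins=bins)
cum_prob = np.cumsum(counts) / counts.sum()

# The x value where the curve crosses 0.5 is the median;
# where it crosses 0.95 is the 95th percentile.
median_est = edges[1:][np.searchsorted(cum_prob, 0.5)]
p95_est = edges[1:][np.searchsorted(cum_prob, 0.95)]
```

For an exponential with mean 100 ms the true median is 100·ln 2 ≈ 69 ms, so `median_est` should land near that, give or take the 5 ms bin width and sampling noise.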



BilZ0r, you've been asking a lot of questions about basic
electrophysiology analysis methods lately. I recommend reading a
spectacular (but short and very clear) book by Bernard Katz called
"Nerve, Muscle and Synapse". This is hard to find but you might get a
used copy on Amazon. It goes through the methodology of experiments
and analysis of such things like Hodgkin-Huxley Equations, Quantal
Analysis and so forth. A great explanation of the fundamental
principles of electrical neuroscience, written by a genius and
founding father of the field. This book should be handed out to
neuroscience grad students on their first day of school (or
undergraduate humanities majors, for that matter. It's so well written,
they could probably learn a lot from it).


More information about the Neur-sci mailing list