jan at neuroinformatik.ruhr-uni-bochum.de (Jan Vorbrueggen) wrote:
:In article <4b536l$baq at eis.wfunet.wfu.edu> laubach at biogfx.neuro.wfu.edu (Mark Laubach) wrote:
: When are you going to distinguish the stimuli? After looking at all the
: trials on average?
:No, potentially during/immediately after the stimulus.
: A cross-correlation is nothing more than a correlational measure between
: time-series and a correlational measure is not a predictor for
: trial-by-trial classification.
:Ah. You seem to be thinking of using the cross-correlation of one cell's
:activity during different trials - that's not what I meant. I was talking of
:the cross-correlation of two cells simultaneously active, which you (or
:another cell) can compute on-the-fly, as it were. This _does_ enable you (or
:that other cell) to distinguish the two stimuli as they are occurring.
It seems we have been thinking about different things when using the phrase
"cross-correlation".
Our group has been thinking about how to compute a correlation between
two or more neurons on a single trial. This is not an easy issue.
How does one establish, for a single instance, that two or more cells
fire together in a way that is not simply due to chance or to an overall
correlation in the cells' firing rates? The classic cross-correlation
with shift-predictor, the JPSTH, and the gravity method all do this by
averaging coincident counts over trials or stimulus presentations.
Two cells may fire at the same time, but this may simply be because both
cells respond at high rates around the same time. We really want
to know whether the correlation reflects coincident discharge that is
unexpected given the cells' firing rates, suggesting some source of
common input to the cells or a serial dependence between them.
One thing we tried was to compute expected counts for two independent
Poisson processes with rates equivalent to those of the observed neurons.
Then we know how many coincident spikes are expected by chance alone, and
we can assess whether the number of coincident spikes observed was
significantly greater than that expected by chance. The problem with
this method, called "coactivation analysis" in an early version of
Stranger, our analysis program, was that we _binned_ the spike counts
and thereby induced potential artifacts due to spikes at the edges of bins.
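To make the idea concrete, here is a minimal sketch of such a chance-coincidence test in Python (NumPy and SciPy assumed; the function name and the Poisson tail test are my own illustration, not the actual coactivation code in Stranger). Note that it still bins the spike trains, so it shares the edge-artifact problem just described:

```python
import numpy as np
from scipy.stats import poisson

def coincidence_test(spikes_a, spikes_b, duration, bin_width):
    """Compare the observed number of coincident bins to the count
    expected if the two trains were independent Poisson processes
    with the observed mean rates."""
    n_bins = int(round(duration / bin_width))
    ca, _ = np.histogram(spikes_a, bins=n_bins, range=(0.0, duration))
    cb, _ = np.histogram(spikes_b, bins=n_bins, range=(0.0, duration))
    observed = int(np.sum((ca > 0) & (cb > 0)))  # bins where both fired
    # probability that a Poisson process at each cell's mean rate
    # puts at least one spike in a bin
    pa = 1.0 - np.exp(-len(spikes_a) / n_bins)
    pb = 1.0 - np.exp(-len(spikes_b) / n_bins)
    expected = n_bins * pa * pb
    # one-sided tail probability of this many coincidences by chance
    p_value = poisson.sf(observed - 1, expected)
    return observed, expected, p_value
```

A small p-value would indicate more coincident firing than the two rates alone predict.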
Does anyone have a suggestion on how to assess coincident firing
without binning spike counts?
One approach would be to use average shifted histograms (see Scott's
book, "Multivariate Density Estimation", 1992, John Wiley). This
would overcome the problems of bin-size choice and of spikes at the
edges of bins.
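A rough sketch of the average shifted histogram for a spike train (NumPy assumed; the function name and defaults are illustrative). Averaging the shifted histograms amounts to weighting fine-grid counts with a triangular kernel, which is why the result no longer depends on where the bin edges happen to fall:

```python
import numpy as np

def average_shifted_histogram(spike_times, duration, bin_width, n_shifts=8):
    """Average n_shifts histograms whose edges are offset by
    bin_width/n_shifts; returns bin centers and a rate estimate (Hz)."""
    delta = bin_width / n_shifts
    n_fine = int(round(duration / delta))
    fine, _ = np.histogram(spike_times, bins=n_fine, range=(0.0, duration))
    # averaging the shifted histograms is equivalent to convolving the
    # fine-grid counts with triangular weights (weights sum to n_shifts)
    offsets = np.arange(1 - n_shifts, n_shifts)
    weights = 1.0 - np.abs(offsets) / n_shifts
    rate = np.convolve(fine, weights, mode="same") / bin_width
    t = (np.arange(n_fine) + 0.5) * delta
    return t, rate
```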
Another approach might be to work with continuous spike density
estimates obtained by applying, say, a Gaussian filter to the spike
trains.
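A continuous spike-density estimate of this kind might look like the following sketch (NumPy assumed; the kernel width sigma and time step dt are illustrative choices, not recommendations):

```python
import numpy as np

def spike_density(spike_times, duration, sigma=0.05, dt=0.001):
    """Firing-rate estimate: delta functions at the spike times
    convolved with a Gaussian kernel of width sigma (seconds)."""
    n = int(round(duration / dt))
    t = np.arange(n) * dt
    counts = np.zeros(n)
    idx = np.clip((np.asarray(spike_times) / dt).astype(int), 0, n - 1)
    np.add.at(counts, idx, 1.0)
    # Gaussian kernel truncated at +/- 4 sigma, normalized to unit area
    half = int(round(4 * sigma / dt))
    k_t = np.arange(-half, half + 1) * dt
    kernel = np.exp(-0.5 * (k_t / sigma) ** 2)
    kernel /= kernel.sum() * dt          # output comes out in spikes/s
    return t, np.convolve(counts, kernel, mode="same")
```

No bin edges are involved, so the edge artifacts of the binned coactivation analysis do not arise; the cost is that sigma now plays the role the bin width used to play.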
In both cases we would still need to apply a method like
"coactivation" to the density measures. Note that this solution
requires that we know what to expect from the neuron
in terms of its spike rate. Single trials or stimulus presentations
will still have to be evaluated against some history of the cell's
spike rates.
I have not thought about this problem lately, so I hope these ideas
are not too far off the wall. In any case, detecting meaningful
coincidences in neural activity, especially in neural ensembles,
seems to be a critical hurdle to overcome if we are to make any
progress in understanding ensemble activity.
Let me know what you think.
With regard to my earlier comments on the use of wavelets, I hope the
stuff makes sense. I think this approach is certain to advance our
thinking about how neurons and neural ensembles may encode information
through _variations_ in spike rate, i.e., _local fluctuations_. The
method does not address precise timing directly. My own attempts at
analyzing precise timing, with burst analysis (Legendy and Salcman's
method), have produced no evidence that repeating patterns of spikes,
in terms of either the number of spikes in a burst or its duration,
encode information about the performance of an operant task. I am
still waiting for someone to point me to any such evidence.
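For reference, the core of Legendy and Salcman's burst measure is the Poisson "surprise": minus the log of the probability that a Poisson process at the cell's mean rate would fire that many spikes in the observed interval. A sketch (SciPy assumed; this is only the statistic itself, not the full burst-search procedure):

```python
import numpy as np
from scipy.stats import poisson

def poisson_surprise(n_spikes, interval, mean_rate):
    """Surprise S = -log10 P(X >= n_spikes) for a Poisson process
    with the cell's mean rate over the given interval (seconds)."""
    p = poisson.sf(n_spikes - 1, mean_rate * interval)
    return -np.log10(max(p, 1e-300))  # guard against log(0)
```

High surprise values flag spike clusters that are improbable given the overall rate; for example, 10 spikes in 100 ms from a 10 Hz cell yields a surprise near 7.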