I've been trying to find a frequency-domain method of EEG artefact
correction that doesn't wipe out evoked potentials. So far I've been
failing at this task, and I'd like to hear from anyone who has suggestions
(or even commiseration) to offer.
Everything I've attempted so far has been one or another variation on
complex regression. This method gives reasonable coefficients (gains
comparable to the time-domain transfer coefficients given by Overton &
Shagass and by various others over the years, and negligible phase
shifts), but wipes out the evoked potentials as well as the eye artefacts.
If I shorten the Fourier epoch, the corrected EPs become less and less
attenuated - this makes sense, because the transform of a shorter epoch
contains fewer frequencies, so shortening the epoch moves the computation
closer to the limiting case of time-domain regression (Fourier epoch
length = 1). But if there's no solution other than this, I may as well
simply resort to time-domain regression.
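For concreteness, the kind of per-frequency-bin complex regression I mean looks roughly like this (a minimal numpy sketch of my own; function and variable names are mine, and the epoch shapes are assumptions):

```python
import numpy as np

def complex_regression_correct(eeg_epochs, eog_epochs):
    """Frequency-domain (complex) regression correction: a minimal sketch.

    eeg_epochs, eog_epochs: arrays of shape (n_epochs, n_samples).
    For each frequency bin, estimate a complex transfer coefficient
    H(f) = <EOG*(f) EEG(f)> / <|EOG(f)|^2> across epochs, then subtract
    H(f) * EOG(f) from EEG(f) and invert the transform.
    """
    Eeg = np.fft.rfft(eeg_epochs, axis=1)
    Eog = np.fft.rfft(eog_epochs, axis=1)
    # Cross- and auto-spectra averaged over epochs, per frequency bin.
    Sxy = np.mean(np.conj(Eog) * Eeg, axis=0)
    Sxx = np.mean(np.abs(Eog) ** 2, axis=0)
    H = Sxy / np.maximum(Sxx, 1e-12)    # complex gain and phase per bin
    corrected = Eeg - H[None, :] * Eog  # remove the propagated EOG
    return np.fft.irfft(corrected, n=eeg_epochs.shape[1], axis=1)
```

Note that because H is estimated per bin across the whole epoch ensemble, anything phase-locked in both channels (EPs included) feeds the cross-spectrum - which is exactly the attenuation problem I'm describing.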
In order to minimise the effect of EEG leakage into the EOG, I'm including
in the regression computation only those epochs during which the dynamic
range of the EOG exceeds 100uV. I have also tried a few ideas for
explicitly estimating and correcting for the contamination of EOG by EEG.
To do this, first I tried a simple Wiener filter, multiplying the
previously determined correction coefficient by one minus the ratio of
power during non-blink epochs to power during blink epochs. That didn't
work; the resulting averaged EPs were still very flat. Next, I
implemented the algorithm described by Gasser, Sroka, & Möcks (EEG & Clin
Neurophys 61:2:181-193 (1985)), which attempts to correct for contamination
by subtracting a non-blink cross-periodogram from the blink
cross-periodogram for each electrode. The average EPs resulting from my
application of this algorithm are so noisy as to be indecipherable - there
are some oscillations but I can't pick out any landmarks, not even a P3.
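My reading of the Gasser et al. subtraction, sketched in numpy - this is my interpretation, not their published code, and the shapes, names, and the per-bin guard are my own assumptions:

```python
import numpy as np

def contamination_corrected_coeffs(eog_blink, eeg_blink, eog_rest, eeg_rest):
    """Sketch of a Gasser-style correction for EEG leakage into the EOG.

    Average the cross- and auto-periodograms over blink and non-blink
    (rest) epochs separately, subtract the rest spectra from the blink
    spectra, and form the transfer coefficient from the differences, so
    that the common EEG-driven part cancels.
    All inputs have shape (n_epochs, n_samples).
    """
    def spectra(eog, eeg):
        X = np.fft.rfft(eog, axis=1)
        Y = np.fft.rfft(eeg, axis=1)
        return np.mean(np.conj(X) * Y, axis=0), np.mean(np.abs(X) ** 2, axis=0)

    Sxy_blink, Sxx_blink = spectra(eog_blink, eeg_blink)
    Sxy_rest, Sxx_rest = spectra(eog_rest, eeg_rest)
    denom = Sxx_blink - Sxx_rest
    # Guard bins where blink power barely exceeds rest power.
    denom = np.where(np.abs(denom) < 1e-12, 1e-12, denom)
    return (Sxy_blink - Sxy_rest) / denom
```

The denominator difference is what worries me in practice: at frequencies where blink power barely exceeds background power, the coefficient estimate blows up, which may be one source of the noise I'm seeing in the averages.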
I very much doubt that there's any problem with spectral leakage; I've
tried convolving the signal with all sorts of tapered windows, and it
made no practical difference to the results.
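For completeness, the tapering I mean is just along these lines (illustrative only; the Hann window is one example of the windows I tried):

```python
import numpy as np

def taper_fft(epochs):
    """Taper each epoch (shape (n_epochs, n_samples)) with a Hann window
    before the FFT.  Comparing correction coefficients computed with and
    without the taper is a quick check on spectral leakage: if they
    barely change, leakage is unlikely to be the culprit."""
    w = np.hanning(epochs.shape[1])
    return np.fft.rfft(epochs * w, axis=1)
```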
Has anyone else out there confronted these problems, successfully or not?
Should I give up and settle for time-domain regression?
Please email replies to me (mkb4 at Cornell.edu); I'll gladly provide a
summary of responses to those who request it. If you post a follow-up to
sci.med.informatics or sci.med.physics, please send me a copy as well,
since I don't read those groups regularly.
Thanks in advance.