kpaulc at earthlink.net
Tue Mar 9 13:44:47 EST 2004
"NMF" <nm_fournier at ns.sympatico.ca> wrote in message
news:qPc3c.8897$hG.209944 at news20.bellglobal.com...
> I agree with you. I don't understand what you are saying simply because
> everything you have suggested is already currently employed in
> electroencephalographic approaches. I don't think what you have
> suggested is anything different with respect to what has already been done
> by researchers. There has been considerable evidence from studies that
> directly assessed the timing and topography of activation of neural
> structures during specific tasks. This has been routinely done and is
> basically commonplace in the methodology. For one example, this aspect of research is
> highlighted especially in the areas of grapheme-to-phoneme research, where
> combinatorial approaches with fMRI, ERPs, MEG and BEAM have been and are
> routinely performed.
Ho, ho, ho :-]
I'll wait until folks allow me to discuss stuff
with them in-person, before I decide how
'similar' the approach I've been discussing is
to approaches that are in-place.
Until then, why do you think it is that, =if= it is
as you say, folks've been 'bagging-it' with respect
to their Responsibilities to get all of this into the
Popular Media?
I've not been reading in the Neuroscience
stacks, but I've not seen a single Generalized
Report of efforts to use EEG to follow the
'burning fuses' of action potentials.
If it's as you say, then you should contact
a good Science Writer [say, Natalie Angier,
William Broad, John Noble Wilford, Kenneth
Chang, at The =New York Times= Science
desk], and get this stuff into the Popular
Media. [The =NYT= also employs some pretty-Good
Editors.]
It's too-important to sit in the stacks, so I
encourage you to get it Communicated.
> Secondly, I think you are overestimating the spatial resolution and
> robustness that one can ascertain from electrode recordings.
Nope. It's why I emphasized the electrode
Engineering Problem. Folks need to hook-up
with accelerator-detector Engineers. That's all.
> Regardless of
> what anyone may say, the EEG is still a relatively crude instrument.
Folks need to hook-up with accelerator-
detector Engineers. That's all.
> conduction and inverse problems will still present a problem in the
> localization approach you suggested.
Only be-cause folks've not, yet, hooked-
up with accelerator-detector Engineers.
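For readers unfamiliar with the inverse problem being referred to: scalp potentials y are, to first approximation, a linear mixture y = L s of many sources s through a lead-field matrix L, and with far fewer sensors than sources the recovery of s is ill-posed. A minimal numpy sketch of the standard Tikhonov-regularized minimum-norm estimate, using a random toy matrix rather than a real lead field:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources = 8, 32                      # far fewer sensors than sources
L = rng.standard_normal((n_sensors, n_sources))   # toy lead-field matrix (not physiological)

s_true = np.zeros(n_sources)
s_true[5] = 1.0                                   # a single active source
y = L @ s_true                                    # noiseless scalp measurement

# Minimum-norm estimate: s_hat = L^T (L L^T + lam I)^{-1} y
lam = 1e-6
s_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), y)
```

The regularized estimate reproduces the recordings almost exactly, yet differs from the true source pattern; that non-uniqueness is the inverse problem, and raising the electrode count alone does not remove it.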
> However, the mathematical techniques
> often employed are quite useful in ameliorating a good percentage of this
> limitation. Otherwise, the actual EEG recording and approach would not be
> worth the paper it is written on! The EEG does not measure the direct
> activity occurring deep within the cerebrum, such as within the
> diencephalon or subcortical telencephalon, unless this activity accesses
> the cerebral cortex.
It's why I emphasized the electrode-Engineering
Problem.
> So even having a considerable montage of electrodes will still not
> allow you to directly assess subcortical electrical activity and its
> contribution to scalp electrodes (assuming that is what you have suggested).
> In other words, sticking on a bunch of electrodes will not increase the
> depth extent of the recording. It is still limited by the skull, layers of
> the dura and pia mater, layers of the neocortex, as well as by volumetric
> considerations regarding the makeup of the tissue.
If other scanning techniques can see into
the depths, EEG can, too, and it will, as
soon as folks hook-up with accelerator-
detector Engineers.
> Thirdly, you cannot discriminate the contribution of large diameter or
> medium diameter fibers from scalp or intradural recordings. What you are
> recording at the actual sensor are the reflective summated potentials of
> hundreds of thousands of neurons. There is no way to differentiate, for
> example, the contribution of specific small clusters of neurons using scalp
> electrodes or even depth electrodes, in fact. The spatial resolution is
> not sensitive enough.
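The many-to-one summation described in the quoted paragraph can be demonstrated in a few lines: with fewer sensors than sources, the forward model has a null space, so two quite different source patterns produce identical recordings. A toy numpy sketch (the forward matrix here is random, not physiological):

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_sources = 4, 20
L = rng.standard_normal((n_sensors, n_sources))   # toy forward model

s1 = rng.standard_normal(n_sources)               # one pattern of source activity

# The last right singular vector of L lies in its null space:
# activity along it is completely invisible at the sensors.
null_vec = np.linalg.svd(L)[2][-1]
s2 = s1 + 3.0 * null_vec                          # a very different pattern
```

Both `s1` and `s2` yield the same summed potentials `L @ s1 == L @ s2` at every sensor, which is exactly why no montage, however dense, can differentiate them from the scalp.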
I do not 'overlook' the fact that there're a hundred
billion neurons in-there.
It's just that every one of them is directionally-
mapped within the neural Topology.
If I ever get a chance to work with actual data,
I Expect that what I'll find is that folks've been
'ignoring' this neural-topological-mapping - be-
cause their approach is Conservative with re-
spect to the Maths that's been handed-down
since Berger. I Expect I'll find that folks've
been Preserving the Maths, rather than doing
the Science.
> Your suggestion of 3 sets of simultaneously recorded data is unclear. The
> conventional EEG montage for recordings typically uses 10-20 sets of
> recording electrodes anyway. And often this number will increase in ERP
> studies, in order to maximize spatial and temporal resolution. One
> solution might be (although it would be difficult and costly) to use the
> typical montage of surface recording electrodes, but also include
> subcortical mapping using telemetric electrodes that are set up in a
> montage similar to the surface one. (Obviously this approach has and would
> have numerous flaws.)
It's mostly an Engineering Problem.
[I wrote about it in the 'disappearing' msg
I posted to DD, but when that lengthy dis-
cussion 'disappeared', I was 'distracted'
from rewriting it.]
For instance, it would be useful if electrodes
were embedded in 'patch'-arrays that were
addressable in the 'same' way that current
LCDs are addressed. This, alone [which is
'immediately'-doable] would =greatly= enhance
spatial resolution.
Such 'patch-arrays' could be used in localized
EEG recordings - to look, in focussed ways, at
particular sub-dynamics, with each patch having
its own CPU so that analysis could occur
in-parallel, in real-'time'.
Then the patch-array-density could be gradually
increased to 'full'-coverage.
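The LCD-style addressing proposed above amounts to row/column multiplexing: drive one row-select line, read all the column lines, then step to the next row. A toy simulation under that assumption (the array size and the voltage source are invented for illustration):

```python
import numpy as np

ROWS, COLS = 8, 8                       # one hypothetical 64-electrode 'patch'

def read_columns(voltages, row):
    """Simulate driving one row-select line and reading every column ADC."""
    return voltages[row, :]

def scan_patch(voltages):
    """Row/column-multiplexed acquisition, one row-select step at a time --
    the same scheme a passive-matrix LCD uses for its pixels."""
    frame = np.empty((ROWS, COLS))
    for r in range(ROWS):               # step the row-select line
        frame[r, :] = read_columns(voltages, r)
    return frame

rng = np.random.default_rng(2)
electrode_voltages = rng.standard_normal((ROWS, COLS))  # stand-in scalp voltages
frame = scan_patch(electrode_voltages)
```

Each patch could run its own `scan_patch` loop on its own CPU, which is the in-parallel, per-patch analysis the text proposes.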
I don't know what, if anything, it'd yield, but I'd
also like to investigate using electrodes that're
suspended at distances separating them from
the brain. I'd want to look for another usable
energy-gradient - another 'ramp' - another
'differential'. Superficially, it'd be even more 'foggy',
but that's just-it - a relative-'fog' 'differential' is
Useful in sorting-out what's 'foggy', but less-so.
Today's resolution Problem maps the route to
its Solution.
> My point regarding the many-to-one mapping still remains a difficulty in
> this approach, as anyone who actually does this research will constantly
> state. The same problem emerges regarding the specificity of
> deoxyhemoglobin-based fMRI methods for cognitive and clinical studies. In
> differential mapping, although the approach obtains its usefulness from
> using two or more analogous but orthogonal activation states designed
> specifically to suppress common signals, the functional activity generated
> still corresponds to the subtraction of these conditions. By contrast,
> single-condition mapping will not rely on a subtraction from two or more
> orthogonal activated states. Moreover, this approach is significantly more
> demanding of spatial accuracy than differential signals. The problem
> with the multi-mapping approach is that it does require some element of a
> priori knowledge of analogous but orthogonal activation conditions, often
> knowledge which is not always feasible to have.
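The subtraction at the heart of the differential mapping described in the quoted paragraph can be written out directly: two conditions share a common background signal, and subtracting their maps suppresses it, leaving only the condition-specific activity. A synthetic-data sketch (all maps here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
shape = (16, 16)                         # a toy activation map

common = rng.standard_normal(shape)      # signal shared by both conditions
effect = np.zeros(shape)
effect[4:8, 4:8] = 2.0                   # activity unique to condition A

cond_a = common + effect                 # map recorded under condition A
cond_b = common.copy()                   # the orthogonal control condition

diff = cond_a - cond_b                   # differential map: common signal cancels
```

The same subtraction also exhibits the stated limitation: if the control condition is not truly orthogonal (i.e. `cond_b` contains part of the effect), the difference map under- or over-estimates the activity of interest.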
> I appreciate this comment and your insight....
> > I'm just working the problem in the ol' noggin lab [have
> > no access to actual EEG data], but I've used the same
> > methods in resolving all manner of Problems, and it's
> > always produced strong results, no matter what the
> > problem.
It won't be Settled until I'm allowed to meet
with folks in-person - until I can 'play' with
actual data.
Cheers, Neil, ken [k. p. collins]