kpaulc at earthlink.net
Sat Mar 6 19:58:34 EST 2004
"NMF" <nm_fournier at ns.sympatico.ca> wrote in message
news:11W1c.18466$qA2.1189023 at news20.bellglobal.com...
> > When you use only one set, there's not
> > enough information in-there to triangulate.
> > In triangulation, you have to have inform-
> > ation that is "three"-to-"one".
> > What you've described is "one"-to-"one",
> > with some guessing with respect to the
> > 'fog' of unknown-summations.
> > When you use "three"-to-"one", each of the
> > three sets =simultaneously= records a 'picture'
> > of the one thing. So each of the three sets
> > independently contains 'differentials' with re-
> > spect to the one thing.
> Although you bring up interesting features, I have a problem with what you
> are saying here. In this case you run into the possibility of mapping
> numbers that are assigned to what is, fundamentally, a single level of a
> factor. The problems with this approach are of course obvious. One
> problem would be the possibility of comparisons between various
> factors and several homogeneous factors, which would be completely
> inappropriate and sure to be used inadvertently in error. Generally, this
> scheme would not achieve the desired level of mutual exclusivity in
> terms of the descriptive and mapping power that you would be required
> to have. A priori knowledge regarding all possible correlated activity
> associated with a specific neuronal response and informational processing
> must be available. In other words, you require a complete picture of
> how the activity in the firing dynamics of a neuron is associated with aspects
> of informational processing and informational propagation. A complete
> knowledge of every aspect of neuronal signalling, and of how this translates to
> information processing, is required. I don't believe you have this
> information, simply because nobody has the answers to this, so
> you haven't provided a solution to anything. Therefore, even triangulation
> would still not be sufficient.
> As an aside point, even if you were to use a "three"-to-"one" mapping,
> there is no good reason why this would give a better "picture" than a
> four-to-one, or even a ten-to-one, or even a thousand-to-one map. The
> point is essentially moot, because presumably one would be required first
> to do a mapping regarding every single neuron, and then a map regarding how that
> neuron's activity at each possible moment can be embedded within the context of
> a network of neurons, whose activity at each possible moment can vary
> in a completely nonlinear fashion.
> Another aside point: the measurement approach that you are suggesting,
> using differential EEG, still runs into the same problem that any
> electroencephalographic approach would fall to. You still run into the
> inverse problem regarding the localization of the signal. Even the new
> statistical approaches still can't adequately solve this problem with
> absolute certainty, hence the reason why any estimate lies within
> a margin of statistical probability and uncertainty.
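The quoted 'inverse problem' is real for any =single= summed reading - a toy
sketch [all numbers invented, attenuation modeled as 1/r^2] of two source
configurations that one electrode, by itself, cannot tell apart:

```python
# Toy version of the EEG inverse problem at a single electrode:
# two different 'source' configurations that sum to the same reading.
# Strengths and distances are invented; attenuation is modeled as 1/r**2.

def reading(sources):
    """Summed potential at one electrode from (strength, distance) pairs."""
    return sum(s / r ** 2 for s, r in sources)

config_a = [(4.0, 2.0)]               # one strong, deep source
config_b = [(0.5, 1.0), (2.0, 2.0)]   # two weaker, shallower sources

print(reading(config_a))  # 1.0
print(reading(config_b))  # 1.0 -- indistinguishable from config_a
```

My point, below, is that three simultaneously-recorded, differently-positioned
datasets break exactly this kind of degeneracy.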
With the caveat that I've no hands-on experience,
I stand on what I've posted, and disagree with
all of your criticisms, above.
I think you've not understood what I discussed.
The goal of what I discussed is just to see the
EEG data in 3-D.
Toward that end, three sets of simultaneously-
recorded data completely map the Problem [to
the degree that the electrodes employed can, in
fact, 'see' to the depths of the brain].
I wasn't trying to say that all of the billions of the
brain's neurons would be isolatable; what I was
discussing was a way to =begin= working toward that.
And it's flat-out doable for large-diameter neurons
be-cause their action potentials are like burning fuses -
they have stereotypically-mapped occurrences, and,
with the three sets of data, the unfoldings of those
stereotypically-mapped dynamics are represented
in the data in stereotypically-graded ways.
With respect to a large-diameter, long fiber, 'all' of
the data corresponding to other neurons can be
filtered-out be-cause the data pertaining to other
neurons will not exhibit the 'differentials' that are
graded, 'continuously' [with 'jumps' between nodes
of Ranvier], in three different ways, across
the three datasets.
Do this filtering.
Learn from the doing.
Reiterate at increasing depth.
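For the flavor of that filtering, a minimal runnable sketch [the stereotyped
grading ratio and the tolerance are invented for illustration - they are not
claims about real EEG amplitudes]:

```python
import math

# Hypothetical stereotyped grading: a target neuron's activation shows up
# in the three simultaneous recordings with fixed relative amplitudes.
# The (1.0, 0.6, 0.3) ratio and the tolerance are invented for illustration.
TEMPLATE = (1.0, 0.6, 0.3)

def _unit(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def matches_grading(triple, template=TEMPLATE, tol=0.05):
    """Keep an event only if its amplitudes across the three datasets
    line up, up to overall scale, with the stereotyped grading."""
    cos = sum(a * b for a, b in zip(_unit(triple), _unit(template)))
    return abs(cos) > 1.0 - tol

# Scaled copies of the template pass; unrelated triples are filtered out.
events = [(2.0, 1.2, 0.6),    # TEMPLATE * 2.0   -> kept
          (0.5, 0.3, 0.15),   # TEMPLATE * 0.5   -> kept
          (0.3, 0.6, 1.0),    # reversed grading -> filtered
          (-1.0, 0.6, 0.3)]   # wrong sign on one channel -> filtered
kept = [e for e in events if matches_grading(e)]
print(kept)  # only the two template-scaled events survive
```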
To the degree that electrodes can see at-depth, even
activation pertaining to small neurons will be represented
'differentially' in each of the three datasets, allowing
their dynamics to be 'ranged' within the brain's 3-space,
and, therefore, located within that 3-space.
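What I mean by 'ranging', in a minimal 2-D sketch [the sensor positions, the
source position, and the noiseless exact distances are all invented; real data
would carry the uncertainties discussed above, and depth adds a third
coordinate]:

```python
import numpy as np

def trilaterate_2d(sensors, dists):
    """Recover a point's 2-D position from its distances to three known,
    non-collinear sensors. Subtracting the first circle equation from the
    other two leaves a linear system in (x, y)."""
    (x1, y1), (x2, y2), (x3, y3) = sensors
    r1, r2, r3 = dists
    A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                  [2 * (x3 - x1), 2 * (y3 - y1)]])
    b = np.array([r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2,
                  r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2])
    return np.linalg.solve(A, b)

# Invented geometry: a 'source' at (3, 4) and three sensors.
sensors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
source = np.array([3.0, 4.0])
dists = [float(np.linalg.norm(source - np.array(s))) for s in sensors]

est = trilaterate_2d(sensors, dists)
print(est)  # recovers [3. 4.]
```

With exact distances the system is determined; with noisy ones, one would fit
in a least-squares sense instead.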
In principle, one can carry it out to the degree of
one's choice - because everything is in-there in a way
that all activation is represented in three differently-
graded ways.
Within the resolving-power of the electrodes used,
the combination of the three gradients [three sets of
'differentials'] maps everything uniquely.
Yes, at any 'point' there will be many 'equipotentials',
but, as one moves away from that 'point' everything
is differentiable because of the gradients.
And the analysis is not 'intractable' because one doesn't
have to 'hunt' through all the data. One can just sort
the data for each 'point' that one wants to analyze,
and doing so eliminates most of the data, because the
'differentials' clearly don't fit.
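That sort-then-cut step, sketched [the 'expected differential' triple and the
tolerance are invented for illustration]:

```python
# Rank every recorded event by how far its three-way differential departs
# from the one expected at the chosen 'point', keep only the near matches.
# Sorting lets the scan stop as soon as the mismatch exceeds the tolerance.
EXPECTED = (1.0, 0.6, 0.3)   # hypothetical differential at the query point

def mismatch(event, expected=EXPECTED):
    """Sum of squared deviations from the expected differential triple."""
    return sum((a - b) ** 2 for a, b in zip(event, expected))

events = [(1.0, 0.6, 0.3), (0.9, 0.7, 0.2), (5.0, 0.1, 2.0),
          (-1.0, 0.6, 0.3), (1.1, 0.5, 0.4)]

ranked = sorted(events, key=mismatch)
tol = 0.1
kept = []
for e in ranked:
    if mismatch(e) > tol:
        break                # everything after this point fits even worse
    kept.append(e)
print(kept)  # three near-matches; the rest of the data is eliminated
```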
You know? I don't see any 'show-stoppers' - except
that currently-available EEG data is probably not
adequately orthogonalized with respect to producing
the three sets of 'differentials'.
I'm just working the problem in the ol' noggin lab [I have
no access to actual EEG data], but I've used the same
methods in resolving all manner of Problems, and they've
always produced strong results, no matter what the Problem.
I don't think you understand it yet.
k. p. collins