On 9 Jan 2002 11:26:45 -0800, GuilfoyleMR at cardiff.ac.uk (Mathew Guilfoyle) wrote:
>This is the whole crux of my argument. In your theory to what or whom
>does the brain represent this knowledge? There are three
>possibilities - representation to a finite succession of brain
>regions, in which case my essential argument is unchanged, just the
>details. Second, you can have an infinite regress of representation,
>which either gets you nowhere or is the same as my argument
>depending on which way you look at it. Lastly you can posit a
>non-physical homunculus or soul who can 'view' the goings-on in the
>visual cortex and infer knowledge. Your argument falls into the
>'Cartesian theatre' trap and even if you don't realise it, is a form of
Okay, we have some confusion of terminology here.
When I use REPRESENT, I mean how information from the retina
is transmitted, sorted and coded in the primary visual cortex. Thus
the visual space is 'represented' in the brain: it is not being
'presented' to anybody, but merely organised systematically for
further processing (which is the next step, QUALIA).
When I use KNOWLEDGE or QUALIA, I refer to how this represented &
sorted information is then processed in such a way that it becomes
part of a sensation of existence.
I totally agree with you that qualia does not need to be 'presented'
in a Cartesian theatre to the 'mind', but I do think that qualia still
exists as a phenomenon when you discard Cartesian dualism.
IMHO qualia or "the sensation of being" outside the context of
Cartesian dualism describes how the brain is able to process the
primary information represented in the sensory systems to be made
aware of it (rather than just act and react on it). The phenomenon of
epileptic automatisms shows you what the brain would do without this
awareness step.
Cartesian-style dualism of mind and brain may not exist, but I do feel
there are broadly distinct brain systems performing different
computational tasks. The two which pertain to this discussion of
qualia are: (1) primary representation & sorting of information, (2)
transforming this information into a motivation and consciousness.
I recommend reading Damasio's "The Feeling of What Happens", if you
have not already done so. Damasio describes step (2) as the
computational synthesis of:
(I) primary sensory information from the sensory, visual & auditory
cortex with the autonomic sensors in the brainstem & hypothalamus.
(II) the 3D map space in the somatosensory, secondary sensory,
parietal cortices & colliculi.
(III) the arousal system of the reticular complex and midbrain.
(IV) other systems which I have not actually fully understood yet,
like the insula lobe and the thalamus.
Naturally, being Damasio, he includes his pet theory on emotions there
under (I). [Disclaimer: I am aware that I am being very general here
and sounding like a phrenologist heretic, but if you want, ignore my
ascribing cognitive systems to anatomical locations.]
>What I am trying to say is that it is not necessary to have this
>representation step if you consider yourself (i.e. you in the
>strongest sense) to be nothing but the dynamics of your brain. If
>'you' are coded in there, then the information the visual cortex can
>discern will influence you via appropriately evolved mechanisms. That
>is, the visual information will influence decisions about actions in
>the world and further, more complex, activities. It never needs to be
>represented because there is nothing to represent to, because 'you'
>and 'I' are nothing but the intricate machinings of our respective brains.
Of course we are, but that does not mean that the sorted, represented
information does not have to be computationally changed into a
sensation of being. What you propose is a convenient logical
sleight-of-hand: from the premise that the sensation of self is
nothing but the intricate workings of the brain (TRUE), you conclude
that the sensation of self needs no explaining (FALSE). The sensation
of self is real (well, it is to me at least), so whether a singular
self is real or not is irrelevant; how the sensation arises still has
to be explained.
What I am proposing is that understanding the precise mechanics of
step (2), as mentioned above, is what qualia is all about, and it is
an area where existing cognitive neuroscience shows limited advances
(if any). Philosophers are therefore right to point out that existing
neuroscience is not actually solving the problem of qualia. That of
course does not mean the problem of qualia is not soluble, but simply
that the approach we are using now is not producing many results.
IMHO the strategy which might help in moving towards that goal would
be computational science at a systems level (whatever that means :P)
and network theory.