What the Neocortex Does

Oliver Sparrow ohgs at chatham.demon.co.uk
Mon Aug 7 04:27:04 EST 2000

"Kevin K." <KK at _._> wrote:

> I don't see anything particularly complicated
> about the behavior of bats (or most
>people, for that matter!). 

Breathtaking. You cannot be serious. (?)

>Church's thesis is very strong and time-tested, and your comment about
>it is what led me to post. To disprove it, you must produce a machine
>which outperforms a TM like a TM outperforms a finite automata or a
>push-down stack. 

How would you tell if you had one such? There are two possible answers, I
think. First, you could judge that it was more versatile in some way as a
result of what it did or what you could do with it. Second, you could make
this judgement 'hard' by assessing the degrees of freedom implicit in the
repertoire of behaviours which the system could produce. Step functions in
these point to step functions to be explained; and 'telling' - as with the
first sentence in this para - equates to 'explanation'. I can tell that I
have something transcendent upon the CT model if the explanation that I
need of its properties is itself transcendent. TMs transcend because they
have the capacity to respond in contingent ways, whereas stacks do not. Xs
transcend TMs because they have the capacity to do Y, which TMs do not. We
find Y through our model of what the system is doing.

Very well. So what is the "Y" of the brain that proves it non-computable?
This is, of course, something of a (rather literal, in some cases) Holy
Grail. My own view, for what it is worth, is as follows; and is in no way
restricted to brains. 

Point A: much of the real world is uncomputable, for reasons connected to
access to information, to combinatorial problems and other practical
issues. This may sound trivial in the context of intellectual debate, but
note that general relativity is also a practical issue: whatever thought
experiments one can bring forward about how it ought to be if information
could be transmitted instantly, everywhere, it is a fundamental feature of
the universe that it cannot. Much the same is true of gauge invariance. 

Point B: that to be computable, you have to have a model to compute. There
are many instances in basic physical systems in which the model that
governs the components is invalidated when the components interact.
Symmetry breaking is a local instance of this. Complex structures may well
be made of components which are embedded in several intersecting systems,
each with a separate emergent model. Any one of these may be computable,
once understood, but scrutiny of the component parts will not yield this
understanding. Where such systems themselves become components of yet other
systems, and where the 'other systems' contribute to the structure that you
are trying to understand, then the resulting non-linearity complicates
understanding. Further, the terms of reference in which computability is to
be expressed - 'simulate that system' - have to be couched in terms which
refer to the system in question. The computability is referential, and not
ex ante to the structure. 

This is, perhaps, the "Y" to which I referred above. A TM differs from a
set of look up tables chiefly because it is capable of handling
contingency. That is, it can branch. Look up tables can deliver
instructions, but the outcome is invariant and one way: inside to out. A TM
can take in data and respond in ways which are still invariant, given
identical circumstances in its operating environment, but the flow is now
two-way (outside-to-in, inside-to-out), and a wholly new repertoire is so
created.
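
The distinction can be sketched in a few lines of Python (names and the
tiny machine are illustrative, not a formal construction): a lookup table
is one-way, while a TM-like controller branches on state and input and
writes back to its tape.

```python
# A lookup table: output depends only on the key - one way, inside to out.
lookup = {"a": "halt", "b": "go"}

def table_machine(symbol):
    return lookup[symbol]

# A minimal TM-like controller: the response is still invariant given
# identical circumstances, but it branches on state *and* input, and it
# writes back to its tape - the flow is two-way.
def tm_step(state, tape, head):
    symbol = tape[head]
    if state == "scan" and symbol == 0:
        tape[head] = 1              # write back to the environment
        return "scan", head + 1
    if state == "scan" and symbol == 1:
        return "done", head         # contingent branch: halt condition
    return state, head

tape = [0, 0, 1]
state, head = "scan", 0
while state != "done":
    state, head = tm_step(state, tape, head)
# tape has been rewritten to [1, 1, 1] by the machine's own activity
```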

A "Y" system is different from a TM because it is capable of changing the
ground rules - the equivalent of the logical instruction set - on which it
operates. That is, it is embedded in a loop with its operating environment,
just as is a TM - but the way in which it handles this loop alters the
operating instructions themselves. 

This is equivalent to asserting that the instruction sets within "Y"
systems are not identical with boolean relationships, and are an incomplete
- perhaps infinite - set. How can this be? Answer: the instruction sets are
the models that are in play. There can be a very large number of models,
each dependent on other models, in play in a system; and these can be
combined not merely factorially, but hierarchically and dynamically.
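
As a toy illustration (again a sketch, not a formal construction), here is
a system whose encounters with its environment install new entries in its
own rule set, so that the 'instruction set' in force at the end of a run
differs from the one it started with:

```python
# Rules map an observed symbol to a response. An unrecognised symbol is
# treated as "novel", and the encounter itself installs a new rule: the
# ground rules on which the system operates are altered by the loop with
# its environment.
rules = {"x": "ignore"}

def step(symbol, rules):
    response = rules.get(symbol, "novel")
    if response == "novel":
        rules[symbol] = "ignore"    # the system rewrites its own rules
    return response

history = [step(s, rules) for s in "xyzy"]
# The first 'y' is novel; the second is not. Identical input, different
# response - because the instruction set changed in the course of running.
```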

Any model can perhaps be reduced to a maze of boolean instructions, but in
so doing, something is lost. By analogy, there are no 'boolean
instructions' in the real world: they are always instantiated on something
- patterns of charge, marks on a page, less accessibly, states in a human
mind. They do not exist of themselves. Yet they are meaningful if the CT
hypothesis is to be meaningful. 

What is the "something" that is lost? It has to be lost to the observer,
since it is impossible to exclude a contributor from a system without
changing the system itself. So what does the observer lose by taking an
atomist, rather than a holistic view?  Answer: access to explanatory
models. If all I see is gas molecules as individual instances, then it is
difficult to construe the concept of phase change, thermodynamics or
turbulence from only the minimal model. I need bulk statistics - deriving
'temperature' as a variable that affects only the ensemble, for example. A
CT observer, wishing to 'compute' individual molecules, would not have
'temperature' as a variable, and would build a contingent model of the
molecule's response to its immediate environment. The model molecule would
do the mathematical equivalent of bouncing around, based on the
CT-observer's model of it.

If a myriad of these were adequately computed, the result would - amazingly
- mirror thermodynamics. This is hailed as a vindication for the CT
approach. However, we delude ourselves. A true CT observer would be
shattered, for their model would have failed. Things would be happening
that *They Had Not Written Into The Model*. A new tranche of observations,
of the molecules in a group, would have to be developed into a model that
would then be found to fit the behaviour of the simulation. The boolean
rules would have been overwritten - by implication, in action, but not in
the software, obviously - by systems that came into existence as a result of
their operations. Even in minimalist CT structures, therefore, what is
written down and what happens vary depending on the interaction of the
simulated parts. Thus "Y", and thus minds. An attentive observer could,
perhaps, explain why a given molecule was being moved around in
reductionist terms once they had formed a perfect model of what was
happening in a complex system. Where this depends upon an operating system
itself made of these very same systems, however, as in a mind, then this
perfect model would be permanently open to subversion and could never be
known to be complete or full, save after the event. The means to render it
into CT terms would exist only once the outcome was proven. It could never
be anticipated.
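
A minimal sketch of the point, in Python (the box, the molecule count and
the dynamics are all assumptions chosen for illustration): each molecule is
computed individually, from a model in which 'temperature' appears nowhere;
only an observer of the ensemble can derive a temperature-like quantity.

```python
import random

# Each molecule is modelled individually: a position, a velocity, and an
# elastic bounce off the walls of a unit box. No bulk variable appears in
# this per-molecule model.
random.seed(0)
molecules = [{"x": random.random(), "v": random.uniform(-1, 1)}
             for _ in range(1000)]

def step(m, dt=0.01):
    m["x"] += m["v"] * dt
    if m["x"] < 0 or m["x"] > 1:        # contingent response to the wall
        m["v"] = -m["v"]
        m["x"] = min(max(m["x"], 0.0), 1.0)

for _ in range(100):
    for m in molecules:
        step(m)

# Only over the *ensemble* does a temperature-like quantity - here, mean
# kinetic energy - become available. No single molecule "has" it.
mean_ke = sum(0.5 * m["v"] ** 2 for m in molecules) / len(molecules)
```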

This result does not say that minds are not emulatable in machines: indeed,
minds are supported on machines, called brains. The trick may well be to
set up predispositions to interact, specialised structures that do
particular mission-critical things, and then to let the structure write and
re-write its own rules. You cannot "write" a mind, or if you can, you
cannot expect it to stay written that way. You can write the seed and the
soil, the weather and the seasons; and then let the ecology take its own
course.

Oliver Sparrow
