modelling consciousness (re: brain and mind)

FUNK STEVEN LESLIE 93funkst at wave.scar.utoronto.ca
Tue Apr 18 20:23:45 EST 1995


Hi,

	Sorry if this doesn't work, it's my first time trying this
posting thing from this program.  I just finished a seminar on
consciousness using Penrose's 'Shadows of the Mind' as a text.  I'm in
the process of writing an essay for the course and thought it might
contribute something to the discussion.  Please bear in mind that this
is a first draft written during finals.  Thanks.

Steve Funk
93funkst at wave.scar.utoronto.ca

PS: if anybody out there is looking for a grad student and finds this
interesting drop me a note.


	In the argument put forward by Penrose, a case is made for the
limitations of the computational modelling of cognition and
consciousness.  The algorithmic approach is described as a
system which is based on the foundations of logic as they are
applied to Turing machines.  Logic is described as a system,
extracted from the platonic world, where a set of symbols are
manipulated in some formal way.  The goal is to produce a
consistent formal system which fully describes how the modelled
phenomenon works.  It is important to understand the precise
definition of logic, as this will become a central point in
Penrose's argument later on.  By formal, I mean a system which
follows an explicit set of rules, and never deviates from them. 
By consistent, I mean that a system could never contradict
itself.  So, P and ~P (NOT P) could never coexist.  The Turing
machine is a hypothetical mechanism that is able to apply the
rules of a formal system in such a way that any computable
function may be processed.

  

	For some decades the emphasis of modelling research has been to
develop a logical system which is able to describe human
behaviour.  This Turing machine of the mind would initially be
limited to simple, dedicated tasks; however, the eventual goal
was to produce an artificially intelligent machine. 
Furthermore, this 'robot' would be entirely cognizant, aware,
and conscious.  The research did not prove to be successful. 
While it provided a fair amount of information about how humans
think, the intelligent machine never materialized.  Even the
simplest, most dedicated tasks seemed unattainable.  For many
this was predicted by Gödel's incompleteness theorem, which
shows that no sufficiently powerful formal system can prove its
own consistency.  It seemed as though logic was fundamentally
flawed.  But Gödel's theorem does not state that a logical
system is inconsistent.  It simply shows that even if the system
is consistent, this can never be proven from within the system
itself.  So, while computationalists and logicians were not
proven wrong, they were still compromised.  It seems to be an
obvious problem for a system to be unable to prove that it is
abiding by its own fundamental rule of consistency.  



	So, the issue of consistency within logic is set aside.  The
issue of consistency within consciousness seems to provide an
even greater problem.  If it is the goal of logic to produce a
consistent formal system, then it might be safe to say that
logic can only model consistent formal systems.  If the mind
were shown to be inconsistent, or informal, then logic would be
for the most part inapplicable.  While the issue of formalism
might be difficult to resolve, the issue of consistency is clear
cut.  One can easily demonstrate the inconsistency of the mind
by asking: "are all of your beliefs true?"  Most people will
respond no, recognizing that there is a difference between
objective truth and subjective truth, where objective truth
takes precedence.  If you then ask people: "what are beliefs?"
most will be forced to respond that a belief is something that
is held to be true.  So the word 'belief' takes on two
contradictory definitions.  The mind is inconsistent.



	This leaves Penrose with an argument which is common to those
who call for some special component to consciousness.  If the
brain follows the consistent laws of biology, chemistry, and
ultimately physics, then how can the mind be inconsistent?  We
have already stated that a consistent formal system cannot model
a phenomenon which is either inconsistent or informal.  So, how
can a consistent formal system produce inconsistent or informal
behaviour?  For Penrose there is only one way out: quantum
mechanics.  It is only in quantum mechanics that he believes we
can find the strange and mystical component needed to produce
consciousness.  But this may not be necessary.



	There are two central problems with what Penrose is saying. 
They both have to do with the way the problem is described.  The
algorithmic approach, as defined by Penrose, requires a
cognitive level description of behaviour, described by a
'classical' logical system.  By cognitive level description, I
mean that the behaviour is defined in terms of higher level
functions, so that when dealing with object recognition the
actions of the individual neurons never come into play.  By a
classical logical system I mean one in which there is no
consideration of space or time.  This may seem like a strange
definition now, but it will become important later on.



	As a demonstration consider the real world, and a cognitive
level description of the things in it.  It is important to
remember that it is a fundamental goal of logic to produce a
system where P and ~P cannot coexist.  The descriptions might
include among them:  "all people born at the same time are the
same age."  This would imply, quite correctly, that twins will
always be the same age.  However, if we take into consideration
special relativity, and in particular the twin paradox, then it
becomes apparent that this level of description causes problems.
Special relativity requires that time slow down, relative to a
stationary observer, as one approaches the speed of light.
While time appears to proceed normally for every person
regardless of their speed, there is a relative difference.  So,
if a person boarded a spaceship and travelled near the speed of
light for 1 year, when they returned more than 1 year would have
passed back on earth.  In fact, depending on the speed,
thousands of years could have passed.  The human race could be
extinct, and the sun burnt out.  So, what happens to our
cognitive level description of the world if we send one twin on
a short trip near the speed of light?  Let's say that during a
1 day trip 2 days have passed on earth.  In this case our model
of the world would have to include the statement: "not all
people born at the same time are the same age."  So, P and ~P
have emerged.  And all of this has occurred within the
consistent laws of physics.  Quantum mechanics was not only not
required, but is specifically incompatible with the theories of
relativity that produce this behaviour.  Not only is it
impossible to prove logic's consistency, it is easy to make
logic inconsistent.  But is this such a bad thing?  Where does
the flaw lie: in the cognitive level of description or the
global nature of logic?
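
The arithmetic behind the twin scenario can be checked directly.
A minimal sketch (the particular speed used here is my own
illustrative choice, not a figure from the essay): at speed v,
the Lorentz factor 1/sqrt(1 - (v/c)^2) gives how many Earth days
pass per traveller day.

```python
import math

def earth_days_elapsed(traveller_days: float, v_over_c: float) -> float:
    """Earth-frame time elapsed while the traveller experiences
    `traveller_days`, moving at speed v expressed as a fraction of c.
    Uses the Lorentz factor gamma = 1 / sqrt(1 - (v/c)^2)."""
    gamma = 1.0 / math.sqrt(1.0 - v_over_c ** 2)
    return traveller_days * gamma

# At about 86.6% of the speed of light, gamma is ~2: a 1 day trip
# for the travelling twin corresponds to ~2 days back on earth,
# matching the ratio used in the example above.
print(earth_days_elapsed(1.0, math.sqrt(3) / 2))
```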



	The answer is that there really is no flaw.  It is important to
understand that symbolic logic is just one of the formal systems
that exist in the platonic world.  It cannot be wrong or flawed
in that it performs exactly as its principles require.  Instead
it might be better to say that the problem is one of
inappropriate application.  It may be considered inappropriate
to describe consciousness at the cognitive level.  In this case
I raise the issue as not just being troublesome for logic, but
potentially for any descriptive framework.  It may very well be
that describing things at different levels provides the
modelling system with more, less, or different information.  One
possibility which seems to hold promise is the idea of emergent
phenomena.  But, if we believe that consciousness emerges at
some level of complexity, we must ask why.  This has led to a
number of involved issues including panpsychism and the notion
of critical mass.  However, it does provide an explanation as to
why the level of description is so important.  After all, if you
discuss such a system solely in terms of higher level
behavioural performance, understanding the nature of the
underlying complexity becomes impossible.  Consider a Monet
painting.  Examining the individual brush strokes from
centimetres away tells very little about the image.  Examining
the entire image from meters away, tells little about the brush
strokes.  Now, in either case how would you describe the way in
which the image is created?  In one case the response might be:
"what image?" In another, the response might involve no
reference to brush strokes at all.  And yet there must be a
clear line of reasoning from one to the other.

	In discussion of  the global nature of logic we will be
examining a limitation that is for the most part specific to
logic.  It may apply to several components of the platonic
world, however, there may be other systems to which it does not
apply.  There are two examples of the global nature of logic and
the limitations that they entail.  First, there is the case of
the twin paradox.  Logic fails to account for this world, partly
because of the level of description.  However, this cognitive
level could be accommodated in a system which allows for a kind
of modular consistency.  The two statements P and ~P may not
coexist, at the same place and time, however they could coexist
in the greater world.  So that one might say that either a set
of twins are the same age or they are not, but that they may not
be at the same time the same and different ages.  Put another
way, each incidence of twins is assigned either P or ~P, but not
both simultaneously.  In this way a set of twins may obtain
consistency within its own world.  But, the greater world of all
sets of twins would be inconsistent.  The second example is
given by Wittgenstein.  The issue is raised in relation to a
number series.  While the series 1, 2, 3, 4, 5, ... might be
defined by the rule x(n) = x(n-1) + 1, a number series such as
2, 4, ... causes problems.  This is because the series might be
defined as x(n) = x(n-1) * 2, or as x(n) = x(n-1) + 2.  In this
example both descriptions of the series must be entertained, yet
the two definitions are contradictory.  If the third number in
the sequence is 8 the first definition is appropriate; if the
number is 6 then the second is.  But, until the third number is
provided a logically inconsistent notion of the series must be
maintained.  Local consistency allows global
inconsistency to exist. This could be formalized in an
alternative version of logic.  However, the current system of
logic is incapable of making the distinction.  The important
thing here is that everything has its own nature, and that this
nature has consequences.  In the case of logic, it is its nature
to deal solely with global consistency.  In the case of the real
world, it is the nature of the universe to have locality
(barring quantum mechanical voodoo).  It is this incompatibility
that poses problems for the notion of a formal descriptive
system.  In order to be successful, the modelling medium and the
phenomenon being modelled must share certain characteristics
that are a consequence of their individual qualities.
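
Wittgenstein's series example can be made concrete.  A sketch
(the rule names and helper functions are my own, purely for
illustration): given only the prefix 2, 4, both candidate rules
fit, and only a further term discriminates between them.

```python
def doubling(prev: int) -> int:   # x(n) = x(n-1) * 2
    return prev * 2

def add_two(prev: int) -> int:    # x(n) = x(n-1) + 2
    return prev + 2

def consistent_rules(series):
    """Return the names of the candidate rules that reproduce
    every consecutive step of `series`."""
    rules = {"doubling": doubling, "add_two": add_two}
    return [name for name, rule in rules.items()
            if all(rule(a) == b for a, b in zip(series, series[1:]))]

print(consistent_rules([2, 4]))     # both rules fit: the series is ambiguous
print(consistent_rules([2, 4, 8]))  # the third term 8 leaves only doubling
print(consistent_rules([2, 4, 6]))  # the third term 6 leaves only add_two
```

Until the third number arrives, both contradictory definitions
must be held at once, which is exactly the local-versus-global
tension described above.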



	The issues of level of description and form of description are
intimately intertwined.  It may be possible to apply symbolic
logic to a neural description of the mind.  However, this
method, known as connectionism, has its own limitations. 
Connectionism by its nature is a system that utilizes
prototypical neural activity to perform tasks.  The primary
difference between this scheme and that of logic is the ideas of
space and time.  These become important only in the simplest
way.  In fact they are defined in terms of strictly ordinal
relationships between the neural units.  With this exception
connectionism can be equated with more classical logical
systems, albeit inductive ones.  The units are taken to
represent symbols, and the neural activity known as activation
is representative of a truth value.  In addition there is a
connection weight which links one unit to another; this may be
considered a rating of the importance of one symbol's truth
value to that of the other.  An initial pattern is put into the
collective system and the truth values of the individual symbols
interact in an evolutionary way until the system settles into a
stable state.  When this is done the resulting output is the
computational consequence of organization and topology of the
network, and the underlying rules that govern connectionism. 
These underlying rules may be considered to be another system
extracted from the platonic world.        
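
The settling process just described can be sketched minimally.
This is a generic Hopfield-style synchronous update, not any
specific model from the essay; the weights and initial pattern
are illustrative assumptions of mine.

```python
def settle(weights, state, steps=50):
    """Repeatedly update unit activations (+1/-1, read as truth
    values) from the weighted sum of their neighbours until the
    collective state stops changing -- a stable state."""
    for _ in range(steps):
        new_state = []
        for row in weights:
            net = sum(w * s for w, s in zip(row, state))
            new_state.append(1 if net >= 0 else -1)  # threshold; ties go up
        if new_state == state:  # no unit changed: the network has settled
            return state
        state = new_state
    return state

# Two mutually supporting units and a third that opposes both.
W = [[0, 1, -1],
     [1, 0, -1],
     [-1, -1, 0]]
print(settle(W, [-1, -1, -1]))  # settles into [1, 1, -1]
```

The final pattern is the computational consequence of the
network's topology plus the underlying update rule, as the
paragraph above puts it.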



	The problems that are specific to connectionism relate to the
way in which the symbols are organized.  Because the units come
to represent symbols the system becomes dedicated.  The units
may not represent other symbols, and the symbols may not be
represented in other units.  This reduces the possibility of
developing a fluid system.  Psychologists often speak of moving
items from long term memory to short term memory, or keeping
items in short term memory.  Yet this would require a symbol to
operate independently, without being trapped in a single
location.  This kind of representation also requires more of a
token style representation, which is to say that a unit is
required for every possible object in the world.  This would
require an enormous number of units, making such a system highly
inefficient.  However, the advantage of such a system is that it
allows for the simultaneous existence of P and ~P.  This could
be done quite simply by giving the model a number of 'work
spaces', or by a number of other means.  So the addition of an
ordinal notion of space and time has increased the complexity of
the system and its ability to account for conscious behaviour. 
But, along with this advantage are disadvantages which raise
issues of their own.



	As an alternative, perhaps the next step in evolution from
the connectionist approach, I propose a system which carries the
underlying principles further.  Temporal computation, for lack
of a better name, changes the computational medium which seems
to cause problems for connectionist systems.  In conventional
connectionism, the underlying mechanics are handled by
straightforward math.  However, the nature of mathematics
carries over to the behaviour of connectionism.  As an example
consider the case where two units are connected to a third.
Connectionism uses simple summation to evaluate the incoming
activation.  So,
inputs of 0 & 9, 1 & 8, 2 & 7, 3 & 6, and 4 & 5, all produce the
same net input of 9.  This is not the consequence of learning or
adaptation, but of mathematics itself.  In temporal computation
the medium becomes time.  The actual spiking rates of, say 4 & 5
are integrated to produce a unique output pattern that is
distinct from that produced by inputs of 3 & 6.  The individual
spikes are summed together, and then a decay rate is applied. 
This allows two spikes which are transmitted at relatively
similar times, to be integrated into a single spike.  This
summed value may then trigger a threshold, which would transmit
a different pattern for each different possible value. This
gains two things.  First, as already shown the assumptions of
mathematics no longer apply, allowing greater latitude in the
behaviour of the system.  Second, the symbol is now independent
of the unit.  Because the firing patterns have become unique and
meaningful, they have in essence become symbols.  These new
symbols may be transmitted from unit to unit so that a unit may
represent more than one symbol and a symbol may be instantiated
in more than one unit.  
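
A minimal sketch of the leaky-integration idea (the decay
constant and the spike trains below are my own illustrative
choices, not a specification of the proposal): each spike adds to
a trace that decays exponentially between spikes, so the timing
of the inputs, not just their count, shapes the result.

```python
import math

def integrate_spikes(spike_times, decay=0.5, t_end=10.0):
    """Leaky integration: each incoming spike adds 1 to the trace,
    and the trace decays exponentially between spikes.  Returns the
    trace value at t_end.  Unlike plain summation, two trains with
    the same number of spikes but different timing produce
    different values."""
    trace = 0.0
    t_prev = 0.0
    for t in sorted(spike_times):
        trace *= math.exp(-decay * (t - t_prev))  # decay since last spike
        trace += 1.0                              # incoming spike
        t_prev = t
    return trace * math.exp(-decay * (t_end - t_prev))

# Same spike count (4), different timing: plain summation would
# treat these trains identically, but leaky integration does not.
early = integrate_spikes([1, 2, 3, 4])
late = integrate_spikes([6, 7, 8, 9])
print(early, late)  # the later train, closer to t_end, retains more trace
```

Feeding such integrated values through a threshold, as described
above, would then emit a distinct output pattern for each
distinct input timing.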



	While I believe that this approach would provide better results
in attempting to describe consciousness, there is one important
thing to remember.  Each of the three representations described
here are drawn from the platonic world.  None is correct or
incorrect, it is merely a matter of finding one that is
appropriate.  However, I feel that progress is being made in
moving towards a paradigm with a greater descriptive power.




