Consciousness ~=~ self-referentiality (was Re: Consciousness, New Thinking About)

Matt Jones jonesmat at physiology.wisc.edu
Thu Jun 6 11:34:18 EST 2002


"tony.jeffs" <tonyjeffs2 at REMOVEaol.com> wrote in message news:<H%wL8.56660$wd3.9371202 at news6-win.server.ntlworld.com>...
> "Matt Jones" <jonesmat at physiology.wisc.edu> wrote in message
> news:b86268d4.0206051202.47193f3 at posting.google.com...
> <snip>
> > To me consciousness seems more likely to be describable as a sort of
> > "information", for which there are many very precise definitions, or
> > "complexity", for which there are none as far as I know.
> >
> 
> Are there none and is there a niche in the market!  :->

Well, after I wrote that, I realized that there actually are some very
precise definitions of "complexity", but in the long run the most
precise and useful ones boil down to something very much like (if not
exactly) Shannon entropy.

One of the best ones is "Kolmogorov complexity", also known as
"algorithmic complexity" (good link: http://www.hutter1.de/kolmo.htm).
The idea here is that the complexity of something can be quantified by
examining the -length- of the shortest description of that thing. For
example, say you have two books, one of which is the Bible, and the
other one is exactly the same number of words, but every single one of
them is "Jehovah" (you know, like the Monty Python bit: "Jehovah,
Jehovah, Jehovah, Jehovah..."). Well, the Jehovah one isn't very
algorithmically complex because you could convey exactly the same
information with the following algorithm:

Step 1) Write down the word "Jehovah".
Step 2) Repeat Step 1 a gazillion times.

In contrast, the Bible itself would need a much, much more complicated
algorithm to reproduce it. In fact, the most algorithmically complex
book of the same size would literally be one that contained the same
number of characters but where each character was chosen at random.
The only algorithm that could reproduce this would be to specify every
single character in sequence. Thus this algorithm would end up being
exactly the same size as the book itself.

The degree to which information can be "compressed" into an algorithm,
instead of being spelled out in full, thus gives a direct measure of
its complexity.
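
If you want to see this in action, here's a quick Python sketch. It
uses zlib compression as a computable stand-in for Kolmogorov
complexity (the real thing is uncomputable, so a compressed length is
only an upper bound on the shortest description), and the size N and
the fake "books" are of course just made up for illustration:

import os
import zlib

N = 100_000  # length of each "book" in bytes

# Book 1: the same word over and over ("Jehovah, Jehovah, Jehovah...")
jehovah_book = (b"Jehovah " * (N // 8))[:N]

# Book 2: the same number of bytes, each chosen at random
random_book = os.urandom(N)

# The compressed length upper-bounds the shortest description
for name, book in (("repeated", jehovah_book), ("random", random_book)):
    packed = zlib.compress(book, 9)
    print(f"{name:8s}: {len(book)} bytes -> {len(packed)} bytes compressed")

The repeated book shrinks to a few hundred bytes, while the random one
stays essentially the same size - the two algorithms above, in
miniature.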

BUT - this doesn't really get us any closer to consciousness, for
exactly the same reason that Shannon doesn't. Indeed, the Shannon
entropy measures exactly how non-compressible a certain piece of
information is, and the Kolmogorov complexity therefore approaches the
Shannon entropy for very complex things. Under both schemes, the most
"complex" things are the most random things (which seems to be going
in the wrong direction to explain consciousness).
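
To make the entropy connection concrete, here's a companion sketch
(same caveats as before - this is the empirical entropy of the byte
distribution, H = -sum(p * log2 p), in bits per byte, not the true
source entropy):

import math
import os
from collections import Counter

def bits_per_byte(data):
    # Empirical Shannon entropy of the byte distribution: -sum(p*log2(p))
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

print(bits_per_byte(b"Jehovah " * 12500))  # few, skewed symbols: well under 8
print(bits_per_byte(os.urandom(100_000)))  # nearly 8 bits/byte: incompressible

The maximally random book sits near the ceiling of 8 bits per byte,
the repetitive one far below it, matching the compression ratios
above.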

There's a cool twist to this, though. Shannon proved a theorem that
basically says this:

- A perfectly efficient information coding scheme will produce signals
that appear entirely random.

Technically, he was really talking about the Fourier spectrum of the
signal being Gaussian white noise, but that amounts to the same thing
as what I just said above.
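
You can watch this happen with the same tools as above (again just a
sketch, with zlib standing in for the "perfectly efficient" coder that
it only approximates):

import math
import zlib
from collections import Counter

def bits_per_byte(data):
    # Empirical Shannon entropy of the byte distribution, bits per byte
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

# Highly redundant input: the counting numbers, written out as digits
text = b" ".join(str(i).encode() for i in range(50_000))
packed = zlib.compress(text, 9)

print(bits_per_byte(text))    # about 11 distinct symbols: ~3.5 bits/byte
print(bits_per_byte(packed))  # near 8 bits/byte: the coded signal looks random

The better the coder squeezes out the redundancy, the more its output
resembles pure noise.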

So, there may still be something to the idea that consciousness is
related to algorithmic complexity, and therefore to unpredictability.
No wonder it remains intractable to definition.

Hmmm.......  unpredictability.....

Free Will, anyone?




Cheers,

Matt's Homunculus
(I'mmm baaaacckk.....)



