Does Intelligence require consciousness?

simonh_hibbs at simonh_hibbs at
Fri Nov 3 05:43:19 EST 2000

In article <8tu0vv$tb3$1 at>,
  mejqb at wrote:
> In article <39FF3E8F.F3011DF3 at>,
>   Raphael Jolly <raphael.jolly at> wrote:
> > mejqb at wrote:
> > > [...]
> > > That may well be, but such ad hominem poisoning of the well
> > > is not a good tool for rational thought.  For instance, Murray
> > > wrote in this thread:
> > >
> > >  IMHO intelligence does not so much *require* consciousness
> > >  as inescapably give rise to, or *cause* consciousness as
> > >  a natural by-product of knowledge about self and the world:
> >
> > I think this is the great mistake of the present a.i. paradigm. I
> > think consciousness has more to do with matter itself than with its
> > organization.
> All the evidence is counter to your belief.
> > In my view *intelligence* is a by-product of
> > consciousness; it is not required for it.
> Human-level intelligence probably requires consciousness,
> contra Chalmers' zombie notion -- the work of Bernard Baars
> and others shows the role that consciousness plays, or may play,
> in intelligence.  But the human model isn't the only possible
> model, and non-conscious intelligence seems quite possible.

I think I agree. Some forms of intelligent behaviour probably require
consciousness, while others might not. Even among humans there are
people who are clearly highly intelligent, but only within certain
domains of intellectual pursuit, outside which they are incompetent.
Given the possibility of other architectures for intelligence beyond
the human, one would expect to see even greater variety in such
specialisation.
> > Intelligence would be some kind of entropy decrease in
> > complex systems,
> That's mumbo-jumbo -- intelligence is effective use of
> information.  That is, that's how we *use* the word.
> > whereas consciousness would only be some kind of
> > unpredictability, "free will", thus affordable even at
> > the simplest levels of organization.
> That is *not* what we mean by the word "consciousness".
> In _Elbow Room_, Daniel Dennett discusses extensively
> why "free will" doesn't depend upon either indeterminism
> or unpredictability.  That I predictably steer my car within
> the lane does not threaten my free will, and someone who
> drives down the wrong side of the street has likely lost
> their will and quite possibly their consciousness.

Larry Niven's novel Protector is very illustrative of this. The
Protectors in the novel are extremely ingenious beings capable
of staggering intellectual achievements, yet their behaviour is
severely constrained by their emotional drives.

Intelligence is a means of achieving goals, not merely an end in
itself.

Simon Hibbs


More information about the Neur-sci mailing list