Aaron Sloman wrote on 29 Oct 1999 18:00:59 GMT: [...]
> If we build robots with human-like capabilities we shall probably
> have to give them, or design into them the ability to develop,
> appropriate ontologies including mental states of others,
> otherwise they will simply be ineffective in an environment
> containing human beings and other intelligent robots.
> When such robots have suitably rich ontologies, and know how
> to apply them both in thinking about others and in thinking
> about themselves, the question whether they *really* have
> mental states will just be a silly one.
> By the way, I suspect that newborn human babies don't have
> the sort of mentalistic ontology I've described. Instead they
> are born extremely immature and are genetically pre-programmed
> with reactive behaviours (e.g. reactions to human faces) which
> fool parents into thinking their little darlings care about them.
> This is crucial in triggering the appropriate nurturing and
> protective behaviour in adults which will help the infant
> bootstrap an appropriate ontology while its brain is growing.
PD AI also bootstraps an ontology of user/programmer-chosen words
and concepts to bring the artificial mind quickly up to speed.
Bootstrapping such artificial minds, leading to a Technological
Singularity, is the Grand Challenge for programmers and software
engineers in the Y2K Millennium.
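A minimal sketch of what "bootstrapping an ontology from programmer-chosen words" could look like; all names here (Concept, Ontology, bootstrap, associate) are hypothetical illustrations invented for this sketch, not PD AI's actual code:

```python
# Hypothetical sketch: seed an artificial mind's ontology from a
# programmer-chosen vocabulary, then link concepts by association.

class Concept:
    def __init__(self, word):
        self.word = word
        self.links = set()  # words of associated concepts

class Ontology:
    def __init__(self):
        self.concepts = {}  # word -> Concept

    def bootstrap(self, seed_words):
        # Create one concept node per user/programmer-chosen word.
        for word in seed_words:
            self.concepts.setdefault(word, Concept(word))

    def associate(self, word_a, word_b):
        # Link two concepts bidirectionally so one can evoke the other.
        a, b = self.concepts[word_a], self.concepts[word_b]
        a.links.add(word_b)
        b.links.add(word_a)

mind = Ontology()
mind.bootstrap(["robot", "human", "care", "self"])
mind.associate("robot", "self")
print(sorted(mind.concepts))  # ['care', 'human', 'robot', 'self']
```

The seed vocabulary here stands in for whatever words the user or programmer chooses; a real system would of course need far richer structure than a flat association graph.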
> It's a delicate process and can go wrong in many ways.
> I hope that makes sense.
> Some of the ideas are developed further in papers in the
> Cognition and Affect project directory, though there's a
> lot more work still to be done.
> Aaron Sloman, ( http://www.cs.bham.ac.uk/~axs/ )
> School of Computer Science, The University of Birmingham, B15 2TT, UK
> EMAIL A.Sloman AT cs.bham.ac.uk (NB: Anti Spam address)
> PAPERS: http://www.cs.bham.ac.uk/research/cogaff/