Yale Colloquium on Searle's Chinese Room & Symbol Grounding

Stevan Harnad harnad at phoenix.Princeton.EDU
Wed Apr 4 11:21:56 EST 1990


             4:00 p.m., Thursday, April 5
Computer Science Department, Room 200, Arthur K. Watson Building
		      Yale University

     Searle's Chinese Room and the Symbol Grounding Problem

                     Stevan Harnad
                  Psychology Department
                  Princeton University

The philosopher John Searle's celebrated "Chinese Room Argument" has
been causing frustration in the artificial intelligence (AI) community
ever since it first appeared in 1980. The Argument tries to show
that a computer running a program can't have a mind even if it can pass
the "Turing Test" (which means you can write to it as a pen-pal till
doomsday and never have reason to doubt that it's really a person
you're writing to). Searle shows that he himself could do everything the
computer does, executing the program's rules by hand, without understanding
the Chinese messages he is sending back and forth, so the computer couldn't
be understanding them either. AI people think
the "system" understands, even if Searle doesn't. Searle replies that
he IS the system... Having umpired this debate for 10 years, I will try
to show who's right about what.

There is a deeper side to the Searle debate. Computer programs just
manipulate meaningless symbols in various symbolic codes. The
interpretation of those symbols is supplied by us. Without our
interpretations, a symbol system is like a Chinese/Chinese dictionary:
Look up one meaningless symbol and all you find is still more
meaningless symbols. This means that a mind cannot be just a
symbol-manipulating system, as many today believe. The symbols in a
symbol system are ungrounded, whereas the symbols in our heads are
grounded in
the objects and events they stand for. I will try to show how the
meanings in a symbol system could be grounded bottom-up in two kinds of
nonsymbolic representation (analog copies of the sensory surfaces and
feature detectors that pick out object and event categories), with the
currently fashionable neural nets providing the learned "connection"
between elementary symbols and the things they stand for.
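
To make this concrete, here is a minimal toy sketch in ordinary Python
(the one-dimensional "retina", the LEFT/RIGHT category, and all names
are illustrative assumptions of mine, not anything from the talk): an
analog projection of a sensory surface is fed to a learned feature
detector (a single perceptron unit standing in for the neural net), and
the elementary symbol token is connected to the detector's output rather
than to other symbols.

    import random

    random.seed(0)
    N = 6  # receptors on a toy one-dimensional "retina"

    def analog_projection(center):
        # Analog copy of the sensory surface: a triangular bump of
        # activation centered on the stimulus position.
        return [max(0.0, 1.0 - abs(i - center) / 1.5) for i in range(N)]

    def train_detector(samples, labels, epochs=100, lr=0.1):
        # Learn a category feature detector from labeled analog inputs:
        # a single perceptron unit, the simplest stand-in for a neural net.
        w, b = [0.0] * N, 0.0
        for _ in range(epochs):
            for x, y in zip(samples, labels):
                pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
                w = [wi + lr * (y - pred) * xi for wi, xi in zip(w, x)]
                b += lr * (y - pred)
        return w, b

    # Category to be learned: stimuli falling on the left half of the retina.
    centers = [random.uniform(0.0, 2.2) for _ in range(50)] + \
              [random.uniform(2.8, 5.0) for _ in range(50)]
    labels = [1 if c < 2.5 else 0 for c in centers]
    w, b = train_detector([analog_projection(c) for c in centers], labels)

    def ground(x):
        # The elementary symbol is tied to the detector's verdict on the
        # analog input, not to other symbols.
        on = sum(wi * xi for wi, xi in zip(w, x)) + b > 0
        return "LEFT" if on else "RIGHT"

    print(ground(analog_projection(1.0)))  # expected: LEFT
    print(ground(analog_projection(4.0)))  # expected: RIGHT

The point of the toy is only the direction of the connection: the token
"LEFT" gets its content bottom-up, from the analog input and the learned
detector, not from definitions in terms of other tokens.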

-- 
Stevan Harnad  Department of Psychology  Princeton University
harnad at clarity.princeton.edu       srh at flash.bellcore.com
harnad at elbereth.rutgers.edu    harnad at pucc.bitnet    (609)-921-7771


