Location of language processing

Dennis McClain-Furmanski dmcclain at runet.edu
Fri Nov 3 03:41:28 EST 1995


belanm at tornade.ere.umontreal.ca wrote:
:  I would like to know if any study has been done to determine where
:  the center of language processing is located when deaf persons are
:  concerned. And if so, is it differently located than in the non-deaf.

:  Marc A. Belanger.

Here's some relevant material. It may not directly answer, but might lead 
you to some answers via references. Remember that 'left' and 'right' are 
usually just majority cases. Some people are reversed and some are less 
differentiated than others.

The disclaimer comes first: this was an undergrad project. It is neither 
current nor necessarily very accurate. It was a rhetoric project more 
than a scientific one, though I tried to cover both as best as I could at 
the time.

Abstract
 
Sign languages, like many other languages, have been studied in terms
of their components.  After they have been broken down into their basic
units, they can be described and compared.  When these descriptions
are based on the components of the language in question, they are
sufficient to describe that language.  However, unless the descriptions
used are sufficient to describe both of the languages being compared,
they will not serve as an adequate basis for comparison.  This paper
will attempt to provide an initial deconstruction of a communication
paradigm which will be shown to be faulty in its construction, due to
its failure to comply adequately with the empirical evidence on which
it depends for description, and to provide both a scientific and
cultural rationale for this deconstruction.
 
Introduction
 
     Spoken language is frequently studied in terms of morphemes, the
basic unit of meaning.  Likewise, sign languages are studied in terms
of their components, called "cheremes" (Stokoe, 1972).  These basic
units are "tab", "dez" and "sig", corresponding to the location of the
sign, the configuration of the hands, and the movement of the hands.
Yet these are not enough to cover all the nuances of some signs.  As
the author of the above convention states, "a sentence, S, in American
Sign Language may be seen to have two sensible components which are
both vehicles for its intelligible import ... the facial, F, and the
manual, M." (Stokoe, 1972).  Facial representation has since been added
to the notation created to relate the above concepts, yet it too fails
in some cases.
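     To make this multi-component view concrete, here is a minimal
sketch (illustrative only; the class and the component values below are
invented for this example and are not Stokoe's actual notation): a sign
is represented as a bundle of components produced at the same time,
rather than as a sequence of units in the way a spoken word is a
sequence of phonemes.

    from dataclasses import dataclass

    @dataclass
    class Sign:
        tab: str                  # location of the sign
        dez: str                  # configuration of the hands
        sig: str                  # movement of the hands
        facial: str = "neutral"   # Stokoe's later facial component, F

    # Illustrative entry only; the values are made up for the example,
    # not drawn from any published notation.
    STANDING = Sign(tab="palm of the non-dominant hand",
                    dez="two fingers extended downward",
                    sig="contact, held",
                    facial="neutral")
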
     It is perhaps a case of unintentional chauvinism on the part of
speaking linguists, who attempt to describe something based on concepts
created from their own experience.  Unfortunately, there are components
of sign language which elude any description in terms of spoken
language, because they are inherently impossible to reproduce in
speech, or at best are matters of the evolution of the language, better
studied as etymology.
 
Spoken Language As Serial Communication
 
     All spoken language shares a characteristic which is familiar in
cognitive science: it is serial.  That is to say, its individual
components occur sequentially.  Specifically, reception occurs in the
order of (1) auditory signal comprehension, (2) phoneme identification,
(3) auditory word identification, and (4) cognition, which includes the
application of semantic rules (Gordon, 1990); production of speech runs
in the reverse order, ending with signal production instead of
comprehension.  Since it is physically impossible to say more than one
thing at the same time, this seems an almost ridiculous observation to
make.  Yet if we are to compare sign and spoken language, this concept
must be addressed.
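     As a rough illustration of this serial model (a sketch only; the
stage functions below are placeholders invented for this example, not
implementations from the work cited), the four receiving stages can be
written as a chain in which each step can begin only after the previous
one has finished.

    def comprehend_signal(waveform):
        return "acoustic features"   # (1) auditory signal comprehension

    def identify_phonemes(features):
        return ["phonemes"]          # (2) phoneme identification

    def identify_words(phonemes):
        return ["words"]             # (3) auditory word identification

    def apply_semantics(words):
        return {"meaning": words}    # (4) cognition / semantic rules

    def comprehend_speech(waveform):
        # Strictly serial: each stage consumes the output of the one
        # before it; nothing can be computed out of order.
        return apply_semantics(identify_words(identify_phonemes(
            comprehend_signal(waveform))))
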
 
Parallel Channels
 
     Sign language, being a visual language and involving up to the
entire body of the signer, can often contain components which occur
simultaneously.  In the computer metaphor used so often by cognitive
science, this is said to be parallel.  Even if the components of the
particular message being transferred arrive in a serial fashion, as
long as the syntactic construction of the message follows the general
syntactic construction rules of a non-serial based language, a linear
view used to explain the message will be inaccurate (Goodall, 1987).
Unless this specific difference is taken into account, the traditional
method of description as adapted to sign language will fail.
     This simultaneity manifests itself in various ways.  The first
and most obvious is a restatement of Stokoe's earlier observation:
there is more than one physical component to a sign, and these must be
recorded and analyzed in the context of their common occurrence.
Second, following Stokoe's second comment, the sign consists of more
than the manual components, including the face and various other
elements of kinesics.  Third, and as far as I have been able to
determine almost unaddressed, is the simultaneous occurrence of more
than one otherwise independent sign to form a compound sign.  The sole
publication which I have been able to find on this form of simultaneity
in communication is a series of observations of Japanese Sign Language,
although the usages these observations cover are identical to many
terms I am personally familiar with in ASL.  The author of this work
goes so far as to label "simultaneity" in this instance a "cognitive"
concept, rather than one of physical coincidence (Peng, 1978).
     As an example, the ASL sign for "standing" (two fingers extended
downward onto the palm of the other hand), produced together with the
sign for "not paying attention" (head tilted to one side, pupils turned
to the corner of the eye tilted more upwards, and the tongue sticking
out of the corner of the mouth tilted more downwards), represents the
clause "standing around not paying attention."  This sign should be
differentiated from the series of two concepts presented by the English
"standing around and not paying attention," which indicates two related
but separate concepts.  If the intended meaning had been that the
person was standing around, and while doing so was not paying
attention, the signs would be presented in sequence.  Instead, the
condition of standing around obliviously is presented as a single
statement of behavior.
     The tendency towards simultaneous signing is responsible for the
creation of new signs from two or more old ones.  This pattern of
"blending" (Peng, 1978) is the source of such combined signs as the
familiar (as opposed to the intimate) "I love you", created out of
combined manual alphabet signs for "I", "L" and "Y".  It is also the
root of single signs with origins in two or more signs, such as
"teacher": both hands palm down with thumbs tucked below, brought down
in an arc in front of the face, as opposed to the now more formal
individual sign for teach (the hands brought directly out from the
face) followed by the "agent" sign (hands directly in front of the
elbows, palms towards each other, and sliced downwards in parallel).
This blending of signs is so common, and considered so much a part of
the language of sign, that the compound sign "apple+drink" for "apple
juice" was one of the instances used to support the hypothesis of
animal-acquired language, as cited by Patterson in her paper on
language development in the gorilla Koko (Patterson, 1978).
 
Parallelism in Signs
 
     It would hardly be prudent to consider a change, addition or
deletion of a theoretical framework without showing how it might be of
some use.  The consideration of the parallel information being
produced, transmitted, received and processed, as multiple channels of
simultaneous and mutually supporting information, can be used to
explain such phenomena as the syntactic structure of American Sign
Language.  In ASL, it is just as proper to sign "my + book" as it is
to sign "book + my".  Similar arrangements of noun and verb pairs,
subject and object ordering, and clauses within sentences are so
frequent that a common explanation for this is that ASL has little or
no set syntactic ordering of its components (Wilbur, 1987), and some
have even gone so far as to say that ASL has developed "avoidance
strategies to compensate for constrained syntactic rules" (Friedman,
1976), almost as though the structure of ASL were somehow cheating in
the production of language as compared to the rigidity of English.
While variable syntax is a suitable explanation when only considering
the language from a serial-oriented viewpoint, it does not address how
such arrangements can be considered equivalent in semantic content.
      Wilbur (1987) addresses this by using the concept of "iconic
representation", which is described as "a reflection in language of
the actual state of affairs in the real world".  If one accepts that
the information transmitted in sign language resides in more than one
channel, then it is not a large step to accept that operating in a
real world with multiple channels of communication leads one's mind to
think in representations of that information as they occur, that is,
as parallel pieces to a single puzzle.  Rather than being a
work-around for a highly structured (and questionably considered
'standard') spoken language, the variable syntax of sign language
instead represents the nature of the method of processing the
information, which in terms of cognitive science is parallel
processing of information received in parallel channels, even though
the language is sometimes constrained to serialized or segmented,
sequential production due to the limited amount of information which
can be conveyed with each individual sign.
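     As a toy illustration of this point (a sketch of my own devising,
not an analysis drawn from the works cited), if a signed message is
represented as an unordered collection of role-tagged components rather
than as an ordered string, then "my + book" and "book + my" carry the
same semantic content.

    def semantic_content(signed_components):
        # Discard the order of production; only the components count.
        return frozenset(signed_components)

    first  = semantic_content([("possessor", "MY"), ("object", "BOOK")])
    second = semantic_content([("object", "BOOK"), ("possessor", "MY")])
    assert first == second   # either ordering yields the same content
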
     This has led at least one researcher to apply the concepts of
"relational grammar" to the study of ASL (Padden, 1988), as being
composed of "primitives", or grammatical relationships which are not
expressed as word or clause order, but instead create the rules where
by word and clause order are governed.  This research indicates that
there is indeed ordering in sign language, but it is as often the
result of the easiest transition between different signs as any
semantic consideration, yet it carries the same information content.
I personally feel that including the convenience of physical motion as
a governing dimension of communication stretches the intent of the term
grammar considerably.
 
Lateralization and Specialization
 
     In order to promote a parallel view of sign communication, it is
necessary to differentiate it from, and justify it against, the serial
view presented by previous empirical investigation.  In support of his
theory of syntax, Chomsky (1965) stated that "It may well be that the
general features of language structure reflect, not so much one's
experience, but rather the general character of one's capacity to
acquire knowledge."  The sequential production of spoken language
outlined above corresponds well with the sequence of neuroanatomical
structures involved in the production of spoken language, specifically
the hippocampal and cortical memory areas, Wernicke's area, Broca's
area, and the motor cortex (Kalat, 1992).  If it can be taken that the
brain structures involved (in less formal terms, the hard wiring) are
indeed responsible for the structure of the language, then one would
expect that a visually oriented language would make use of equivalent
structures within the visual area.
     In fact there are fundamental differences in the portions of the
brain used for visual comprehension as opposed to verbal (the
corresponding motor activity being controlled in all cases by the
motor cortex).  Friedrich (1990) observes that the left hemisphere,
which controls the primary verbal areas, operates in an analytic and
sequential manner, whereas the right hemisphere, which controls the
spatial and visual recognition functions, operates primarily in a
holistic and simultaneous manner.  In the case of deaf communication,
it has been shown (McKeever, Hoemann, Florian & VanDeventer, 1976)
that the cerebral functions governing communication in deaf persons are
not the same as those in hearing persons.  More specifically, Ross,
Pergament and Anisfeld (1979) showed that the right hemisphere of deaf
persons is specialized for sign recognition, and is used in much the
same fashion, with very similar results, as the left hemisphere of
hearing persons is for word recognition.
     If those areas of the right hemisphere used in the production
and comprehension of sign language were fundamentally different from
the corresponding left hemisphere speech areas, we would expect the
functioning within those areas to differ also.  This is in fact the
case.  As Shand and Klima (1981) show, the differences in processing of
sign language are due in part to differences in sensory storage, and
also to differences in the processing of "static vs. changing-state"
input, but even these are not sufficient to cover all the differences.
     These differences cannot be attributed to sensory-motor memory,
according to Liben, Nowell and Posnansky (1978), as the memory of
signs in deaf people is not organized according to the physical
determinants of the sign components.  Bellugi, Klima and Siple (1975)
further show that the memory storage of signs in deaf persons is
organized according to "simultaneous formational parameters" which,
when taken as individual components, are found to be arbitrary and
therefore void of inherent meaning.  It is this existence of meaning
in simultaneous components which individually are meaningless that
points solidly to a holistic basis for considering the meaning in the
signs, and to a multiple parallel channel paradigm for describing their
transfer.
 
Cognitive Computational Model
 
     Although the mechanistic view posited by cognitive science,
examining the mind as a system of functional components integrated for
the purpose of computation, is somewhat reductionist, and therefore
less than ideal for investigation of holistic concepts such as the
operation of the right hemisphere, it does lend itself well to
creation of some fairly inclusive models of mental performance.  In
fact, the design of neural networks as programs for parallel processing
computers, those computer systems intended to mimic some operations of
the human brain, is a primary result of the successful application of
the theory to technology (Willis, Montague, Morris & Tham, 1993).
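     As a minimal sketch of what "parallel" means here (a toy layer
invented for this example, not a model taken from the work cited),
every unit in a neural network layer computes its output from the same
input in one step, rather than one instruction at a time.

    import numpy as np

    rng = np.random.default_rng(0)
    weights = rng.normal(size=(4, 3))    # 4 inputs feeding 3 units
    bias = np.zeros(3)

    def layer(inputs):
        # All three unit activations are produced together in a single
        # vectorized step, not one after another.
        return np.tanh(inputs @ weights + bias)

    print(layer(np.array([0.2, -0.5, 0.1, 0.9])))
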
     Previous computer designs were all basically serial in nature;
they acted on data and instructions in a linear fashion.  Although
these systems were capable of processing the information required to
produce a realistic-looking image, the time required was too long to
allow sequential pictures to be shown quickly enough to give any sense
of motion.
     Lately, computers with several processors working in parallel
have been used to portray three dimensional images more efficiently.
Only these parallel devices are able to handle the enormous mass of
data fast enough to display movement in the image at anything
approaching real time.  Justifying the concept of parallel processing
in the brain by drawing an equivalent from computer science would seem
quite a stretch.  However, the flow of concepts runs in the opposite
direction in this case.
     According to W. Daniel Hillis (1987), designer of the massively
parallel Connection Machine, the blueprint for the design of his
machine was the human mind, precisely because it uses the principle of
parallel processing to comprehend an entire image all at once.  It is
able to do so because of the enormous number of interconnections
between neurons, even though the neurons themselves are thousands of
times slower than the silicon gates in computer chips.
     The parallel processing of image information proceeds even prior
to reaching the brain.  The neurons of the retina are interconnected
in such a way as to respond to aggregate stimuli such as an edge as a
single non-point entity (Hayward & Varela, 1992).
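     A small sketch of that idea (with made-up values, purely for
illustration): each output below responds to an aggregate feature of
its neighbourhood, a local intensity difference, that is, an edge,
rather than to any single point of the input.

    import numpy as np

    intensities = np.array([1, 1, 1, 1, 5, 5, 5, 5], dtype=float)

    # Each output is computed from a small neighbourhood at once and is
    # large only where the intensity changes, i.e. at the edge.
    edge_response = np.convolve(intensities, [1, 0, -1], mode="same")
    print(edge_response)   # peaks around the transition from 1 to 5
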
     In comprehending the complete sign, all aspects of its
production must be considered.  In the cognitive terms of Lieser and
Gillieron (1990), in order to perceive, the brain must calculate
simultaneously any impinging aspects of space, time and length.  Each
of these may contain any number of components, including continuous
spectra of measurement as opposed to discrete entities.  While a
linear or serial computational model may be used to calculate each of
these given enough time, the requirement of simultaneity of processing
in order to construct a conscious image of a three dimensional moving
figure such as a signer precludes serial processing as a possibility.
     While this would seem to serve as the necessary basis for
describing visual imagery, Rollins (1989) asserts that these same
computational structures are required for the production of language
of any sort.  His components of language which act as functional
equivalents to space, time and length are semantics, process and
syntax.
     From this it seems that the cognitive computational model is
useful not only in understanding visual modes of communication, and
specifically sign language, but quite possibly in understanding
language in general.
 
Science and Social Responsibility
 
     A society which supports science should expect that science, in
turn, to act responsibly towards that society.  In the case of the
investigation of sign language, this becomes apparent when we consider
that the particular views adopted by science can have a significant
effect on how society behaves towards the deaf segment of that
society.
     There has been considerable bias towards the deaf in the past,
and quite often it is based on the view that they are deficient in
communicative ability.  This opinion is fed significantly if the
consideration of the people is based on their mode of communication,
and this in turn is considered on the basis of an ineffective
cognitive model which wrongfully labels these people as deficient.
     If one insists on using the serial processing and transmission
model of communication when considering the parallel processing and
transmission of the deaf, one can end up with results similar to those
of Bishop (1983) which are based on an "inherent grammatical problem"
made visible by basing the test on rigid ordering of response in
verbal testing.  This is obviously a flawed concept if one
participates in a mode of communication where grammatical ordering is
irrelevant.  Results such as this have been used to justify the
concept of disability in the deaf, even though testing has shown that
deafness does not correspond to cognitive deficit (Lillo-Martin,
Hanson & Smith, 1992).
     Even the idea that sign language is a substitute for "real"
language should be questioned when findings show that it is learned
faster than spoken language (Orlansky & Bonvillian, 1985).
Anthropological investigation has shown that it is likely that sign
language was in fact the first language used by humans (Hewes, 1973;
Teodorsson, 1980).
     When one investigates the more useful language concepts such as
creativity, one finds results such as those from Marschark and
West (1985) which indicate that basing language testing on
English-like concepts seriously underestimates the cognitive and
linguistic abilities of the deaf.  Even Geoffrey Coulter (1993),
editor of Phonetics and Phonology:  Current Issues in ASL Phonology,
questions the validity of using syntactic comparisons in
investigating sign language, due to its unique nature.
     If one follows the theoretical model that the mode of
communication used allows one preferential access to the hemisphere
which processes it, then one can arrive at the conclusion that use of
sign language would provide for better access to the right hemisphere
and its holistic computational operation.  This is in fact suggested by
Hendren (1989), who proposes sign language as a communicative tool for
all students for just this type of thinking.
     A significant point for its inclusion in education is made by
the observation that if a person learns to sign first, and speak
later, then that person is able to communicate in both ways at once,
with the speech following the normal rules of grammar, and the sign
following its normal rules of flexible ordering, without conflict or
confusion between the two (Jones & Quigley, 1979).  This goes beyond
the concept of bilingualism, and offers people a chance to communicate
from both hemispheres at once, creating clearer and more meaningful
communication.
     Given the above, it is understandable that the assumed supremacy
of spoken language as the standard by which communication is measured
has come into question (Televik, 1981).  This should be reflected in
the way that deaf people are viewed by society.
     Charrow and Wilbur (1975) suggest that rather than disabled, it
would be more appropriate to consider the deaf as merely the third
largest non-English speaking minority in the U.S., and that they
should be treated with the respect due a cultural minority rather than
a communicatively disabled one.  This seems preferable to the existing
view, which results in society trying to force English-speaking
concepts on these people.  Such efforts are doomed to failure (Grove,
O'Sullivan & Rodda, 1979), and actually result in damage to individuals
(Stokoe, 1978) by alienating them from their native culture, which is
inseparable from their language (Stevens, 1980).
 
Conclusion
 
     Although science must proceed as a lowest common denominator in
its descriptions of the real world, and as such requires a language of
its own which communicates all of the various parts of the world which
it attempts to describe, it is also responsible for being the clearest
form of communication.  While those theoretical concepts produced
from serial communications are adequate for many purposes, and
understandable by most people, they are not adequate to describe
communications which occur in a fashion outside the realm of
experience of those who produced them, i.e. parallel forms of
communication such as sign language.  As stated by Lieberman (1984), "the initial
functional value of syntax may rest in the limitation of semantic
interpretation." As long as the semantic interpretation of a language
is based on concepts found to be too limited for a complete
understanding, the resulting observations of syntactic structure will
be doomed to failure as proper descriptions of the reality they
portray.
     In order to describe sign language properly, and particularly to
compare it with spoken languages, it is necessary to use a theoretical
framework which takes into accurate account the natures of the
languages.  Concepts derived from serial based languages are
insufficient to describe parallel based languages, and are therefore
insufficient for comparisons.
     When social factors regarding the results of one paradigm show
obvious bias and damage, and a competing paradigm appears to reduce or
nullify the negative effects, while providing for better communication
for all as well as better description of reality, I think it
imperative that science and society either accept the latter, or
provide for a yet greater improvement in science while still reducing
the bias created by the outdated paradigm.
 
REFERENCES
 
Bellugi, U., Klima, E. & Siple, P.  (1975).
   Remembering in Signs.  Cognition, 3,
   93-125.
 
Bishop, D. V.  (1983).  Comprehension of English
   syntax by profoundly deaf children.  Journal
   of Child Psychology and Psychiatry and the
   Allied Disciplines, 24, 415-434.
 
Charrow, V. & Wilbur, R. B.  (1975).  The deaf
   child as a linguistic minority.  Theory Into
   Practice, 14, 353-359.
 
Chomsky, N.  (1965).  Aspects of the theory of
   syntax (p. 180).  Cambridge, MA:  MIT Press.
 
Coulter, G. & Anderson, S.  (1993).
   Introduction.  In Geoffrey Coulter (Ed.),
   Phonetics and phonology, Volume 3:  Current
   issues in ASL phonology.  San Diego, CA:
   Academic Press.
 
Friedman, L.  (1976).  Phonology of a soundless
   language:  Phonological structure of American
   Sign Language.  Doctoral dissertation,
   University of California, Berkeley.
 
Friedrich, F. J.  (1990).  Frameworks for the
   study of human spatial impairments.  In R. P.
   Kesner & D. S. Olton (Eds.), Neurobiology of
   comparative cognition (pp 317-338).
   Hillsdale, NJ:  Lawrence Erlbaum Associates.
 
Goodall, G.  (1987). Parallel structures in
   syntax (p. 172).  Cambridge, UK:  Cambridge
   University Press.
 
Gordon B.  (1990).  Human language. In R. P.
   Kesner & D. S. Olton (Eds.), Neurobiology of
   comparative cognition (pp 21-50).  Hillsdale,
   NJ:  Lawrence Erlbaum Associates.
 
Grove, C., O'Sullivan, F. D. & Rodda, M.  (1979).
   Communication and language in severely deaf
   adolescents.  British Journal of Psychology,
   70, 531-540.

Hayward, J. W. & Varela, F. J.  (1992).  Gentle
   bridges (p. 59).  Boston:  Shambhala.
 
Hendren, G. R.  (1989).  Using sign language to
   access right brain communication:  A tool for
   teachers.  Journal of Creative Behavior,
   23, 116-120.
 
Hewes, G. W.  (1973).  Primate communication and
   the gestural origin of language.  Current
   Anthropology, 14, 5-24.
 
Hillis, W. D.  (1987, June).  The connection
   machine.  Scientific American, pp. 174-182.
 
Jones, M. L., & Quigley, S.  (1979).  The
   acquisition of question formation in spoken
   English and American Sign Language.  Journal
   of Speech and Hearing Disorders, 44,
   196-208.
 
Kalat, J. W. (1992)  Physiological Psychology
   (p. 175).  Pacific Grove, CA:  Brooks/Cole.
 
Liben, L. S., Nowell, R. C. & Posnansky, C. J.
   (1978).  Semantic and formational clustering in
   deaf and hearing subjects' free recall of
   signs.  Memory and Cognition, 6, 599-606.
 
Lieberman, P.  (1984).  The biology and
   evolution of language.  Cambridge, MA:
   Harvard University Press.
 
Lieser, D. & Gillieron, C.  (1990).  Cognitive
   science and genetic epistemology  (p. 25).
   New York:  Plenum.
 
Lillo-Martin, D. C., Hanson, V. L. & Smith, S. T.
   (1992).  Deaf readers' comprehension of
   relative clause structure.  Applied
   Psycholinguistics, 13, 13-30.
 
Marschark, M. & West, S.  (1985).  Creative
   language abilities in deaf children.  Journal
   of Speech and Hearing Research, 28, 73-78.
 
McKeever, W. F., Hoemann, H. W., Florian, V. A. &
   VanDeventer, A. D.  (1976).  Evidence of
   minimal cerebral asymmetries for the processing
   of English words and American Sign Language in
   the congenitally deaf.  Neuropsychologia,
   14, 413-423.
 
Orlansky, M. & Bonvillian, J.  (1985).  Sign
   language acquisition:  Language development in
   children of deaf parents and implications for
   other populations.  Merrill-Palmer Quarterly,
   31, 127-143.
 
Padden, C.  (1988).  Interaction of morphology
   and syntax in American Sign Language.  New
   York:  Garland.
 
Patterson, F.  (1978).  Linguistic capabilities of
   a lowland gorilla.  In F. C. C. Peng (Ed.)
   Sign language and language acquisition in man
   and ape.  (pp. 161-202).  Boulder, CO:
   Westview Press.
 
Peng, F.  (1978).  Introduction.  In F. Peng (Ed.)
   Sign language and language acquisition in man
   and ape.  Boulder, CO:  Westview Press.
 
Rollins, M.  (1989).  Mental imagery (p. xv).
   New Haven, CT:  Yale University Press.
 
Ross, P., Pergament, L. & Anisfeld, M.  (1979).
   Cerebral lateralization of deaf and hearing
   individuals for linguistic comparison
   judgements.  Brain and Language, 8, 69-80.
 
Shand, M. A. & Klima, E.  (1981).  Nonauditory
   suffix effects in congenitally deaf signers of
   American Sign Language.  Journal of
   Experimental Psychology:  Human Learning and
   Memory, 7, 464-474.
 
Stevens, R.  (1980).  Education in schools for
   deaf children.  In C. Baker & R. Battison
   (Eds.), Sign language and the deaf community
   (pp. 177-191).  Silver Spring, MD:  National
   Association of the Deaf.
 
Stokoe, W.  (1972).  Semiotics and human sign
   languages.  The Hague, Netherlands:  Mouton &
   Co.
 
Stokoe, W.  (1978).  Sign codes and sign language:
   Two orders of communication.  Journal of
   Communication Disorders, 11, 187-192.
 
Televik, J. M.  (1981).  Language and problem
   solving ability:  A comparison between deaf and
   hearing adolescents.  Scandinavian Journal of
   Psychology, 22, 97-100.
 
Teodorsson, S. T.  (1980).  Autonomy and
   linguistic status of nonspeech language forms.
   Journal of Psycholinguistic Research, 9,
   121-145.
 
Wilbur, R.  (1987).  American sign language:
   Linguistic and applied dimensions.  Boston,
   MA:  Little, Brown.
 
Willis, M. J., Montague, G. A., Morris, A. J. &
   Tham, M. T.  (1993).  Artificial neural
   networks: A possible tool for the process
   engineer.  In E. Rogers & Y. Li (Eds.), Parallel
   processing in a control systems environment
   (p. 109).  New York:  Prentice Hall.



