Keynote speakers


Josep Call (Max Planck Institute for Evolutionary Anthropology, Leipzig)

Iconicity, reference and motives in the gestural communication of the great apes
Great apes use a variety of gestures for a variety of purposes to communicate with their conspecifics. I will explore three aspects of the natural gestural communication of the great apes: iconicity, reference and motives. Next I will turn my attention to artificial systems of communication between apes and humans and revisit the issue of iconicity, reference and motives in ape communication. I use the term artificial communication (as opposed to natural communication) to refer to the use or comprehension of gestures borrowed from the natural repertoire of another species. In the case of the apes this includes things like index finger pointing or sign language. I will propose that artificial communication systems differentially affect the three aspects under scrutiny in this talk. Whereas artificial communication allows apes to engage in displaced reference, it does not seem to substantially alter their motives for communication. I will conclude by speculating about the impact that artificial systems, communicative or otherwise (e.g., Arabic numerals, symbolic tokens), have on the way individuals think about and solve problems.


Alan Cienki (VU University Amsterdam & Netherlands Institute for Advanced Study)

Language as a variably multimodal phenomenon
For many years, a number of scholars from gesture studies (e.g., Kendon, McNeill, and others) have argued that gesture and language form an integrated whole. The field of linguistics that has perhaps been the most receptive to this claim is cognitive linguistics, which is now adapting in recognition of the significance of gesture. For example, within Cognitive Grammar, pairings of vocalizations and conceptualizations are claimed to become schematized into linguistic units through repeated instantiations in “usage events,” and this characterization has been elaborated to include “any other kinds of signals, such as gestures and body language” (Langacker 2008: 457). What would it mean for this theory of language if it pursued these claims seriously?
One answer might be that it would take language as a completely multimodal phenomenon (audio and visual). However, I will argue that this approach does not provide the best account, given that speech and gesture have different communicative statuses, that gesture is not always used by speakers nor always seen by addressees, that gesture use differs across cultures, etc. Instead, building on work in Relevance Theory (Sperber & Wilson 1995), the selective activation of meaning (Müller 2008), and attentional analysis of meaning (Oakley 2009), I will pursue the argument that language is a flexibly dynamic category, one that is only variably multimodal. The model of language proposed is structured as a center-periphery category, with the prototype-center being the spoken words and grammar that are the traditional object of study within linguistics, and various positions outside the center being held by other behaviors that are potentially highlighted in usage events (such as intonation, gestures of various sorts, object manipulation, and others).
This description fits with an understanding of language as a semiotic system that overlaps with other semiotic systems (such as co-speech manual gesture) with which it interacts. In addition, it provides a model for describing any particular language in use in real time. A speaker can flexibly change attentional focus, sometimes making use of a larger scope of expressive behaviors than at other times, or shifting the focus temporarily from the prototype of spoken words and grammar to gesture, object manipulation, or even a vocalized intonation contour without words. Conversely, addressees’ focus can also shift variably, from paying attention to a speaker’s words without visual cues (e.g., when listening to the radio) to being an attentively observant and listening participant in face-to-face interaction (as in the case of a perceptive therapist).
This model of language thus comes with another construct, namely the scope of relevant behaviors deployed or, conversely, taken into consideration. This flexible scope will be discussed with respect to the semantics-pragmatics distinction and its treatment as a continuum in cognitive linguistics. The changeable scope is like a sliding scale that can variably take in more or fewer semiotic systems beyond the prototype of words and grammar as being relevant. The variations – between individuals and from moment to moment – in the expressive behaviors taken into account provide the basis for characterizing language as variably multimodal.


Georg Goldenberg (Klinikum Bogenhausen, Technische Universitaet Muenchen)

Apraxia and the neural basis of gesturing
Nearly 150 years ago, Paul Broca reported on the patient Tan-Tan, who had lost articulated language after left frontal brain damage. In this seminal case report he noted that the patient’s gestures were vivid but frequently incomprehensible. Ten years later, the German psychiatrist Finkelnburg postulated that aphasic patients suffer from a general “asymbolia” which affects the production of communicative gestures as much as that of speech. In the early 20th century, Hugo Liepmann developed the concept of “apraxia”. He characterized it as a sequela of left brain damage that frequently accompanies aphasia but is an independent additional symptom. Disturbances of communicative gestures were central to this concept, but, in contrast to the early observations, these disturbances were evaluated by examining their performance on command rather than in spontaneous communication, and this restriction still prevails in research and clinical diagnosis of apraxia.
In the first part of my contribution I will present and discuss results from studies testing the performance of communicative gestures on command. I will concentrate on miming of tool use, as this is the most widely used and arguably the most theoretically interesting type of communicative gesture tested in the clinical examination for apraxia. I will show examples of disturbed miming in patients with left brain damage, aphasia and apraxia, discuss the relationships of these disturbances to language and to actual tool use, and present data on the localization of lesions interfering with miming of tool use. These data show that disturbances are, at least in right-handed patients, invariably caused by left brain damage and suggest that within the left hemisphere frontal lesions are more important than parietal lesions.
In the second part of my talk I will discuss whether defective performance of gestures on command predicts insufficient gesturing in spontaneous communication. Beyond its theoretical interest, the answer to this question has ecological significance because apraxic patients also have aphasia, and a rich repertoire of comprehensible gestures could help them compensate for the shortcomings of verbal expression. Results from a study of aphasic patients’ gesturing during attempts to retell short video clips support such a relationship. Both the diversity and the comprehensibility of their gestures correlated with their success in miming tool use on command. This finding implies that the production of spontaneous expressive gestures also depends on the integrity of the left hemisphere, and particularly of left frontal brain regions. This suspicion is corroborated by lesion analyses.
In the final part of the talk I will discuss the limits of our results. Even when including observation of spontaneous gestures, they are restricted to referential gestures produced in a monologue without support from communication partners. Anecdotal observations of communicative interaction between aphasic patients and their partners suggest that in such interaction modulating gestures and simple emblems can be effective in compensating not only for the loss of verbal expression but also for the degradation of referential gestures.

Susan Goldin-Meadow (University of Chicago)

How our hands help us think
When people talk, they gesture. We now know that these gestures are associated with learning. They can index moments of cognitive instability and reflect thoughts not yet found in speech. But gesture may be able to do more than just reflect learning; it may be involved in the learning process itself. I consider two non-mutually exclusive possibilities. First, gesture could play a role in the learning process by displaying, for all to see, the learner's newest, and perhaps undigested, thoughts. Parents, teachers, and peers would then have the opportunity to react to those unspoken thoughts and provide the learner with the input necessary for future steps. Second, gesture could play a role in the learning process more directly by providing another representational format, one that would allow the learner to explore, perhaps with less effort, ideas that may be difficult to think through in a verbal format. Thus gesture has the potential to contribute to cognitive change, directly by influencing the learner and indirectly by influencing the learning environment.

Adam Kendon (Naples, Philadelphia)

Accounting for gesture as a component of utterance: an evolutionary approach
"Gesture first" theories of language origins do not offer compelling accounts of how or why human language uses speech as its main vehicle. At the same time, "speech first" theories omit any explanation as to why the forelimbs, especially, are so often intimately involved in utterance production.  However, because vocalizations are always embedded components of acts directed to practical outcomes, other instruments of environmental modification, pre-eminently the hands, will often become involved.  I suggest that as the babble of prosodic protolanguage became linguistic speech, the forelimb and hand actions that must have been an abundant component of interpersonal interactions in early hominids, as they are in apes today, were gradually recruited into linguistic functions. 


Roland Posner (Technische Universitaet Berlin)

The intentionality of body behavior
Facial expression, gesture, and posture are widely regarded as meaningful body behavior, and gesture research has so far concentrated on gestures as instruments of communication. The present lecture argues that this is a rather one-sided approach which can be highly misleading. It examines the cognitive states of persons exhibiting and interpreting a given kind of body behavior, asking what they intend, believe, intend their partners to believe, and believe their partners to intend. This investigation reveals that only a small part of human body behavior can be conceived as communication in the sense of Grice or as speech acts in the sense of Searle. The lecture offers a unified description of all conceivable behavior types above the complexity level of meaningless physical movements and below the complexity level of communication.

Juergen Streeck (The University of Texas at Austin)

Gesturecraft – A Practice Perspective
Gesture is examined as a family of skilled practices, as part of the equipment with which human beings inhabit and understand the world together. Drawing on micro-ethnographic research in diverse social and practical contexts, I delineate some of the range of communicative and cognitive tasks which gestures of the hand help us solve, as well as the heterogeneity of the practices that we summarily call “gesture”. How gestures facilitate interaction and shared understanding can be clarified in ecological terms, by delineating how they differentially mediate between speaker, addressee, the world at hand, the narrated world, and the repertoire of bodily schemata in terms of which we can construe communicative content.

I will give particular attention to the manual qualities of hand gestures: how they work cannot be sufficiently explained without taking seriously the structure of the human hand and how humans apprehend, make, and conceive the world with their hands. The practice perspective on gesture thus combines the phenomenological view of the living body—which focuses on its position as a "mindful" agent in the world—with the sequential analysis of moments of interaction and sense-making. Embodied communication means inhabiting the world together, not simply representing it.

Sherman Wilcox (University of New Mexico, Albuquerque)

Language in Motion
The title of my presentation is an intentional double entendre. One sense derives from the primary data that I will bring to bear — the moving, visible signed languages of deaf communities. The second meaning points to the fundamental claim that I will make, that grammar emerges from the production and perception of biological motion.

My talk will be structured in three parts. In the first, I will apply dynamic systems theories to the unification of three systems of biological motion — spoken language, signed language, and gesture. I will argue that all three systems are composed of articulatory gestures, which are specializations of a more general capacity to impose meaning on perceived biological motion.

In part two, I will examine the role played by motion in the grammar of signed languages. A key goal will be to demonstrate the conceptual significance of moving bodies in the emergence of embodied grammar. I will also demonstrate how viewing signed languages and gesture as articulatory gesturing allows us to explore the developmental interface between these two communicative systems.

Finally, in part three, I will turn to the topic of the evolution of language. I will present a new hypothesis that one key aspect of syntax, temporal ordering, is inherent in biological motion. Thus, I will argue that syntax is linked to the neural mechanisms that underlie the organization and production of movement.