Learning a Model of Speaker Head Nods using Gesture Corpora

"Learning a Model of Speaker Head Nods using Gesture Corpora" by Jina Lee and Stacy Marsella. In Proceedings of the 8th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), Budapest, Hungary, 2009, pp. 289-296.

Abstract

During face-to-face conversation, the speaker's head is continually in motion. These movements serve a variety of important communicative functions. Our goal is to develop a model of the speaker's head movements that can be used to generate head movements for virtual agents based on gesture annotation corpora. In this paper, we focus on the first step of the head movement generation process: predicting when the speaker should use head nods. We describe our machine-learning approach that creates a head nod model from annotated corpora of face-to-face human interaction, relying on the linguistic features of the surface text. We also describe the feature selection process, training process, and the evaluation of the learned model with test data in detail. The results show that the model is able to predict head nods with high precision and recall.
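To illustrate the general idea (this is a minimal pure-Python sketch, not the authors' actual model or features): a classifier is trained on word-level linguistic features from annotated corpora to predict a nod/no-nod label for each token, and is then scored with precision and recall. The feature tuples and training examples below are hypothetical.

```python
from collections import Counter

# Hypothetical training data: (part-of-speech tag, word) -> nod label
# (1 = speaker nods on this word, 0 = no nod). Real corpora would be
# annotated face-to-face interactions with richer linguistic features.
train = [
    (("UH", "yes"), 1), (("UH", "yeah"), 1), (("UH", "okay"), 1),
    (("DT", "the"), 0), (("NN", "dog"), 0), (("UH", "yes"), 1),
    (("VB", "run"), 0), (("UH", "right"), 1), (("NN", "cat"), 0),
]

def fit(data):
    """Count nod vs. no-nod occurrences for each feature tuple."""
    counts = {}
    for feats, label in data:
        counts.setdefault(feats, Counter())[label] += 1
    return counts

def predict(counts, feats):
    """Predict the majority label for a feature tuple (default: no nod)."""
    c = counts.get(feats)
    return 0 if c is None else c.most_common(1)[0][0]

def precision_recall(gold, pred):
    """Precision and recall for the positive (nod) class."""
    tp = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 1)
    fp = sum(1 for g, p in zip(gold, pred) if g == 0 and p == 1)
    fn = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

counts = fit(train)
test = [(("UH", "yes"), 1), (("NN", "dog"), 0), (("UH", "right"), 1)]
gold = [label for _, label in test]
pred = [predict(counts, feats) for feats, _ in test]
print(precision_recall(gold, pred))
```

The count-based majority classifier stands in for whatever learning algorithm the paper actually uses; the point is only the pipeline shape: annotated corpus in, per-token nod predictions out, precision/recall for evaluation.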

BibTeX entry:

@inproceedings{Lee_AAMAS09,
   author = {Jina Lee and Stacy Marsella},
   title = {Learning a Model of Speaker Head Nods using Gesture Corpora},
   booktitle = {8th International Conference on Autonomous Agents and
	Multiagent Systems (AAMAS)},
   pages = {289-296},
   address = {Budapest, Hungary},
   year = {2009},
   url = {http://www.ccs.neu.edu/~marsella/publications/pdf/Lee_AAMAS09.pdf}
}

