Multimodal Interaction in Human Communication
Abstract
We are developing multimodal man-machine interfaces through which users can communicate by integrating speech, gaze, facial expressions, and gestures such as nodding and finger pointing. Such multimodal interfaces are expected to provide more flexible, natural, and productive communication between humans and computers. To achieve this goal, we have taken the approach of modeling human behavior in the context of ordinary face-to-face conversations. As a first step, we have implemented a system that uses video and audio recording equipment to capture verbal and nonverbal information in interpersonal communication. Using this system, we collected data from a task-oriented conversation between a guest (subject) and a receptionist at a company reception desk, and quantitatively analyzed the data with respect to the multiple modalities that support fluid interaction. This paper presents detailed analyses of the collected data: (1) head nodding and eye contact are related to the beginning and end of speaking turns, acting to supplement speech information; (2) listener responses occur an average of 0.35 s after the receptionist's utterance of a keyword, and turn-taking for tag questions occurs after an average of 0.44 s; and (3) there is a rhythmical coordination between speakers and listeners.
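The reported averages (e.g. 0.35 s for listener responses after a keyword) reduce to a simple mean of per-pair latencies between an annotated speaker event and the listener's paired response. A minimal sketch of that computation, using made-up placeholder timestamps rather than the paper's actual annotation data:

```python
def mean_latency(event_times, response_times):
    """Mean delay (seconds) between each event and its paired response."""
    if len(event_times) != len(response_times):
        raise ValueError("each event needs exactly one paired response")
    latencies = [r - e for e, r in zip(event_times, response_times)]
    return sum(latencies) / len(latencies)

# Hypothetical keyword/response timestamps (seconds from session start):
keywords = [12.0, 30.5, 47.2]
responses = [12.4, 30.8, 47.6]
print(round(mean_latency(keywords, responses), 2))  # → 0.37
```

The function names and timestamps are illustrative assumptions, not the authors' tooling; the point is only that each reported figure is an average over many such event-response pairs.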
- Published by the Institute of Electronics, Information and Communication Engineers (IEICE)
- 1995-06-25
Authors
- Sakamoto Kenji
  Real World Computing Partnership, Novel Functions Sharp Laboratory in Integrated Media Laboratories, S
- Togawa Fumio
  Real World Computing Partnership, Novel Functions Sharp Laboratory in Integrated Media Laboratories, S
- Watanuki Keiko
  Real World Computing Partnership, Novel Functions Sharp Laboratory in Integrated Media Laboratories,