1P1-K06 Audio-Visual Speaker Detection in Human-Robot Interaction
Abstract
Tracking a human's position is a useful skill for the coming generation of mobile robots, and it poses a challenging planning and control problem in dynamic environments. We propose an omni-directional method for estimating a speaker's position that combines audio and visual information. The position of the sound source is estimated by computing the differences in arrival time of the sound at multiple microphone channels. Robust human template matching on the omni-directional image is then combined with the sound source estimate to localize the speaker with high accuracy. In our experiments, the system was implemented and tested on an omni-directional robot in our laboratory. The results show that it can reliably detect and track moving speakers in a natural environment.
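The arrival-time-difference step described above can be illustrated with a minimal sketch: for a two-microphone pair, the time difference of arrival (TDOA) is found from the peak of the cross-correlation between the channels and converted to a bearing angle. The function names, sampling rate, and microphone spacing below are illustrative assumptions, not the authors' implementation.

```python
# Minimal TDOA sketch (illustrative; not the paper's actual system).
import numpy as np

def estimate_tdoa(sig_a, sig_b, fs):
    """Delay (seconds) of sig_b relative to sig_a via cross-correlation peak."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = np.argmax(corr) - (len(sig_a) - 1)  # lag in samples
    return lag / fs

def tdoa_to_bearing(tdoa, mic_distance, c=343.0):
    """Arrival angle (radians) for a 2-mic pair, given speed of sound c (m/s)."""
    # Clamp in case noise pushes the ratio slightly outside [-1, 1].
    s = np.clip(c * tdoa / mic_distance, -1.0, 1.0)
    return np.arcsin(s)

# Synthetic check: the same noise signal, delayed by 5 samples at mic B.
fs = 16000
rng = np.random.default_rng(0)
src = rng.standard_normal(1024)
delay = 5
a = src
b = np.concatenate([np.zeros(delay), src[:-delay]])
print(estimate_tdoa(a, b, fs) * fs)  # recovers the 5-sample delay
```

In practice a robust localizer would use several microphone pairs and a weighted correlation (e.g. GCC-PHAT) to handle reverberation; the paper fuses this audio estimate with visual template matching on the omni-directional image.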
- Paper from The Japan Society of Mechanical Engineers
- 2007-05-11
Authors
-
IMAI Jun-ichi
The University of Electro-Communications
-
Kaneko Masahide
The University of Electro-Communications
-
SUWANNATHAT Thatsaphan
The University of Electro-Communications
Related papers
- Evolutionary Computation System for Musical Composition using Listener's Heartbeat Information
- 1P1-K06 Audio-Visual Speaker Detection in Human-Robot Interaction
- Human Interactions With a Robot That Recognizes Differences Between Fields of View
- Facial Expression Recognition Under Partial Occlusion Based on Facial Region Segmentation