Exploiting visual information for NAM recognition
Abstract
Non-audible murmur (NAM) is unvoiced speech received through body tissue using special acoustic sensors (i.e., NAM microphones) attached behind the talker's ear. Although NAM has different frequency characteristics compared to normal speech, automatic speech recognition (ASR) can be performed on it using conventional methods. With a NAM microphone, body transmission and the loss of lip radiation act as a low-pass filter; as a result, higher frequency components are attenuated in the NAM signal. This spectral reduction causes a decrease in NAM recognition performance. To address the loss of lip radiation, visual information extracted from the talker's facial movements is fused with NAM speech. Experimental results revealed a relative improvement of 39% when fused NAM speech and facial information were used, as compared to using NAM speech alone. Results also showed that the improvement in recognition rate depends on the place of articulation.
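The abstract does not specify the fusion mechanism. A common baseline for audio-visual ASR is frame-level (early) fusion: concatenating the acoustic and visual feature vectors for each time frame before recognition. The sketch below illustrates this idea and the standard relative-improvement formula; the function names and feature dimensions are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

def fuse_features(nam_feats, visual_feats):
    """Frame-level (early) fusion: concatenate NAM acoustic features with
    visual features from facial movements. Assumes both streams are already
    time-aligned at the same frame rate (an assumption, not from the paper)."""
    assert nam_feats.shape[0] == visual_feats.shape[0], "streams must be aligned"
    return np.concatenate([nam_feats, visual_feats], axis=1)

def relative_improvement(err_baseline, err_fused):
    """Relative error-rate reduction, as in '39% relative improvement'."""
    return (err_baseline - err_fused) / err_baseline

# Toy example: 100 frames of 39-dim acoustic features and 6-dim lip features
nam = np.random.randn(100, 39)
vis = np.random.randn(100, 6)
fused = fuse_features(nam, vis)
print(fused.shape)  # (100, 45)
```

For instance, an error rate dropping from 50% to 30.5% would correspond to `relative_improvement(0.50, 0.305) == 0.39`, i.e., the 39% relative gain reported.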
- A paper from The Institute of Electronics, Information and Communication Engineers (IEICE)
Authors
- Beautemps Denis (GIPSA-lab, Speech and Cognition Department, CNRS UMR 5216 / Stendhal University / UJF / INPG)
- Tran Viet-Anh (GIPSA-lab, Speech and Cognition Department, CNRS UMR 5216 / Stendhal University / UJF / INPG)
- Heracleous Panikos (GIPSA-lab, Speech and Cognition Department, CNRS UMR 5216 / Stendhal University / UJF / INPG)
- Loevenbruck Helene (GIPSA-lab, Speech and Cognition Department, CNRS UMR 5216 / Stendhal University / UJF / INPG)
- Bailly Gérard (GIPSA-lab, Speech and Cognition Department, CNRS UMR 5216 / Stendhal University / UJF / INPG)