A Novel Visual Speech Representation and HMM Classification for Visual Speech Recognition
Abstract
This paper presents the development of a novel visual speech recognition (VSR) system based on a new representation that extends the standard viseme concept, referred to in this paper as the Visual Speech Unit (VSU), and on Hidden Markov Models (HMMs). Visemes have been regarded as the smallest elements of speech in the visual domain and have been widely used to model visual speech, but they are problematic when applied to continuous visual speech recognition. To circumvent the problems associated with standard visemes, we propose a new visual speech representation that captures not only the data associated with the articulation of the visemes but also the transitory information between consecutive visemes. To fully evaluate the appropriateness of the proposed representation, an extensive set of experiments has been conducted to compare the performance of the visual speech units with that offered by the standard MPEG-4 visemes. The experimental results indicate that the developed VSR application achieved up to 90% correct recognition when applied to the identification of 60 classes of VSUs, whereas the recognition rate for the standard set of MPEG-4 visemes was only in the range of 62-72%.
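As a rough illustration of the HMM classification stage described in the abstract, the sketch below trains one Gaussian HMM per visual speech class (VSU or viseme) on sequences of lip-region feature vectors and labels a test sequence with the class whose model yields the highest likelihood. This is a minimal sketch using the hmmlearn library; the feature dimensionality, number of hidden states, and the synthetic training data are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of per-class HMM classification for visual speech units.
# Assumes each training sample is a (T, D) array of lip-region feature
# vectors; feature extraction itself is outside the scope of this sketch.
import numpy as np
from hmmlearn import hmm

def train_class_models(train_data, n_states=5, n_iter=100):
    """train_data: dict mapping class label -> list of (T, D) feature arrays."""
    models = {}
    for label, sequences in train_data.items():
        X = np.vstack(sequences)                   # concatenate all sequences
        lengths = [len(seq) for seq in sequences]  # per-sequence lengths
        model = hmm.GaussianHMM(n_components=n_states,
                                covariance_type="diag",
                                n_iter=n_iter)
        model.fit(X, lengths)                      # Baum-Welch training
        models[label] = model
    return models

def classify(models, sequence):
    """Return the class whose HMM gives the highest log-likelihood."""
    scores = {label: m.score(sequence) for label, m in models.items()}
    return max(scores, key=scores.get)

# Example usage with synthetic data (two hypothetical VSU classes):
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train = {
        "vsu_a": [rng.normal(0.0, 1.0, size=(20, 6)) for _ in range(10)],
        "vsu_b": [rng.normal(2.0, 1.0, size=(20, 6)) for _ in range(10)],
    }
    models = train_class_models(train)
    test_seq = rng.normal(2.0, 1.0, size=(20, 6))
    print(classify(models, test_seq))              # expected: "vsu_b"
```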
Authors
- Yu Dahai (Vision Systems Group, School of Electronic Engineering, Dublin City University)
- Ghita Ovidiu (Vision Systems Group, School of Electronic Engineering, Dublin City University)
- Sutherland Alistair (School of Computing, Dublin City University)
- Whelan Paul (Vision Systems Group, School of Electronic Engineering, Dublin City University)