HMM speech recognition using stochastic language models
Abstract
One of the major reasons for using language models in speech recognition is to reduce the search space. Context-free grammars or finite-state grammars are suitable for this purpose. However, these models ignore the stochastic characteristics of a language. In this paper, three stochastic language models are investigated: 1) a trigram model of Japanese syllables, 2) a stochastic shift/reduce model in LR parsing, and 3) a trigram model of context-free rewriting rules. These stochastic language models are incorporated into a syntax-directed HMM-based speech recognition system and tested in phrase recognition experiments. The phrase recognition rate is improved from 88.2% to 93.2%.
- Paper of the Acoustical Society of Japan
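
The abstract names three stochastic language models but gives no formulation, so a rough illustration may help. The following is a minimal sketch of the first one, a trigram model over Japanese syllables, using linear interpolation smoothing; the class name, interpolation weights, smoothing scheme, and toy syllable data are assumptions for illustration and are not taken from the paper.

```python
# Illustrative sketch (not from the paper): a syllable trigram language model
# with linear interpolation smoothing, trained from counts over syllable sequences.
from collections import defaultdict
import math

BOS, EOS = "<s>", "</s>"

class SyllableTrigramLM:
    def __init__(self, lambdas=(0.6, 0.3, 0.1)):
        self.l3, self.l2, self.l1 = lambdas   # interpolation weights (assumed values)
        self.tri = defaultdict(int)           # trigram counts
        self.hist2 = defaultdict(int)         # trigram-history (bigram) counts
        self.bi = defaultdict(int)            # bigram counts
        self.hist1 = defaultdict(int)         # bigram-history (unigram) counts
        self.uni = defaultdict(int)           # unigram counts
        self.total = 0

    def train(self, sentences):
        for syllables in sentences:
            seq = [BOS, BOS] + list(syllables) + [EOS]
            for i in range(2, len(seq)):
                s1, s2, s3 = seq[i - 2], seq[i - 1], seq[i]
                self.tri[(s1, s2, s3)] += 1
                self.hist2[(s1, s2)] += 1
                self.bi[(s2, s3)] += 1
                self.hist1[s2] += 1
                self.uni[s3] += 1
                self.total += 1

    def prob(self, s1, s2, s3):
        """Interpolated P(s3 | s1, s2) from trigram, bigram, and unigram estimates."""
        p3 = self.tri[(s1, s2, s3)] / self.hist2[(s1, s2)] if self.hist2[(s1, s2)] else 0.0
        p2 = self.bi[(s2, s3)] / self.hist1[s2] if self.hist1[s2] else 0.0
        p1 = self.uni[s3] / self.total if self.total else 0.0
        return self.l3 * p3 + self.l2 * p2 + self.l1 * p1

    def logprob(self, syllables):
        """Log-probability of a syllable sequence, suitable for scoring hypotheses."""
        seq = [BOS, BOS] + list(syllables) + [EOS]
        return sum(math.log(self.prob(seq[i - 2], seq[i - 1], seq[i]) + 1e-12)
                   for i in range(2, len(seq)))

# Toy usage: train on two syllable sequences, then score one of them.
lm = SyllableTrigramLM()
lm.train([["ko", "n", "ni", "chi", "wa"], ["ko", "n", "ba", "n", "wa"]])
print(lm.logprob(["ko", "n", "ni", "chi", "wa"]))
```

In a recognizer of the kind described in the abstract, such a language-model score would be combined with HMM acoustic scores to rank phrase hypotheses.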
Authors
- Kita Kenji (ATR Interpreting Telephony Research Laboratories)
- Kawabata Takeshi (ATR Interpreting Telephony Research Laboratories)
- Hanazawa Toshiyuki (ATR Interpreting Telephony Research Laboratories)
Related papers
- LR Parsing with a Category Reachability Test Applied to Speech Recognition (Special Issue on Speech and Discourse Processing in Dialogue Systems)
- Three Different LR Parsing Algorithms for Phoneme-Context-Dependent HMM-Based Continuous Speech Recognition (Special Issue on Speech and Discourse Processing in Dialogue Systems)