Partial and Synchronized Caption Generation to Enhance the Listening Comprehension Skills of Second Language Learners
Abstract
Captioning is widely used by second language learners as an assistive tool for listening. However, the use of captions often leads to word-by-word decoding and over-reliance on reading skills rather than improvement of listening skills. With the purpose of encouraging learners to listen to the audio instead of merely reading the text, this study introduces a novel captioning technique, partial and synchronized captioning, as an alternative listening tool for language learners. Using TED talks as a medium for training listening skills, the system employs ASR (automatic speech recognition) technology to synchronize the text to the speech. The system then uses the learner's proficiency level to generate partial captions based on three features that impair comprehension: speech rate, word frequency, and specificity. To evaluate the system, the performance of Kyoto University students in two CALL classes was assessed by a listening comprehension test on TED talks under three conditions: no caption, full caption, and partial-and-synchronized caption. Results revealed that while reducing the textual density of captions to less than 30%, the proposed method achieves comprehension performance comparable to the full-caption condition. Moreover, it outperforms the other conditions on a new segment of the same video presented without any captions.
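As an illustration of the idea described above, the following is a minimal, hypothetical sketch of proficiency-based partial caption selection: only words judged hard to catch by ear (spoken too fast, infrequent, or domain-specific) are displayed. All thresholds, field names, and the syllable estimate are assumptions for illustration, not the paper's actual criteria.

```python
from dataclasses import dataclass

@dataclass
class Word:
    text: str            # surface form, time-aligned to the speech by ASR
    start: float         # start time in seconds
    end: float           # end time in seconds
    corpus_rank: int     # frequency rank in a reference corpus (1 = most common)
    is_specific: bool    # domain-specific / academic term flag

def select_caption_words(words, proficiency):
    """Return the subset of words to display in the partial caption.

    proficiency: 0.0 (beginner) .. 1.0 (advanced). Higher proficiency
    raises the bar for showing a word, so fewer words are displayed.
    """
    shown = []
    for w in words:
        duration = max(w.end - w.start, 1e-3)
        syllables = max(1, len(w.text) // 3)        # crude syllable estimate
        speech_rate = syllables / duration          # syllables per second

        too_fast = speech_rate > 4.0 + 2.0 * proficiency     # hypothetical threshold
        too_rare = w.corpus_rank > 3000 * (1.0 - 0.5 * proficiency)
        if too_fast or too_rare or w.is_specific:
            shown.append(w)
    return shown
```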
- 2014-05-15
Authors
-
Tatsuya Kawahara
Graduate School of Informatics, Kyoto University
-
Maryam Sadat
Graduate School of Informatics, Kyoto University
Related Papers
- Construction of a Test Collection for Spoken Document Retrieval from Lecture Audio Data
- Joint Phrase Alignment and Extraction for Statistical Machine Translation
- Comparison of Discriminative Models for Lexicon Optimization for ASR of Agglutinative Language
- Partial and Synchronized Caption Generation to Enhance the Listening Comprehension Skills of Second Language Learners
- Classifier-based Data Selection for Lightly-Supervised Training of Acoustic Model for Lecture Transcription