Classification of Video Shots Based on Human Affect
Abstract
This study addresses the challenge of analyzing affective video content. The affective content of a video is defined as the intensity and type of emotion that arise in a viewer while watching it. In this study, viewers' emotions were monitored by capturing their pupil sizes and gaze points as they watched the video. From these measurements, four features were extracted: cumulative pupil response (CPR), frequency component (FC), modified bivariate contour ellipse area (mBVCEA), and the Gini coefficient. Principal component analysis showed that two key features, CPR and FC, account for the majority of the variance in the data. Using these key features, the affective content was identified and used to classify video shots into their respective scenes. An average classification accuracy of 71.89% was achieved for three basic emotions, with a maximum individual accuracy of 89.06%. This work serves as a first step toward automating personalized video content analysis based on human emotion.
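To make the described pipeline concrete, the sketch below shows one plausible way to combine the four gaze-derived features with PCA and a simple classifier. This is not the authors' implementation: the per-shot feature values are assumed to be precomputed from the pupil-size and gaze-point measurements, and the SVM classifier and scikit-learn pipeline are illustrative choices rather than details taken from the paper.

```python
# Minimal sketch (not the authors' code): per-shot feature vectors
# [CPR, FC, mBVCEA, Gini] are assumed to be precomputed from pupil-size
# and gaze-point measurements; the SVM and scikit-learn pipeline are
# illustrative choices, not taken from the paper.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def train_shot_classifier(features, emotions):
    """features: (n_shots, 4) array of [CPR, FC, mBVCEA, Gini];
    emotions: (n_shots,) labels for the three basic emotions."""
    # Standardize, then keep the two leading principal components, mirroring
    # the study's observation that CPR and FC account for most of the variance.
    model = make_pipeline(StandardScaler(), PCA(n_components=2), SVC())
    model.fit(features, emotions)
    return model


if __name__ == "__main__":
    # Synthetic placeholder data only, to show the expected shapes.
    rng = np.random.default_rng(0)
    X = rng.random((60, 4))
    y = rng.choice(["emotion_a", "emotion_b", "emotion_c"], size=60)
    clf = train_shot_classifier(X, y)
    print(clf.predict(X[:5]))
```

Keeping only two principal components reflects the paper's finding that CPR and FC dominate the variance; any downstream classifier could be substituted for the SVM.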
Authors
- Ong Kok-Meng (Graduate School of Global Information and Telecommunication Studies, Waseda University)
- Kameyama Wataru (Graduate School of Global Information and Telecommunication Studies (GITS), Waseda University)
Related Papers
- Classification of Video Shots Based on Human Affect (Japanese-language version)
- D-11-103 Holistic Image Features Extraction for Better Image Annotation
- Special Section on Mobile Multimedia Communications
- D-21 An Adaptive Metadata Filtering Method for Audio-visual Contents Search and Retrieval using User's Viewing History
- Information Distribution Analysis Based on Human's Behavior State Model and the Small-World Network
- D-12-38 RAW RELATIONSHIPS IN AESTHETIC PHOTOS BY OPTICAL FEATURES
- A-15-20 On Consideration of Affective Understanding in Video : A Personalized Approach
- BS-3-30 On Employing Content-aware Retargeted Image in Automatic Image Annotation(BS-3. Management and Control Technologies for Innovative Networks)