Speaker Recognition by Combining MFCC and Phase Information in Noisy Conditions
Abstract
In this paper, we investigate the effectiveness of phase information for speaker recognition in noisy conditions and combine it with mel-frequency cepstral coefficients (MFCCs). To date, most speaker recognition methods have been based on MFCCs, even in noisy conditions. MFCCs, which predominantly capture vocal tract information, use only the magnitude of the Fourier transform of time-domain speech frames; the phase information has been ignored. Because the phase contains rich voice source information, it is expected to complement MFCCs well. Furthermore, several studies have reported that phase-based features are robust to noise. In our previous study, we proposed a phase information extraction method that normalizes the variation in phase caused by the clipping position of the input speech, and the combination of this phase information with MFCCs performed remarkably better than MFCCs alone. In this paper, we evaluate the robustness of the proposed phase information for speaker identification in noisy conditions. Spectral subtraction, a method that skips frames with low energy/signal-to-noise ratio (SNR), and models trained on noisy speech are used to analyze the behavior of the phase information and MFCCs in noisy conditions. The NTT database and the JNAS (Japanese Newspaper Article Sentences) database, with stationary/non-stationary noise added, were used to evaluate our proposed method. MFCCs outperformed the phase information on clean speech. On the other hand, the degradation of the phase information on noisy speech was significantly smaller than that of MFCCs. With clean-speech training models, the phase information alone even outperformed MFCCs in many cases. Deleting unreliable frames (frames with low energy/SNR) improved speaker identification performance significantly.
By integrating the phase information with MFCCs, the speaker identification error rate was reduced by about 30%-60% relative to the standard MFCC-based method.
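As an illustration of the features the abstract describes, the sketch below (an assumption for illustration, not the authors' implementation) extracts the FFT magnitude a standard MFCC front-end would use, computes frame-wise phase, removes the linear phase component tied to the frame's clipping position by anchoring the phase at a base frequency bin (in the spirit of the normalization described above), and applies simple spectral subtraction to the magnitude. The function names and the choice of base bin are hypothetical.

```python
import numpy as np

def analyze_frame(frame, base_bin=1):
    """Return FFT magnitude and cut-position-normalized phase of one frame."""
    spec = np.fft.rfft(frame * np.hamming(len(frame)))
    magnitude = np.abs(spec)     # the part MFCC front-ends keep
    phase = np.angle(spec)       # the part usually discarded
    # Shifting the clipping position by t samples adds a phase term that is
    # (approximately) linear in the bin index k. Subtracting a ramp scaled so
    # that the phase at base_bin becomes zero removes that dependence.
    bins = np.arange(len(spec))
    norm_phase = np.angle(np.exp(1j * (phase - bins * phase[base_bin] / base_bin)))
    return magnitude, norm_phase

def spectral_subtraction(magnitude, noise_mag, floor=0.01):
    """Subtract a noise magnitude estimate, flooring negative results."""
    return np.maximum(magnitude - noise_mag, floor * magnitude)
```

A low-energy/SNR frame-skipping rule, as evaluated in the paper, would simply drop frames whose energy (or estimated SNR after subtraction) falls below a threshold before scoring.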
- Paper published by the Institute of Electronics, Information and Communication Engineers (IEICE)
- 2010-09-01
Authors
-
Yamamoto Kazumasa
Toyohashi University of Technology
-
Nakagawa Seiichi
Toyohashi University of Technology
-
Wang Longbiao
Shizuoka University
-
Minami Kazue
Toyohashi University of Technology
Related Papers
- A study of speech recognition using phase information based on long-term analysis (Speech)
- Activity report of the working group on noisy speech recognition evaluation: individual evaluation environments for factors affecting recognition (3) (activity reports of SIG-SLP subgroups)
- Activity report of the working group on noisy speech recognition evaluation: individual evaluation environments for factors affecting recognition (2) (Noise/VAD, 9th Spoken Language Symposium)
- Activity report of the working group on noisy speech recognition evaluation: individual evaluation environments for factors affecting recognition (8th Spoken Language Symposium)
- Activity report of the working group on noisy speech recognition evaluation: individual evaluation environments for factors affecting recognition (Session-1 Detection, 8th Spoken Language Symposium)
- CENSREC-1-C: construction of an evaluation framework for voice activity detection under noisy conditions
- Activity report of the SLP noisy speech recognition evaluation WG: on evaluation data and evaluation methods (Session-6 Special Session: noise-robustness evaluation using common corpora, 7th Spoken Language Symposium)
- CENSREC-3, an evaluation database for speech recognition in real driving conditions, and its common evaluation baseline
- Construction of CENSREC-3, an in-car isolated-word speech database, and its common evaluation environment