Sound localization under conditions of covered ears on the horizontal plane
Abstract
In this paper, we examine how covering one or both external ears affects sound localization on the horizontal plane. In our experiments, we covered subjects' pinnae and external auditory canals with headphones, earphones, and earplugs, and conducted sound localization tests. Stimuli were presented from 12 different directions, and 12 subjects participated in the tests. The results indicate that covering one or both ears decreased the subjects' sound localization performance. Front-back confusion rates increased, particularly when both outer ears were covered with open-air headphones, or when one ear was covered with an intraconcha-type earphone or an earplug. Furthermore, incorrect answer rates were high when the sound source was on the same side as the ear occluded by an intraconcha-type earphone or an earplug. We consider that the factors causing this poor performance can be clarified by comparing these results with the characteristics of head-related transfer functions.
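The abstract reports front-back confusion rates as a key measure. As an illustrative sketch (not from the paper itself; the 30-degree spacing, function names, and scoring rule are assumptions), a response in a horizontal-plane test can be counted as a front-back confusion when it lies closer to the mirror image of the target about the interaural axis than to the target itself:

```python
# Hypothetical sketch of front-back confusion scoring for a
# horizontal-plane localization test with 12 directions (every 30
# degrees, 0 = front). All names and conventions here are illustrative.

def mirror_front_back(azimuth_deg: float) -> float:
    """Reflect an azimuth about the interaural (left-right) axis:
    e.g. 30 deg (front-right) maps to 150 deg (back-right)."""
    return (180.0 - azimuth_deg) % 360.0

def front_back_confusion_rate(trials):
    """trials: iterable of (true_azimuth, reported_azimuth) pairs.
    A trial counts as a confusion when the report is closer to the
    front-back mirror of the target than to the target itself."""
    def angular_dist(a, b):
        # Shortest angular distance on the circle, in degrees.
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)

    confusions = 0
    for true_az, reported_az in trials:
        mirrored = mirror_front_back(true_az)
        if angular_dist(reported_az, mirrored) < angular_dist(reported_az, true_az):
            confusions += 1
    return confusions / len(trials)

# Example: a 30-deg target reported at 150 deg is a confusion;
# the same target reported at 60 deg is only a lateral error.
rate = front_back_confusion_rate([(30, 150), (30, 60), (330, 210), (0, 0)])
print(rate)  # → 0.5
```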
- A paper of the Acoustical Society of Japan (社団法人日本音響学会)
Authors
-
Takeda Kazuya
Graduate School of Information Science, Nagoya University
-
Itou Katunobu
Faculty of Computer and Information Sciences, Hosei University
-
Takimoto Madoka
Graduate School of Information Science, Nagoya University
-
Nishino Takanori
Center for Information Media Studies, Nagoya University
Related papers
- Acoustic Feature Transformation Combining Average and Maximum Classification Error Minimization Criteria
- Acoustic Feature Transformation Based on Discriminant Analysis Preserving Local Structure for Speech Recognition
- AN INTEGRATED AUDIO-VISUAL VIEWER FOR A LARGE SCALE MULTIPOINT CAMERAS AND MICROPHONES(International Workshop on Advanced Image Technology 2007)
- CENSREC-1-C : An evaluation framework for voice activity detection under noisy environments
- Driver Identification Using Driving Behavior Signals(Human-computer Interaction)
- G_007 Arbitrary Listening-point Generation Using Acoustic Transfer Function Interpolation in A Large Microphone Array
- THE SUB-BAND SOUND WAVE RAY-SPACE REPRESENTATION(International Workshop on Advanced Image Technology 2006)
- A-16-24 3D Sound Wave Field Representation Based on Ray-Space Method (A-16. Multimedia and Virtual Environment Fundamentals; Fundamentals and Boundaries)
- AURORA-2J: An Evaluation Framework for Japanese Noisy Speech Recognition(Speech Corpora and Related Topics, Corpus-Based Speech Technologies)
- Selective Listening Point Audio Based on Blind Signal Separation and Stereophonic Technology
- Head-Related Transfer Function measurement in sagittal and frontal coordinates
- CENSREC-3: An Evaluation Framework for Japanese Speech Recognition in Real Car-Driving Environments(Speech and Hearing)
- Evaluation of HRTFs estimated using physical features
- Multiple Regression of Log Spectra for In-Car Speech Recognition Using Multiple Distributed Microphones (Feature Extraction and Acoustic Modelings, Corpus-Based Speech Technologies)
- Evaluation of Combinational Use of Discriminant Analysis-Based Acoustic Feature Transformation and Discriminative Training
- Gamma Modeling of Speech Power and Its On-Line Estimation for Statistical Speech Enhancement(Speech Enhancement, Statistical Modeling for Speech Processing)
- Multichannel Speech Enhancement Based on Generalized Gamma Prior Distribution with Its Online Adaptive Estimation
- SNR and sub-band SNR estimation based on Gaussian mixture modeling in the log power domain with application for speech enhancements (6th Spoken Language Symposium)
- Driver's irritation detection using speech recognition results (Speech / 10th Spoken Language Symposium)
- Estimation Based on the Instantaneous Frequencies of Frequency Components Contained in Sub-bands
- Predicting the Degradation of Speech Recognition Performance from Sub-band Dynamic Ranges (Special Issue: Spoken Language Information Processing and Its Applications)
- An Acoustically Oriented Vocal-Tract Model
- Comparison of acoustic measures for evaluating speech recognition performance in an automobile
- Estimation of speaker and listener positions in a car using binaural signals
- System Design, Data Collection and Evaluation of a Speech Dialogue System (Special Issue on Speech and Discourse Processing in Dialogue Systems)
- Sound localization under conditions of covered ears on the horizontal plane
- Single-Channel Multiple Regression for In-Car Speech Enhancement
- Adaptive Nonlinear Regression Using Multiple Distributed Microphones for In-Car Speech Recognition(Speech Enhancement, Multi-channel Acoustic Signal Processing)
- Speech Recognition Using Finger Tapping Timings(Speech and Hearing)
- CIAIR In-Car Speech Corpus : Influence of Driving Status(Corpus-Based Speech Technologies)
- Construction and Evaluation of a Large In-Car Speech Corpus(Speech Corpora and Related Topics, Corpus-Based Speech Technologies)
- Blind Source Separation Using Dodecahedral Microphone Array under Reverberant Conditions
- FOREWORD: Special Section on Robust Speech Processing in Realistic Environments
- Method for determining sound localization by auditory masking
- Acoustic Model Training Using Pseudo-Speaker Features Generated by MLLR Transformations for Robust Speaker-Independent Speech Recognition
- CENSREC-4: An evaluation framework for distant-talking speech recognition in reverberant environments
- Classification of speech under stress by physical modeling
- A Graph-Based Spoken Dialog Strategy Utilizing Multiple Understanding Hypotheses
- Classification of speech under stress using physical features based on two-mass model