Non-sparse Feature Mixing in Object Classification
Abstract
Recent research has shown that combining various image features significantly improves object classification performance. Multiple kernel learning (MKL) approaches, where the mixing weights at the kernel level are optimized simultaneously with the classifier parameters, give a well-founded framework to control the importance of each feature. As alternatives, we can also use boosting approaches, where single-kernel classifier outputs are combined with the optimal mixing weights. Most of these approaches employ an ℓ1-regularization on the mixing weights, which promotes sparse solutions. Although sparsity offers several advantages, e.g., interpretability and less computation time in the test phase, the accuracy of sparse methods is often even worse than that of the simplest flat-weight combination. In this paper, we compare the accuracy of our recently developed non-sparse methods with the standard sparse counterparts on the PASCAL VOC 2008 data set.
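The flat-weight baseline mentioned in the abstract combines M kernel Gram matrices with uniform weights beta_m = 1/M, in contrast to ℓ1-regularized MKL, which learns a sparse weight vector. A minimal sketch (the kernel names and toy values are illustrative, not from the paper):

```python
# Flat (uniform) kernel mixing: K = sum_m beta_m * K_m with beta_m = 1/M.
# This is the simple non-sparse baseline that sparse MKL is compared against;
# an l1-regularized MKL would instead learn the beta vector, driving many
# entries to zero.

def mix_kernels(kernels, weights=None):
    """Combine M Gram matrices (lists of lists) into one mixed kernel."""
    M = len(kernels)
    if weights is None:
        weights = [1.0 / M] * M          # flat mixing weights
    n = len(kernels[0])
    K = [[0.0] * n for _ in range(n)]
    for beta, Km in zip(weights, kernels):
        for i in range(n):
            for j in range(n):
                K[i][j] += beta * Km[i][j]
    return K

# Two toy 2x2 kernels, e.g., from a colour feature and a shape feature.
K_colour = [[1.0, 0.2], [0.2, 1.0]]
K_shape  = [[1.0, 0.8], [0.8, 1.0]]
K = mix_kernels([K_colour, K_shape])
print(K[0][1])  # 0.5 * 0.2 + 0.5 * 0.8 = 0.5
```

The mixed matrix K can then be fed to any kernel classifier (e.g., an SVM with a precomputed kernel); sparse methods differ only in how the weights are chosen.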
- Paper published by the Information Processing Society of Japan (IPSJ)
- 2009-11-19
Authors
-
Shinichi Nakajima
Optical Research Laboratory, Nikon Corporation
-
Motoaki Kawanabe
Fraunhofer Institute FIRST
-
Klaus-Robert Müller
Technische Universität Berlin
-
Alexander Binder
Fraunhofer Institute FIRST
-
Ulf Brefeld
Yahoo! Research
Related Papers
- F-number control for Accurate Depth Estimation with Color-Filtered Aperture
- Non-sparse Feature Mixing in Object Classification
- Exhaustive Search of Feature Subsets for Support Vector Machine Classification