Regularization Strategies and Empirical Bayesian Learning for MKL
Abstract
Multiple kernel learning (MKL) has received considerable attention recently. In this paper, we show how different MKL algorithms can be understood as applications of different types of regularization on the kernel weights. We show that many algorithms based on Ivanov regularization have corresponding Tikhonov regularization formulations. In addition, we show that the two regularization strategies are connected by the block-norm formulation. The Tikhonov-regularization-based formulation of MKL allows us to consider a generative probabilistic model behind MKL. Based on this model, we propose learning algorithms for the kernel weights through the maximization of the marginalized likelihood.
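The Ivanov/Tikhonov correspondence described in the abstract can be sketched in standard MKL notation. This is a generic reconstruction, not the paper's exact formulation: the loss $L$, component functions $f_m$ in RKHSs $\mathcal{H}_m$, kernel weights $d_m$, and constants $C$, $\mu$ are assumed symbols.

```latex
% Ivanov-regularized MKL: the kernel weights d_m are constrained.
\min_{f,\, d \ge 0}\; L\Big(\sum_{m=1}^M f_m\Big)
  + \frac{C}{2}\sum_{m=1}^M \frac{\|f_m\|_{\mathcal{H}_m}^2}{d_m}
  \quad \text{s.t.}\quad \sum_{m=1}^M d_m \le 1.

% Tikhonov-regularized MKL: the same weights are penalized instead.
\min_{f,\, d \ge 0}\; L\Big(\sum_{m=1}^M f_m\Big)
  + \frac{C}{2}\sum_{m=1}^M \frac{\|f_m\|_{\mathcal{H}_m}^2}{d_m}
  + \frac{\mu}{2}\sum_{m=1}^M d_m.

% Minimizing the Tikhonov objective over each d_m in closed form
% (d_m^* = \|f_m\|_{\mathcal{H}_m}\sqrt{C/\mu}) eliminates the weights
% and yields the block-1-norm formulation connecting the two strategies:
\min_{f}\; L\Big(\sum_{m=1}^M f_m\Big)
  + \sqrt{C\mu}\,\sum_{m=1}^M \|f_m\|_{\mathcal{H}_m}.
```

The per-weight minimization step follows from setting the derivative with respect to $d_m$ to zero; the resulting block 1-norm is what induces sparsity over kernels in $\ell_1$-type MKL.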
- 2010-10-28
Authors
-
Suzuki Taiji
Department of Mathematical Informatics, The University of Tokyo
Tomioka Ryota
Department of Mathematical Informatics, The University of Tokyo
Related Papers
- GAME-THEORETIC DERIVATION OF DISCRETE DISTRIBUTIONS AND DISCRETE PRICING FORMULAS
- Independent component analysis by direct density-ratio estimation (Neurocomputing)
- Output Divergence Criterion for Active Learning in Collaborative Settings
- Output divergence criterion for active learning in collaborative settings (Mathematical Modeling and Problem Solving / Bioinformatics)
- Relative Density-Ratio Estimation for Robust Distribution Comparison (Information-Based Induction Sciences and Machine Learning)