Analytic Optimization of Shrinkage Parameters Based on Regularized Subspace Information Criterion(Neural Networks and Bioengineering)
Abstract
To obtain high generalization capability in supervised learning, model parameters should be optimized, i.e., determined so that the generalization error is minimized. Since the generalization error is inaccessible in practice, however, model parameters are usually determined so that an estimate of the generalization error is minimized. A standard procedure for model parameter optimization is to first prepare a finite set of candidate model parameter values, estimate the generalization error for each candidate, and then choose the best one. Increasing the number of candidates can improve the optimization quality, but it also increases the computational cost. In this paper, we give methods for analytically finding the optimal model parameter value from a set of infinitely many candidates. This maximizes the optimization quality while keeping the computational cost reasonable.
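The grid-based procedure the abstract contrasts against can be sketched as follows. This is a hypothetical illustration, not the paper's method: it selects a shrinkage (ridge) parameter from a finite candidate grid by minimizing a generalization-error estimate. The paper derives an analytic optimum based on the regularized subspace information criterion (RSIC); here, closed-form leave-one-out cross-validation error is used as a stand-in estimator, and all variable names are illustrative.

```python
import numpy as np

# Synthetic regression data (illustrative only).
rng = np.random.default_rng(0)
n, d = 50, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.5 * rng.normal(size=n)

def loo_error(lam):
    """Closed-form leave-one-out error of ridge regression for one candidate."""
    # Hat matrix H = X (X^T X + lam I)^{-1} X^T.
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(d), X.T)
    resid = y - H @ y
    # Leave-one-out residuals in closed form: e_i / (1 - H_ii).
    return np.mean((resid / (1.0 - np.diag(H))) ** 2)

# Finite set of candidate shrinkage parameter values (the "grid").
candidates = np.logspace(-3, 2, 20)
errors = [loo_error(lam) for lam in candidates]
best = candidates[int(np.argmin(errors))]
print(f"best lambda on grid: {best:.4g}")
```

A finer grid improves the selected value but multiplies the cost, since each candidate requires its own error evaluation; the analytic approach proposed in the paper removes this trade-off by optimizing over all candidate values at once.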
- A publication of the Institute of Electronics, Information and Communication Engineers (IEICE)
- 2006-08-01
Authors
-
SUGIYAMA Masashi
Department of Computer Science, Tokyo Institute of Technology
-
SAKURAI Keisuke
Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology
-
Sakurai Keisuke
Department of Biophysics, Graduate School of Science, Kyoto University: Core Research for Evolutional Sc
-
Sugiyama Masashi
Department of Chemistry, Faculty of Science, Tokyo University of Science
-
Sugiyama Masashi
Department of Applied Chemistry, Yamanashi University
Related Papers
- Recent Advances and Trends in Large-Scale Kernel Methods
- Statistical active learning for efficient value function approximation in reinforcement learning (Neurocomputing)
- Improving the Accuracy of Least-Squares Probabilistic Classifiers
- Least Absolute Policy Iteration — A Robust Approach to Value Function Approximation
- 2P124 X-ray crystal structure analysis of RamR, a regulator of expression of the Salmonella multidrug efflux transporter AcrAB (Nucleic acid-binding proteins, 48th Annual Meeting of the Biophysical Society of Japan)
- A New Meta-Criterion for Regularized Subspace Information Criterion
- Approximating the Best Linear Unbiased Estimator of Non-Gaussian Signals with Gaussian Noise
- A new algorithm of non-Gaussian component analysis with radial kernel functions (Special issue: Information geometry and its applications)
- Methods of cross-domain object matching (Information-Based Induction Sciences and Machine Learning)
- Multi-task learning with least-squares probabilistic classifiers (Pattern Recognition and Media Understanding)
- Multi-task learning with least-squares probabilistic classifiers (Information-Based Induction Sciences and Machine Learning)
- Adaptive importance sampling with automatic model selection in value function approximation (Neurocomputing)
- Analytic Optimization of Adaptive Ridge Parameters Based on Regularized Subspace Information Criterion(Neural Networks and Bioengineering)
- Adaptive Ridge Learning in Kernel Eigenspace and Its Model Selection
- On Computational Issues of Semi-Supervised Local Fisher Discriminant Analysis
- THE SINGLE CELL RECORDINGS OF CONE PIGMENT KNOCK-IN MICE (Physiology, Abstracts of papers presented at the 76th Annual Meeting of the Zoological Society of Japan)
- PREPARATION OF A MOUSE MODEL HAVING GREEN-SENSITIVE CONE VISUAL PIGMENTS IN ROD PHOTORECEPTOR CELLS (Physiology, Abstracts of papers presented at the 74th Annual Meeting of the Zoological Society of Japan)
- Syntheses of New Artificial Zinc Finger Proteins Containing Trisbipyridine-ruthenium Amino Acid at The N-or C-terminus as Fluorescent Probes
- Constructing Kernel Functions for Binary Regression(Pattern Recognition)
- Optimal design of regularization term and regularization parameter by subspace information criterion
- Information-maximization clustering: analytic solution and model selection (Information-Based Induction Sciences and Machine Learning)
- New feature selection method for reinforcement learning: conditional mutual information reveals implicit state-reward dependency (Information-Based Induction Sciences and Machine Learning)
- Independent component analysis by direct density-ratio estimation (Neurocomputing)
- A New Meta-Criterion for Regularized Subspace Information Criterion(Pattern Recognition)
- Spectral Methods for Thesaurus Construction
- Adaptive importance sampling with automatic model selection in reward weighted regression (Neurocomputing)
- SERAPH: semi-supervised metric learning paradigm with hyper sparsity (Information-Based Induction Sciences and Machine Learning)
- Analysis and improvement of policy gradient estimation (Information-Based Induction Sciences and Machine Learning)
- Direct density-ratio estimation with dimensionality reduction via hetero-distributional subspace analysis (Information-Based Induction Sciences and Machine Learning)
- Output divergence criterion for active learning in collaborative settings (Mathematical Modeling and Problem Solving / Bioinformatics)
- Estimation of squared-loss mutual information from paired and unpaired samples (Information-Based Induction Sciences and Machine Learning)
- Dependence minimizing regression with model selection for non-linear causal inference under non-Gaussian noise (Information-Based Induction Sciences and Machine Learning)
- Canonical dependency analysis based on squared-loss mutual information (Information-Based Induction Sciences and Machine Learning)
- Artist agent A[2]: stroke painterly rendering based on reinforcement learning (Pattern Recognition and Media Understanding)
- Artist agent A[2]: stroke painterly rendering based on reinforcement learning (Information-Based Induction Sciences and Machine Learning)
- Generalization Error Estimation for Non-linear Learning Methods(Neural Networks and Bioengineering)
- Improving Precision of the Subspace Information Criterion(Neural Networks and Bioengineering)
- Canonical dependency analysis based on squared-loss mutual information (Pattern Recognition and Media Understanding)
- Change-Point Detection in Time-Series Data by Relative Density-Ratio Estimation (Information-Based Induction Sciences and Machine Learning)
- Modified Newton Approach to Policy Search (Information-Based Induction Sciences and Machine Learning)
- Computationally Efficient Multi-Label Classification by Least-Squares Probabilistic Classifier (Information-Based Induction Sciences and Machine Learning)
- Relative Density-Ratio Estimation for Robust Distribution Comparison (Information-Based Induction Sciences and Machine Learning)
- Change-Point Detection in Time-Series Data by Relative Density-Ratio Estimation
- Modified Newton Approach to Policy Search
- Squared-loss Mutual Information Regularization
- Computationally Efficient Multi-Label Classification by Least-Squares Probabilistic Classifier
- Feature Selection via l_1-Penalized Squared-Loss Mutual Information
- Semi-Supervised Learning of Class Balance under Class-Prior Change by Distribution Matching (Information-Based Induction Sciences and Machine Learning)
- Relative Density-Ratio Estimation for Robust Distribution Comparison
- Winning the Kaggle Algorithmic Trading Challenge with the Composition of Many Models and Feature Engineering (Information-Based Induction Sciences and Machine Learning)
- Direct Density Ratio Estimation for Large-scale Covariate Shift Adaptation
- Early Stopping Heuristics in Pool-Based Incremental Active Learning for Least-Squares Probabilistic Classifier (Information-Based Induction Sciences and Machine Learning)
- Efficient Sample Reuse in Policy Gradients with Parameter-based Exploration (Information-Based Induction Sciences and Machine Learning)
- Output Divergence Criterion for Active Learning in Collaborative Settings
- Photochromism of benzylviologens containing methyl groups on pyridinium rings and embedded in solid poly(N-vinyl-2-pyrrolidone) matrix.
- Clustering Unclustered Data : Unsupervised Binary Labeling of Two Datasets Having Different Class Balances
- Direct Approximation of Quadratic Mutual Information and Its Application to Dependence-Maximization Clustering
- Direct Learning of Sparse Changes in Markov Networks by Density Ratio Estimation
- Early Stopping Heuristics in Pool-Based Incremental Active Learning for Least-Squares Probabilistic Classifier
- Winning the Kaggle Algorithmic Trading Challenge with the Composition of Many Models and Feature Engineering
- Improving Importance Estimation in Pool-based Batch Active Learning for Approximate Linear Regression