Direct Importance Estimation with Gaussian Mixture Models
Abstract
The ratio of two probability densities is called the importance, and its estimation has recently attracted a great deal of attention because the importance can be used for various data processing purposes. In this paper, we propose a new importance estimation method using Gaussian mixture models (GMMs). Our method is an extension of the Kullback-Leibler importance estimation procedure (KLIEP), an importance estimation method based on linear or kernel models. An advantage of GMMs is that covariance matrices can also be learned through an expectation-maximization procedure, so the proposed method, which we call the Gaussian mixture KLIEP (GM-KLIEP), is expected to work well when the true importance function exhibits high correlation. Through experiments, we show the validity of the proposed approach.
- 2009-10-01
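The abstract describes the method only at a high level, so here is a minimal sketch of the underlying KLIEP idea in Python. This is not the paper's GM-KLIEP: for brevity the sketch fixes the Gaussian centers and widths and learns only the mixing coefficients by projected gradient ascent, which amounts to plain KLIEP with Gaussian basis functions, whereas GM-KLIEP additionally updates means and covariance matrices through an EM-like procedure. All names (gaussian, kliep_fit) and parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def gaussian(x, mu, sigma2):
    # Isotropic Gaussian density N(x; mu, sigma2 * I), evaluated row-wise.
    # (Assumption: a shared scalar width; GM-KLIEP learns full covariances.)
    d = x.shape[1]
    diff = x - mu
    return np.exp(-0.5 * np.sum(diff ** 2, axis=1) / sigma2) / (2.0 * np.pi * sigma2) ** (d / 2.0)

def kliep_fit(x_nu, x_de, centers, sigma2=1.0, lr=1e-3, n_iter=2000):
    # KLIEP objective: maximize (1/n_nu) * sum_i log w(x_nu_i)
    # subject to (1/n_de) * sum_j w(x_de_j) = 1 and w >= 0,
    # with the importance model w(x) = sum_k a_k * N(x; mu_k, sigma2 * I).
    Phi_nu = np.column_stack([gaussian(x_nu, mu, sigma2) for mu in centers])
    Phi_de = np.column_stack([gaussian(x_de, mu, sigma2) for mu in centers])
    a = np.ones(len(centers)) / len(centers)
    for _ in range(n_iter):
        w_nu = Phi_nu @ a
        a += lr * (Phi_nu / w_nu[:, None]).mean(axis=0)  # gradient ascent step
        a = np.maximum(a, 0.0)                           # project onto nonnegativity
        a /= (Phi_de @ a).mean()                         # enforce normalization constraint
    return a

# Toy 1-D usage: numerator density N(0, 1), denominator density N(0.5, 1.5^2).
rng = np.random.default_rng(0)
x_nu = rng.normal(0.0, 1.0, size=(200, 1))
x_de = rng.normal(0.5, 1.5, size=(200, 1))
centers = x_nu[:20]  # KLIEP convention: basis centers drawn from numerator samples
a = kliep_fit(x_nu, x_de, centers, sigma2=0.5)
w_de = np.column_stack([gaussian(x_de, mu, 0.5) for mu in centers]) @ a
print(w_de[:5])  # estimated importance values at the first denominator samples
```

The two projection steps (clipping to nonnegative weights and rescaling so that the denominator-sample average of w equals one) keep the estimate a feasible importance model at every iteration, mirroring the constraint set of the KLIEP optimization.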
Authors
-
Sugiyama Masashi
Tokyo Institute of Technology
-
Yamada Makoto
Tokyo Institute of Technology
Related papers
- Statistical active learning for efficient value function approximation in reinforcement learning (Neurocomputing)
- Lighting Condition Adaptation for Perceived Age Estimation
- Computationally Efficient Multi-task Learning with Least-squares Probabilistic Classifiers
- Improving the Accuracy of Least-Squares Probabilistic Classifiers
- A Unified Framework of Density Ratio Estimation under Bregman Divergence
- Adaptive importance sampling with automatic model selection in value function approximation (Neurocomputing)
- Improving Model-based Reinforcement Learning with Multitask Learning
- Least-Squares Conditional Density Estimation
- Direct Importance Estimation with a Mixture of Probabilistic Principal Component Analyzers
- Statistical Analysis of Kernel Density-Ratio Estimation (Analysis of Learning Problems, Text and Web Mining, General)
- A Semi-Supervised Approach to Perceived Age Prediction from Face Images
- Information-maximization clustering: analytic solution and model selection (Information-Based Induction Sciences and Machine Learning)
- Conditional Density Estimation Based on Density Ratio Estimation
- A density ratio approach to two-sample test (Pattern Recognition and Media Understanding)
- A density ratio approach to two-sample test (Information-Based Induction Sciences and Machine Learning)
- Theoretical Analysis of Density Ratio Estimation
- FOREWORD
- Superfast-Trainable Multi-Class Probabilistic Classifier by Least-Squares Posterior Fitting
- Direct Importance Estimation with Gaussian Mixture Models
- Dependence minimizing regression with model selection for non-linear causal inference under non-Gaussian noise (Information-Based Induction Sciences and Machine Learning)
- Artist agent A[2]: stroke painterly rendering based on reinforcement learning (Pattern Recognition and Media Understanding)
- Artist agent A[2]: stroke painterly rendering based on reinforcement learning (Information-Based Induction Sciences and Machine Learning)
- Least-Squares Independence Test
- Relative Density-Ratio Estimation for Robust Distribution Comparison (Information-Based Induction Sciences and Machine Learning)
- Density Difference Estimation
- Winning the Kaggle Algorithmic Trading Challenge with the Composition of Many Models and Feature Engineering
- Artist Agent: A Reinforcement Learning Approach to Automatic Stroke Generation in Oriental Ink Painting
- Early Stopping Heuristics in Pool-Based Incremental Active Learning for Least-Squares Probabilistic Classifier
- Computationally Efficient Multi-Label Classification by Least-Squares Probabilistic Classifiers
- Multi-Task Approach to Reinforcement Learning for Factored-State Markov Decision Problems
- Constrained Least-Squares Density-Difference Estimation
- A Density-ratio Framework for Statistical Data Processing
- Model-Based Policy Gradients with Parameter-Based Exploration by Least-Squares Conditional Density Estimation
- On Kernel Parameter Selection in Hilbert-Schmidt Independence Criterion