Improved Sample Complexity Bounds for Parameter Estimation
Abstract
Various authors have proposed probabilistic extensions of Valiant's PAC (Probably Approximately Correct) learning model in which the target to be learned is a (conditional) probability distribution. In this paper, we improve upon the best known upper bounds on the sample complexity of the parameter estimation part of the learning problem for distributions and stochastic rules over a finite domain, measured with respect to the Kullback-Leibler divergence (KL-divergence). In particular, we improve the O(1/ε^2) upper bound due to Abe, Takeuchi, and Warmuth [2] to a bound of order O(1/ε). In obtaining our results, we make use of the properties of a specific estimator (a slightly modified maximum likelihood estimator) with respect to the KL-divergence, whereas the previously known upper bounds were obtained using the uniform convergence technique.
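The abstract does not spell out which modification of the maximum likelihood estimator is used. As a hedged illustration only, the sketch below uses an add-constant ("Laplace-smoothed") estimate, one standard way to modify the MLE over a finite domain so that the KL-divergence to the estimate stays finite, together with the KL-divergence it is measured by. The function names and the smoothing constant `alpha` are assumptions for illustration, not the paper's construction.

```python
import numpy as np

def smoothed_mle(counts, alpha=1.0):
    """Add-constant ('smoothed') maximum-likelihood estimate of a
    distribution over a finite domain. The exact modification used in
    the paper is not specified in the abstract; Laplace-style smoothing
    (alpha=1.0 here, an assumed choice) is one common way to keep every
    estimated probability strictly positive, so the KL-divergence is finite."""
    counts = np.asarray(counts, dtype=float)
    return (counts + alpha) / (counts.sum() + alpha * counts.size)

def kl_divergence(p, q):
    """KL-divergence D(p || q) = sum_i p_i * log(p_i / q_i);
    terms with p_i = 0 contribute 0 by the usual convention."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Usage: estimate a 3-symbol distribution from a small i.i.d. sample
# and measure the estimation error in KL-divergence.
rng = np.random.default_rng(0)
true_p = np.array([0.5, 0.3, 0.2])
sample = rng.choice(3, size=100, p=true_p)
counts = np.bincount(sample, minlength=3)
q_hat = smoothed_mle(counts)
print(q_hat, kl_divergence(true_p, q_hat))
```

In this setting, the paper's result concerns how large the sample must be before D(p || q_hat) falls below ε with high probability; the improvement is from a sample size growing as 1/ε^2 to one growing as 1/ε.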
- A paper of the Institute of Electronics, Information and Communication Engineers (IEICE)
- 1995-05-25