A Regularization Method for Neural Network Learning that Minimizes Estimation Error (Special Issue on Neurocomputing)
Overview
A new regularization cost function for generalization in real-valued function learning is proposed. The cost function is derived from the maximum likelihood method using a modified sample distribution, and consists of a sum of squared errors plus a stabilizer that is a function of integrated squared derivatives. Each regularization parameter that gives the minimum estimation error can be obtained uniquely and non-empirically; the parameters are not constants and change in value during learning. Numerical simulation shows that this cost function predicts the true error accurately and is effective in neural network learning.
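The abstract describes a cost of the form "sum of squared errors + stabilizer built from integrated squared derivatives." As a rough illustration only (not the paper's actual formulation), the sketch below evaluates such a cost for a toy polynomial model, approximating the integral of the squared derivative by finite differences on a grid; the function names, the fixed `lam` value, and the toy data are all assumptions for illustration, whereas the paper determines the regularization parameters non-empirically and lets them vary during learning.

```python
import numpy as np

def regularized_cost(predict, params, x, y, lam, grid):
    """Hypothetical sketch: sum of squared errors plus a stabilizer
    approximating the integral of the squared derivative of the model."""
    residuals = predict(params, x) - y
    sse = np.sum(residuals ** 2)
    # Approximate \int (f'(t))^2 dt by finite differences on a dense grid.
    f = predict(params, grid)
    df = np.diff(f) / np.diff(grid)
    stabilizer = np.sum(df ** 2 * np.diff(grid))
    return sse + lam * stabilizer

# Toy model standing in for a neural network: cubic polynomial,
# params ordered from constant term to highest degree.
def predict(params, x):
    return np.polyval(params[::-1], x)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(x.size)
grid = np.linspace(0.0, 1.0, 200)
params = np.array([0.0, 1.0, 0.0, 0.0])  # f(x) = x

cost = regularized_cost(predict, params, x, y, lam=0.01, grid=grid)
```

With `lam = 0` the cost reduces to the plain sum of squared errors; increasing `lam` penalizes models with large derivatives, which is the smoothing role the stabilizer plays in the abstract.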
- A paper of the Institute of Electronics, Information and Communication Engineers (IEICE)
- 1994-04-25