Statistical Mechanics of On-line Node-perturbation Learning
Abstract
Node-perturbation learning (NP-learning) is a stochastic gradient descent algorithm that estimates the gradient of an objective function by applying small perturbations to the outputs of the network. It can be applied to problems where the objective function is not explicitly formulated, including reinforcement learning. In this paper, we show that node-perturbation learning can be formulated as on-line learning in a linear perceptron with noise, so that the differential equations of the order parameters and the generalization error can be derived by the same statistical-mechanical methods used to analyze learning in a linear perceptron. From the analytical results, we show that cross-talk noise, which originates in the errors of the other outputs, increases the generalization error as the number of outputs increases.
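The gradient estimate described above can be sketched in code. The following is a minimal illustration, not the paper's exact setup: the dimensions, learning rate, perturbation amplitude `sigma`, squared-error objective, and the teacher/student (`B`/`W`) construction are all assumptions made for the example. Perturbing the student's outputs by Gaussian noise `xi` and comparing the perturbed and unperturbed errors yields, in expectation, the true gradient with respect to the outputs:

```python
import numpy as np

rng = np.random.default_rng(0)

N, K = 100, 3      # input dimension, number of outputs (illustrative sizes)
sigma = 0.1        # perturbation amplitude (assumed)
eta = 0.05         # learning rate (assumed)

B = rng.standard_normal((K, N)) / np.sqrt(N)   # teacher (target) weights
W = rng.standard_normal((K, N)) / np.sqrt(N)   # student weights

def error(y, t):
    """Squared-error objective; the algorithm only needs its value, not its gradient."""
    return 0.5 * np.sum((y - t) ** 2)

errs = []
for step in range(20_000):
    x = rng.standard_normal(N) / np.sqrt(N)    # random input example
    t = B @ x                                  # teacher output
    y = W @ x                                  # student output
    xi = sigma * rng.standard_normal(K)        # perturbation of the outputs
    # Finite-difference gradient estimate w.r.t. the outputs:
    # E[(error(y+xi) - error(y)) / sigma^2 * xi] = y - t, the exact gradient.
    delta = (error(y + xi, t) - error(y, t)) / sigma**2
    W -= eta * delta * np.outer(xi, x)         # node-perturbation update
    errs.append(error(y, t))
```

Note that the update for output `k` uses the scalar `delta`, which sums the errors of all `K` outputs; with more than one output, the errors of the other outputs leak into each row's update. This is the cross-talk noise analyzed in the paper.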
Authors
- Kazuyuki Hara (College of Industrial Technology, Nihon University)
- Kentaro Katahira (Japan Science Technology Agency, ERATO Okanoya Emotional Information Project | Brain Science Institute)
- Kazuo Okanoya (Brain Science Institute, RIKEN | Japan Science Technology Agency, ERATO Okanoya Emotional Information Project)
- Masato Okada (Graduate School of Frontier Science, The University of Tokyo | Brain Science Institute, RIKEN | Japan Science Technology Agency)
Related papers
- Statistical Mechanics of On-line Node-perturbation Learning
- Theoretical analysis of learning speed in gradient descent algorithm replacing derivative with constant