Probabilistic Memory Capacity of Recurrent Neural Networks
Abstract
In this paper, the probabilistic memory capacity of recurrent neural networks (RNNs) is investigated. This probabilistic capacity is determined uniquely once the network architecture and the number of patterns to be memorized are fixed; it is independent of the learning method and of the network dynamics, and it provides an upper bound on the memory capacity achievable by any learning algorithm when memorizing random patterns. The network is assumed to consist of N units, each of which takes one of two states, so the total number of possible patterns is 2^N. The probabilities are obtained by determining whether connection weights exist that can store M random patterns as equilibrium states. A theoretical method for this purpose is derived, and the actual calculation is carried out by the Monte Carlo method. The probabilistic memory capacity is important both in applying RNNs to real-world problems and in evaluating the quality of learning algorithms.
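The abstract's Monte Carlo procedure can be sketched as follows. This is a minimal illustration, not the paper's actual method: for each unit of an N-unit binary network, a pattern x is an equilibrium exactly when sign(sum_j w[i,j]·x[j]) = x[i], which is a linear separability condition on the M patterns; here that condition is tested heuristically with perceptron updates under a fixed iteration budget. All function names (`patterns_storable`, `capacity_probability`) and the budget `max_iter` are assumptions introduced for this sketch.

```python
import numpy as np

def patterns_storable(patterns, max_iter=1000):
    """Heuristically test whether connection weights exist that make every
    given +/-1 pattern an equilibrium (fixed point) of an N-unit network.

    For each unit i, the equilibrium condition x_i * (w_i . x) > 0 must hold
    for all M patterns; this is a linear separability problem that we attack
    with perceptron learning, which converges iff separating weights exist
    (the iteration budget makes the test conservative, not exact).
    """
    patterns = np.asarray(patterns, dtype=float)
    M, N = patterns.shape
    for i in range(N):
        # Each row is pattern x^mu scaled by its target state x_i^mu,
        # so the condition becomes (row . w_i) > 0 for every mu.
        signed = patterns * patterns[:, i:i + 1]
        w = np.zeros(N)
        for _ in range(max_iter):
            margins = signed @ w
            violated = margins <= 0
            if not violated.any():
                break  # all M equilibrium conditions hold for unit i
            w += violated @ signed  # perceptron update on violated rows
        else:
            return False  # no separating weights found within the budget
    return True

def capacity_probability(N, M, trials=200, rng=None):
    """Monte Carlo estimate of the probability that M random patterns
    over N two-state units can all be stored as equilibrium states."""
    rng = np.random.default_rng(rng)
    hits = sum(
        patterns_storable(rng.choice([-1.0, 1.0], size=(M, N)))
        for _ in range(trials)
    )
    return hits / trials
```

Sweeping M for fixed N and locating where the estimated probability drops from near 1 toward 0 gives an empirical picture of the capacity the abstract describes; any single pattern (M = 1) is always storable, so the estimate starts at 1.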
- A paper from Kobe City College of Technology (神戸市立工業高等専門学校)
- 1997-02-28