Image Associative Memory by Recurrent Neural Sub-networks (Special Section on Nonlinear Theory and its Applications)
Abstract
Gray scale images are represented by recurrent neural subnetworks which, together with a competition layer, create an associative memory. A single recurrent subnetwork N_i implements a stochastic nonlinear fractal operator F_i, constructed for the corresponding original image. We show that under realistic assumptions F_i has a unique attractor located in the vicinity of the original image; therefore one subnetwork represents one original image. The associative recall is implemented in two stages. First, the competition layer finds the most invariant subnetwork for the given noisy input image g. Next, the selected recurrent subnetwork produces, in a few (5-10) global iterations, a high-quality approximation of the original image. The degree of invariance of the subnetwork N_i on the input g is measured by the norm ||g - F_i(g)||. We have experimentally verified that associative recall for images of natural scenes with pixel values in [0,255] is successful even when the Gaussian noise has a standard deviation σ as large as 500. Moreover, the norm computed on only 10% of pixels chosen randomly from the images still recalls a close approximation of the original image. Compared to the Amari-Hopfield associative memory, our solution has no spurious states, is less sensitive to noise, and its network complexity is significantly lower. However, for each new stored image a new subnetwork must be added.
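To make the two-stage recall concrete, the following is a minimal Python/NumPy sketch under stated assumptions: each stored image's fractal operator F_i is assumed to be available as a callable, and the names `operators`, `sample_frac`, and `n_iters` are illustrative, since the abstract does not specify how the operators are constructed or how the pixel subset is sampled.

```python
import numpy as np

def associative_recall(g, operators, n_iters=8, sample_frac=1.0, rng=None):
    """Two-stage recall sketch.

    Stage 1: the competition layer picks the most invariant subnetwork,
             i.e. the operator F_i minimizing ||g - F_i(g)||.
    Stage 2: the winning operator is iterated a few (5-10) times so the
             state converges toward its unique attractor near the stored image.

    `operators` is a hypothetical list of callables, one per stored image;
    the paper builds them as stochastic nonlinear fractal operators, whose
    construction is not reproduced here (assumption).
    """
    rng = np.random.default_rng() if rng is None else rng

    # Optionally measure the invariance norm on a random pixel subset,
    # as in the 10%-of-pixels experiment (assumption: uniform sampling).
    mask = rng.random(g.shape) < sample_frac

    # Stage 1: competition layer -- argmin_i ||g - F_i(g)|| on the subset.
    norms = [np.linalg.norm((g - F(g))[mask]) for F in operators]
    best = int(np.argmin(norms))

    # Stage 2: a few global iterations of the selected subnetwork.
    x = g.astype(float)
    for _ in range(n_iters):
        x = operators[best](x)
    return best, np.clip(x, 0, 255)
```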
- Paper published by the Institute of Electronics, Information and Communication Engineers (IEICE)
- 1996-10-25
Authors
-
Cichocki Andrzej
FRP RIKEN, ABS Laboratory
-
Skarbek Wladyslaw
FRP RIKEN, ABS Laboratory