Neural Network Multiprocessors Applied with Dynamically Reconfigurable Pipeline Architecture (Special Issue on Multimedia, Analog and Processing LSIs)
Abstract
Processing elements (PEs) with a dynamically reconfigurable pipeline architecture allow high-speed computation of multi-layer perceptrons with the backpropagation (BP) learning rule, a widely used neural model. The architecture, originally proposed for a single chip, is extended here to a multiprocessor structure. Each PE holds elements of the synaptic weight matrix and of the input vector. Multiple local buses, a swapping mechanism for the weight matrix and the input vector, and inter-PE transfer commands allow the implementation of neural networks larger than the physical PE array. Peak performance, estimated from measurements of a single PE, is 21.2 MCPS in the evaluation phase and 8.0 MCUPS in the learning phase at a clock frequency of 50 MHz. In this configuration, multi-layer perceptrons with 768 neurons and 131,072 synapses are trained by the BP learning rule, which corresponds to 1357 MCPS and 512 MCUPS with 64 PEs and 32 neurons per PE.
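The multiprocessor figures quoted above follow from linearly scaling the single-PE measurements by the PE count. A minimal arithmetic sketch (the linear-scaling assumption, i.e. no interconnect overhead, is inferred from the abstract's numbers, not stated by the authors):

```python
# Aggregate throughput implied by the abstract's single-PE measurements.
# Per-PE figures and the PE count come from the abstract; assuming
# ideal linear scaling across PEs (no interconnect overhead).

PE_COUNT = 64
PEAK_MCPS_PER_PE = 21.2   # evaluation phase, per PE, at 50 MHz
PEAK_MCUPS_PER_PE = 8.0   # learning phase, per PE, at 50 MHz

total_mcps = PE_COUNT * PEAK_MCPS_PER_PE    # 1356.8, quoted as 1357 MCPS
total_mcups = PE_COUNT * PEAK_MCUPS_PER_PE  # 512.0 MCUPS

print(round(total_mcps), total_mcups)
```

The match between 64 × 21.2 ≈ 1357 and 64 × 8.0 = 512 confirms that the reported multiprocessor numbers are extrapolated peak values rather than measured whole-array throughput.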
- Paper published by the Institute of Electronics, Information and Communication Engineers (IEICE)
- 1994-12-25
Authors
-
Morishita Takayuki
Faculty of Computer Science and System Engineering, Okayama Prefectural University
-
Teramoto Iwao
Faculty of Computer Science and System Engineering, Okayama Prefectural University
Related Papers
- A Digital Neural Network Coprocessor with a Dynamically Reconfigurable Pipeline Architecture (Special Issue on New Architecture LSIs)
- A BiCMOS Analog Neural Network with Dynamically Updated Weights