A Sparse Memory Access Architecture for Digital Neural Network LSIs (Special Issue on New Concept Device and Novel Architecture LSIs)
Abstract
A sparse memory access architecture proposed to achieve a high-computational-speed neural-network LSI is described in detail. The architecture uses two key techniques, compressible synapse-weight neuron calculation and differential neuron operation, to reduce the number of accesses to synapse-weight memories and the number of neuron calculations without incurring an accuracy penalty. A test chip based on this architecture has 96 parallel data-driven processing units and enough memory for 12,288 synapse weights. In a pattern recognition example, the number of memory accesses and neuron calculations was reduced to 0.87% of that needed in the conventional method, and the practical performance was 18 GCPS. The sparse memory access architecture is also effective when the synapse weights are stored in off-chip memory.
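The two techniques named in the abstract can be illustrated with a minimal sketch (not the chip's actual circuit-level algorithm; all array sizes and values below are hypothetical). The idea of differential neuron operation is to update each neuron's running sum using only the inputs that changed since the last step, so only the corresponding columns of the synapse-weight memory are read:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 64, 16
W = rng.standard_normal((n_out, n_in))  # synapse weights (hypothetical values)

def full_update(x):
    """Conventional method: read every synapse weight for every neuron."""
    return W @ x

def differential_update(sums, x_old, x_new):
    """Differential operation: touch only weights of inputs that changed."""
    changed = np.flatnonzero(x_new != x_old)      # sparse set of changed inputs
    for i in changed:
        # one weight-column read per changed input, instead of n_in reads
        sums = sums + W[:, i] * (x_new[i] - x_old[i])
    return sums, changed.size

x0 = rng.standard_normal(n_in)
x1 = x0.copy()
x1[:4] += 1.0                                     # only 4 of 64 inputs change
sums = full_update(x0)
sums, n_cols = differential_update(sums, x0, x1)
assert np.allclose(sums, full_update(x1))         # same result, far fewer reads
print(f"{n_cols} of {n_in} weight columns accessed")
```

Here only 4 of 64 weight columns are read, which mirrors how the reported 0.87% figure arises when activity between successive inputs is sparse.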
- A paper of the Institute of Electronics, Information and Communication Engineers (IEICE)
- 1997-07-25
Authors
-
FUJITA Osamu
NTT Electronics Technology Corporation
-
UCHIMURA Kuniharu
NTT System Electronics Laboratories
-
Uchimura Keiichi
The Department of Electrical and Computer Science, Kumamoto University
-
AIHARA Kimihisa
NTT Network Service System Laboratories
Related Papers
- Self-Learning Analog Neural Network LSI with High-Resolution Non-Volatile Analog Memory and a Partially-Serial Weight-Update Architecture (Special Issue on New Concept Device and Novel Architecture LSIs)
- Electromagnetic Radiation Noise from Surface Gas Discharges : Mechanisms of Propagation, Coupling and Formation (Special Issue on Discharge and Electromagnetic Interference)