An interpretable neural network ensemble (Mathematical Modeling and Problem Solving)
Abstract
The objective of this study is to build a neural network classifier that is not only reliable but also, unlike most presently available neural networks, logically interpretable in a human-plausible manner. Most existing studies of rule extraction from trained neural networks focus on extracting rules from network models that were designed without rule extraction in mind, so that after training they are effectively treated as black boxes; this makes rule extraction a hard task. In this study we construct a neural network ensemble designed with rule extraction in mind, whose function can easily be interpreted to generate logical rules that are understandable to humans. We believe that the interpretability of neural networks improves their reliability and usability when they are applied to critical real-world problems.
- A paper of the Information Processing Society of Japan (IPSJ)
- 2007-05-17
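The abstract above does not spell out the model's structure, so the following is only a minimal illustrative sketch, not the authors' method: a hypothetical ensemble of single-layer perceptrons trained on bootstrap resamples of a toy dataset, chosen because each member is simple enough to be read back as an IF-THEN threshold rule, in the spirit of the interpretability goal described above. The toy data, the ensemble size, and all function names are assumptions made for illustration.

```python
# Hypothetical sketch of an interpretable ensemble (not the paper's model):
# each member is a linear threshold unit, so its behaviour can be printed
# as a human-readable IF-THEN rule, and the ensemble votes over those rules.
import numpy as np

rng = np.random.default_rng(0)

# Toy, linearly separable 2-D data: class 1 when x0 + x1 > 1, else class 0.
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

def train_perceptron(X, y, epochs=50, lr=0.1):
    """Plain perceptron learning; returns weight vector w and bias b."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = int(w @ xi + b > 0.0)
            err = yi - pred          # -1, 0, or +1
            w += lr * err * xi
            b += lr * err
    return w, b

# Train an ensemble of members on bootstrap resamples of the data.
members = []
for _ in range(5):
    idx = rng.integers(0, len(X), size=len(X))
    members.append(train_perceptron(X[idx], y[idx]))

def ensemble_predict(x):
    """Majority vote over the ensemble members."""
    votes = sum(int(w @ x + b > 0.0) for w, b in members)
    return int(votes > len(members) / 2)

# Read each member back as an explicit logical rule over the input features.
for i, (w, b) in enumerate(members):
    print(f"member {i}: IF {w[0]:+.2f}*x0 {w[1]:+.2f}*x1 {b:+.2f} > 0 "
          f"THEN class 1 ELSE class 0")

print("ensemble vote for (0.9, 0.8):", ensemble_predict(np.array([0.9, 0.8])))
```

Restricting each member to a linear threshold unit trades accuracy for transparency: the printed rules describe each member's behaviour exactly, and the ensemble's decision is just a majority vote over those rules.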
Authors
-
Hartono Pitoyo
Future University Hakodate
-
Hashimoto Shuji
Waseda University
Related papers
- Multiple Signal Classification by Aggregated Microphones (Microphone Array, Multi-channel Acoustic Signal Processing)
- Self-Oscillating Gel Actuator for Chemical Robotics
- Edge Field Analysis (Advanced Image Technology)
- 2J1548 Creation of a Soft Robot Using a Self-Oscillating Gel Mimicking Cardiac Muscle (Measurement and Others, 49th Annual Meeting of the Biophysical Society of Japan)
- Curve Detection Using Inverse Hough Transform as a Parameter Optimization Problem (Signal Processing and Image Processing Technologies and Their Applications)
- 2A1-A09 Switching Control of Mobile Reaction Wheel Pendulum