Dynamic Sample Selection: Implementation
Abstract
The computational expense of training techniques, driven by the size of the data set, is among the most important factors in machine learning and neural networks. An oversized data set may cause rank-deficiencies of the Jacobian matrix, which plays an essential role in training techniques; the training then becomes not only computationally expensive but also ineffective. In [1] the authors introduced the theoretical grounds for dynamic sample selection, which has the potential to eliminate rank-deficiencies [2]. This study addresses the implementation of dynamic sample selection based on the theoretical material presented in [1]. The authors propose a sample selection algorithm that can be incorporated into an arbitrary optimization technique. Several experiments indicate that the algorithm's ability to select a proper set of samples at each iteration of training is very beneficial. Recently proposed approaches to sample selection [3]-[8] work reasonably well when the pattern-weight ratio (the number of training patterns divided by the number of network weights) is close to 1, and small improvements can still be detected at ratios of 2 or 3. The dynamic sample selection approach presented in this article can increase the convergence speed of first-order optimization techniques used for training MLP networks even at pattern-weight ratios as high as 15, and possibly higher.
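The abstract only outlines the approach; the concrete algorithm is in the paper itself. As a rough illustration of the general idea (re-select a small, currently informative subset of patterns at every iteration of a first-order optimizer), here is a minimal NumPy sketch. Everything in it is an assumption for illustration: the one-hidden-layer tanh network, the naive error-ranking selection criterion, the subset size, and the hypothetical function names (`init_mlp`, `grad_step`, `train_with_dynamic_selection`) are not the authors' algorithm from [1].

```python
import numpy as np

rng = np.random.default_rng(0)


def init_mlp(n_in, n_hidden, n_out):
    """One-hidden-layer tanh MLP (illustrative architecture, not from [1])."""
    return {
        "W1": rng.normal(0.0, 0.5, (n_in, n_hidden)),
        "b1": np.zeros(n_hidden),
        "W2": rng.normal(0.0, 0.5, (n_hidden, n_out)),
        "b2": np.zeros(n_out),
    }


def forward(params, X):
    H = np.tanh(X @ params["W1"] + params["b1"])
    return H, H @ params["W2"] + params["b2"]


def grad_step(params, X, Y, lr):
    """One plain first-order (gradient-descent) step on the mean squared error."""
    H, Yhat = forward(params, X)
    E = Yhat - Y                                  # per-sample output errors
    dW2 = H.T @ E / len(X)
    db2 = E.mean(axis=0)
    dH = (E @ params["W2"].T) * (1.0 - H ** 2)    # backprop through tanh
    dW1 = X.T @ dH / len(X)
    db1 = dH.mean(axis=0)
    for k, g in zip(("W1", "b1", "W2", "b2"), (dW1, db1, dW2, db2)):
        params[k] -= lr * g
    return 0.5 * np.mean(np.sum(E ** 2, axis=1))


def train_with_dynamic_selection(params, X, Y, lr=0.1, epochs=200, subset=32):
    """At every iteration, re-rank all patterns by current error and take a
    gradient step only on the `subset` worst ones -- a generic stand-in for
    dynamic sample selection; the selection criterion in [1] differs."""
    for _ in range(epochs):
        _, Yhat = forward(params, X)
        errs = np.sum((Yhat - Y) ** 2, axis=1)
        idx = np.argsort(errs)[-subset:]          # hardest samples right now
        loss = grad_step(params, X[idx], Y[idx], lr)
    return loss


if __name__ == "__main__":
    X = rng.normal(size=(600, 4))
    Y = np.sin(X[:, :1])                          # toy regression task
    params = init_mlp(4, 8, 1)
    n_weights = sum(p.size for p in params.values())
    print(f"pattern-weight ratio: {len(X) / n_weights:.1f}")   # 600 / 49, about 12
    print(f"final subset loss: {train_with_dynamic_selection(params, X, Y):.4f}")
```

Re-ranking the full set costs one extra forward pass per iteration, but each gradient step then operates only on a small subset, which is the regime the abstract describes: the toy setup above has a pattern-weight ratio around 12, close to the value of 15 cited in the paper. The paper derives a principled selection criterion in place of the naive error ranking used here.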
- A paper of the Institute of Electronics, Information and Communication Engineers (IEICE)
- 1998-09-25
Authors
- Usui Shiro (Department of Information and Computer Sciences, Toyohashi University of Technology)
- Geczy Peter (Department of Information and Computer Sciences, Toyohashi University of Technology)
Related Papers
- Mathematical model study of continuous transmitter release in the synaptic terminal of goldfish retinal bipolar cells
- ELECTRICAL INTERACTIONS BETWEEN THE SINOATRIAL NODE AND ATRIAL MUSCLE THROUGH AN EXTERNAL CIRCUIT
- INHOMOGENEOUS CELLULAR ACTIVATION TIME AND Vmax IN NORMAL MYOCARDIAL TISSUE UNDER ELECTRICAL FIELD STIMULATION WITH A LOW POTENTIAL GRADIENT
- On the Statistical Properties of Least Squares Estimators of Layered Neural Networks
- Upper bound of the expected training error of neural network regression for a Gaussian noise sequence
- Dynamic Sample Selection : Theory
- Desensitization of the GABA_A Receptor Shifts the Dynamic Range of Retinal Horizontal Cells Due to Light and Dark Adaptation
- Ionic current model of rabbit retinal horizontal cell
- Sample Selection Algorithm Utilizing Lipschitz Continuity Condition
- RULE EXTRACTION FROM TRAINED ARTIFICIAL NEURAL NETWORKS
- Dynamic Sample Selection: Implementation
- PROBLEM OF RANK-DEFICIENCIES OF A JACOBEAN FOR A NEURAL NETWORK
- Color Discrimination Mechanisms Mediating Visual Search