Neural Learning of Chaotic System Behavior (Special Section on Nonlinear Theory and Its Applications)
Abstract
We introduce recurrent networks that are able to learn chaotic maps, and investigate whether the neural models also capture the dynamical invariants (correlation dimension, largest Lyapunov exponent) of chaotic time series. We show that the dynamical invariants can already be learned by feedforward neural networks, but that recurrent learning improves the dynamical modeling of the time series. We discover a novel type of overtraining which corresponds to the forgetting of the largest Lyapunov exponent during learning, and call this phenomenon dynamical overtraining. Furthermore, we introduce a penalty term that involves a dynamical invariant of the network and avoids dynamical overtraining. As examples we use the Henon map, the logistic map, and a real-world chaotic series that corresponds to the concentration of one of the chemicals as a function of time in experiments on the Belousov-Zhabotinskii reaction in a well-stirred flow reactor.
- Paper published by the Institute of Electronics, Information and Communication Engineers (IEICE)
- 1994-11-25
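
The benchmark maps and the dynamical invariant named in the abstract are easy to reproduce numerically. The Python sketch below is illustrative only and is not the authors' code: it iterates the logistic and Henon maps with commonly used parameter values (r = 4.0, a = 1.4, b = 0.3, assumed here) and estimates the largest Lyapunov exponent directly from the known map equations, via the orbit average of log|f'(x)| for the logistic map and a renormalized tangent vector (Benettin-style) for the Henon map. In the paper this invariant is instead compared between the measured time series and the trained neural models.

```python
import numpy as np


def logistic_series(r=4.0, x0=0.2, n=10_000):
    """Iterate the logistic map x_{n+1} = r * x_n * (1 - x_n)."""
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1.0 - x[i - 1])
    return x


def henon_series(a=1.4, b=0.3, x0=0.0, y0=0.0, n=10_000):
    """Iterate the Henon map x_{n+1} = 1 - a*x_n^2 + y_n, y_{n+1} = b*x_n."""
    xs, ys = np.empty(n), np.empty(n)
    xs[0], ys[0] = x0, y0
    for i in range(1, n):
        xs[i] = 1.0 - a * xs[i - 1] ** 2 + ys[i - 1]
        ys[i] = b * xs[i - 1]
    return xs, ys


def lyapunov_logistic(r=4.0, x0=0.2, n=100_000, transient=1_000):
    """Largest Lyapunov exponent as the orbit average of log|f'(x)|."""
    x, total = x0, 0.0
    for i in range(n + transient):
        x = r * x * (1.0 - x)
        if i >= transient:
            total += np.log(abs(r * (1.0 - 2.0 * x)))
    return total / n


def lyapunov_henon(a=1.4, b=0.3, n=100_000, transient=1_000):
    """Largest Lyapunov exponent via a renormalized tangent vector."""
    x, y = 0.1, 0.1
    v = np.array([1.0, 0.0])
    total = 0.0
    for i in range(n + transient):
        jac = np.array([[-2.0 * a * x, 1.0], [b, 0.0]])  # Jacobian at (x, y)
        x, y = 1.0 - a * x * x + y, b * x                # advance the orbit
        v = jac @ v                                      # advance the tangent vector
        norm = np.linalg.norm(v)
        v /= norm                                        # renormalize to avoid overflow
        if i >= transient:
            total += np.log(norm)
    return total / n


if __name__ == "__main__":
    xs, _ = henon_series(n=5)
    print("first Henon x-values:   ", np.round(xs, 4))
    print("first logistic values:  ", np.round(logistic_series(n=5), 4))
    print("logistic lambda_1 ~ %.3f (exact: ln 2 ~ 0.693)" % lyapunov_logistic())
    print("Henon    lambda_1 ~ %.3f (literature: ~0.42)" % lyapunov_henon())
```

Running the script should give values close to ln 2 ≈ 0.693 for the logistic map and roughly 0.42 for the Henon map, the reference values a dynamically faithful network model would be expected to reproduce.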
Authors
- Deco, Gustavo (Siemens AG, Corporate Research and Development, Munich, Germany)
- Schürmann, Bernd (Siemens AG, Corporate Research and Development, Munich, Germany)