Reinforcement Learning with Orthonormal Basis Adaptation Based on Activity-Oriented Index Allocation
Abstract
An orthonormal basis adaptation method for function approximation was developed and applied to reinforcement learning with a multi-dimensional continuous state space. First, the basis used for linear function approximation of a control function is set to an orthonormal basis. Next, as learning progresses, basis elements with small activities are replaced with other candidate elements. As this replacement is repeated, the number of basis elements with large activities increases. Example chaos control problems for multiple logistic maps were solved, demonstrating that the method can modify the basis in accordance with changes in the environment while preserving orthonormality, thereby improving the performance of reinforcement learning and eliminating the adverse effects of redundant, noisy states.
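The replacement step described above can be sketched as follows. This is a minimal illustration, not the authors' exact algorithm: it assumes an activity score is already available for each basis element, and it restores orthonormality after a swap by Gram-Schmidt projection of the candidate against the retained elements. The function names and the activity measure are assumptions for illustration.

```python
import numpy as np

def orthonormalize_against(v, kept):
    """Project out the components of v along each retained basis row,
    then normalize; returns None if v is (numerically) dependent."""
    for b in kept:
        v = v - (v @ b) * b
    n = np.linalg.norm(v)
    return v / n if n > 1e-12 else None

def adapt_basis(basis, activity, candidates, n_replace=1):
    """Replace the n_replace least-active rows of an orthonormal basis
    (rows of `basis`) with candidates re-orthonormalized against the
    kept rows, so the basis stays orthonormal after each swap."""
    basis = basis.copy()
    order = np.argsort(activity)  # ascending: least active first
    replaced, ci = [], 0
    for idx in order[:n_replace]:
        kept = np.delete(basis, idx, axis=0)
        while ci < len(candidates):
            v = orthonormalize_against(candidates[ci].astype(float), kept)
            ci += 1
            if v is not None:
                basis[idx] = v
                replaced.append(idx)
                break
    return basis, replaced
```

For example, starting from the first two standard basis vectors of R^3 and replacing the less active one with the candidate (1, 1, 1) yields a new second row orthogonal to the first, so the adapted basis remains orthonormal.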
- Paper of the IEICE (Institute of Electronics, Information and Communication Engineers)
- 2008-04-01
Authors
Related Papers
- Approximation and Analysis of Non-linear Equations in a Moment Vector Space(Nonlinear Problems)
- Analysis Based on Moment Vector Equation for Interacting Identical Elements with Nonlinear Dynamics
- Global Nonlinear Optimization Based on Wave Function and Wave Coefficient Equation
- A State Space Compression Method Based on Multivariate Analysis for Reinforcement Learning in High-dimensional Continuous State Spaces