Reinforcement learning accelerated by using state transition model with robotic applications
Overview
This paper discusses a method to accelerate reinforcement learning. First, a concept is defined for reducing the state space while conserving the policy. An algorithm is then given that calculates the optimal cost-to-go and the optimal policy in the reduced space from those in the original space. Using the reduced state space, learning convergence is accelerated. The method's usefulness for both DP (dynamic programming) iteration and Q-learning is compared through a maze example. Convergence of the optimal cost-to-go in the original state space takes approximately N or more times as long as in the reduced state space, where N is the ratio of the number of states in the original space to that in the reduced space. The acceleration effect for Q-learning is more remarkable than that for DP iteration. The proposed technique is also applied to a robot manipulator performing a peg-in-hole task with geometric constraints. The state space reduction can be considered a model of the change of observation, i.e., one of the cognitive actions. The obtained results show that the change of observation is reasonable in terms of learning efficiency.
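The abstract does not spell out the reduction algorithm itself, but the core idea, learning in a smaller, policy-conserving state space converges faster, can be illustrated with a minimal sketch. The corridor task, the `q_learning` helper, and the pair-wise aggregation of adjacent cells below are illustrative assumptions, not the paper's actual construction: tabular Q-learning is run on an original 16-state corridor and on a reduced 8-state version, and the reduced-space policy is then lifted back to the original states.

```python
import random

def q_learning(n_states, goal, episodes=500, alpha=0.5, gamma=0.95, eps=0.2):
    """Tabular Q-learning on a 1-D corridor: actions 0=left, 1=right,
    step cost 1 per move, episode ends on reaching the goal state."""
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != goal:
            # epsilon-greedy action selection
            if random.random() < eps:
                a = random.randrange(2)
            else:
                a = max((0, 1), key=lambda x: Q[s][x])
            s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 0.0 if s2 == goal else -1.0
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

random.seed(0)
# Original space: 16 cells, goal at the right end.
Q_full = q_learning(16, goal=15)
# Reduced space: aggregate adjacent cell pairs -> 8 abstract states.
# For this task the optimal action ("go right") is the same within each
# pair, so the aggregation conserves the optimal policy.
Q_red = q_learning(8, goal=7)
# Lift the reduced policy back: original cell s maps to abstract state s // 2.
policy = [max((0, 1), key=lambda a: Q_red[s // 2][a]) for s in range(16)]
```

The reduced problem has half as many state-action values to estimate, so, consistent with the abstract's ratio N, its value table settles in correspondingly fewer updates; the lifted `policy` still drives every original cell toward the goal.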
- IEEE paper
- 2004-09-00