The Asymptotic Equipartition Property in Reinforcement Learning and its Relation to Return Maximization
Abstract
We discuss an important property of empirical sequences in reinforcement learning, the asymptotic equipartition property. It states that, when the number of time steps is sufficiently large, the typical set of empirical sequences has probability nearly one, all elements of the typical set are nearly equiprobable, and the number of elements in the typical set grows exponentially with the sum of conditional entropies; we refer to this sum as the stochastic complexity. Using this property, we show that return maximization depends on two factors: the stochastic complexity and a quantity determined by the parameters of the environment. Here, return maximization means that the sequences that are best in terms of expected return have probability one. We also examine the sensitivity of the stochastic complexity, which serves as a qualitative guide for tuning the parameters of the action-selection strategy, and give a sufficient condition for return maximization in probability.
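The equipartition claim above can be illustrated numerically. The following is a minimal sketch, not the paper's method: it assumes a toy i.i.d. symbol source (a stand-in for the empirical action-state sequences of the abstract) and checks that the per-step information content -(1/n) log2 p(x_1..x_n) of sampled sequences concentrates around the entropy H, so that typical sequences all have probability close to 2^(-nH). The distribution `p` and all function names are hypothetical choices for the demonstration.

```python
import math
import random

def entropy(p):
    """Shannon entropy (bits) of a distribution given as a probability list."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def empirical_rate(p, n, rng):
    """Sample one length-n i.i.d. sequence from p and return
    -(1/n) * log2 of the probability of the sampled sequence."""
    log_p = 0.0
    for _ in range(n):
        # inverse-CDF sampling of one symbol
        u, acc = rng.random(), 0.0
        for q in p:
            acc += q
            if u <= acc:
                log_p += math.log2(q)
                break
    return -log_p / n

rng = random.Random(0)
p = [0.5, 0.25, 0.25]      # toy source distribution (an assumption)
H = entropy(p)             # 1.5 bits for this choice of p
rates = [empirical_rate(p, 10_000, rng) for _ in range(20)]
avg = sum(rates) / len(rates)
print(f"H = {H:.3f} bits, mean empirical rate = {avg:.3f} bits")
```

For large n the sampled rates cluster tightly around H, which is exactly the sense in which the typical set carries nearly all the probability while its elements are nearly equiprobable; in the paper's setting, H is replaced by the sum of conditional entropies (the stochastic complexity).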
- Elsevier paper