Publication

Exploiting domain symmetries in reinforcement learning with continuous state and action spaces

Conference Article

Conference

IEEE International Conference on Machine Learning and Applications (ICMLA)

Edition

8th

Pages

331-336

Doc link

http://dx.doi.org/10.1109/ICMLA.2009.41

File

Download the pdf document

Abstract

A central problem in Reinforcement Learning is how to deal with large state and action spaces. When the problem domain presents intrinsic symmetries, exploiting them can be key to achieving good performance. We analyze the gains that can be effectively achieved by exploiting different kinds of symmetries, and the effect of combining them, in a test case: the stand-up and stabilization of an inverted pendulum.
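The kind of domain symmetry the abstract refers to can be illustrated with a minimal sketch (not the method used in the paper): for an inverted pendulum, reflecting the state (angle, angular velocity) and the torque about the vertical axis yields an equally valid transition, so each observed sample can be mirrored to augment the learning data. The state encoding and reward shown below are hypothetical assumptions made only for illustration.

```python
import numpy as np

def mirror_transition(state, action, reward, next_state):
    """Reflect a pendulum transition about the vertical axis.

    Assumes state = (angle, angular_velocity) measured from upright and a
    scalar torque action; under this reflection the reward is unchanged
    (hypothetical reward depending only on the magnitude of the angle).
    """
    return -np.asarray(state), -action, reward, -np.asarray(next_state)

def augment_batch(transitions):
    """Double an experience batch by adding the mirrored transitions."""
    augmented = list(transitions)
    for s, a, r, s_next in transitions:
        augmented.append(mirror_transition(s, a, r, s_next))
    return augmented

# Example: one observed transition and its symmetric counterpart
batch = [((0.3, -1.2), 0.5, -(0.3 ** 2), (0.27, -1.1))]
for s, a, r, s2 in augment_batch(batch):
    print(s, a, r, s2)
```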

Categories

intelligent robots, learning (artificial intelligence).

Author keywords

domain symmetries, temporal symmetry, reinforcement learning, continuous state-action spaces

Scientific reference

A. Agostini and E. Celaya. Exploiting domain symmetries in reinforcement learning with continuous state and action spaces, 8th IEEE International Conference on Machine Learning and Applications, 2009, Miami, Florida, pp. 331-336.