Publication
Reinforcement learning with a Gaussian mixture model
Conference Article
Conference
International Joint Conference on Neural Networks (IJCNN)
Edition
2010
Pages
3485-3492
Doc link
http://dx.doi.org/10.1109/IJCNN.2010.5596306
Abstract
Recent approaches to Reinforcement Learning (RL) with function approximation include Neural Fitted Q Iteration and the use of Gaussian Processes. They belong to the class of fitted value iteration algorithms, which use a set of support points to fit the value function in a batch iterative process. These techniques make efficient use of a reduced number of samples by reusing them as needed, and are appropriate for applications where the cost of experiencing a new sample is higher than that of storing and reusing it; however, this comes at the expense of increased computational effort, since these algorithms are not incremental. On the other hand, non-parametric models for function approximation, like Gaussian Processes, are preferred over parametric ones due to their greater flexibility. A further advantage of using Gaussian Processes for function approximation is that they make it possible to quantify the uncertainty of the estimation at each point. In this paper, we propose a new approach for RL in continuous domains based on probability density estimation. Our method combines the best features of the previous methods: it is non-parametric and provides an estimation of the variance of the approximated function at any point of the domain. In addition, our method is simple, incremental, and computationally efficient. All these features make this approach more appealing than Gaussian Processes and fitted value iteration algorithms in general.
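The core idea sketched in the abstract — modelling a joint density over inputs and values with a Gaussian mixture, then reading off both a point estimate and its variance by conditioning — can be illustrated in a few lines. This is a minimal, hypothetical sketch of conditioning a 2-D GMM (over a scalar state x and a value y), not the authors' actual algorithm; the function name and setup are illustrative.

```python
import numpy as np

def gmm_conditional(x, weights, means, covs):
    """Conditional mean and variance of y given x under a 2-D Gaussian
    mixture with components over (x, y).

    weights: (K,) mixture weights; means: (K, 2); covs: (K, 2, 2).
    """
    means = np.asarray(means, dtype=float)
    covs = np.asarray(covs, dtype=float)
    mu_x, mu_y = means[:, 0], means[:, 1]
    sxx, sxy, syy = covs[:, 0, 0], covs[:, 0, 1], covs[:, 1, 1]
    # Responsibility of each component for the query point x
    # (marginal likelihood of x under each component, reweighted).
    lik = np.exp(-0.5 * (x - mu_x) ** 2 / sxx) / np.sqrt(2 * np.pi * sxx)
    r = np.asarray(weights, dtype=float) * lik
    r = r / r.sum()
    # Per-component conditional mean and variance of y given x.
    m = mu_y + sxy / sxx * (x - mu_x)
    v = syy - sxy ** 2 / sxx
    # Mixture conditional mean, and variance via the law of total variance.
    mean = np.dot(r, m)
    var = np.dot(r, v + m ** 2) - mean ** 2
    return mean, var

# Example: two well-separated unit-covariance components at (0, 0) and (5, 5).
w = np.array([0.5, 0.5])
mu = [[0.0, 0.0], [5.0, 5.0]]
S = [np.eye(2), np.eye(2)]
mean0, var0 = gmm_conditional(0.0, w, mu, S)   # dominated by the first component
mean5, var5 = gmm_conditional(5.0, w, mu, S)   # dominated by the second component
```

Because conditioning a GMM is closed-form, updating the mixture incrementally and querying the estimate (with its variance) stays cheap, which is the computational advantage the abstract claims over batch fitted value iteration.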
Categories
generalisation (artificial intelligence), learning (artificial intelligence).
Scientific reference
A. Agostini and E. Celaya. Reinforcement learning with a Gaussian mixture model. 2010 International Joint Conference on Neural Networks (IJCNN), Barcelona, Spain, pp. 3485-3492, IEEE.