Publication

Competitive function approximation for reinforcement learning

Technical Report (2014)

IRI code

IRI-TR-14-05

Authors

A. Agostini
E. Celaya

Abstract

The application of reinforcement learning to problems with continuous domains requires representing the value function by means of function approximation. We identify two aspects of reinforcement learning that make the function approximation process hard: non-stationarity of the target function and biased sampling. Non-stationarity results from the bootstrapping nature of dynamic programming, where the value function is estimated using its own current approximation. Biased sampling occurs when some regions of the state space are visited too often, causing repeated updates with similar values that wash out the occasional updates from infrequently sampled regions.
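For concreteness, here is a minimal sketch (not from the report) of why bootstrapping makes the regression target non-stationary: the TD(0) target for a state uses the current value estimate of the successor state, so every update to the value function shifts the targets of later updates. All names are illustrative.

```python
import numpy as np

def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular TD(0) step. The target r + gamma * V[s_next]
    depends on the current estimate V itself, so it moves as V
    is updated: the function being approximated is non-stationary."""
    target = r + gamma * V[s_next]   # bootstrapped, moving target
    V[s] += alpha * (target - V[s])  # step toward the current target
    return V
```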

We propose a competitive approach to function approximation in which several different local approximators are available at any given input, and the one expected to approximate best is selected by means of a relevance function. The local nature of the approximators allows fast adaptation to non-stationary changes and mitigates the biased sampling problem. The coexistence of multiple approximators, updated and tried in parallel, makes it possible to obtain a good estimate much faster than with a single approximator. Experiments on several benchmark problems show that the competitive strategy yields faster and more stable learning than non-competitive approaches.
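A minimal sketch of the competitive idea as described in the abstract: several local approximators cover each input, a relevance score ranks them, the most relevant one supplies the prediction, and all of them are updated in parallel. The Gaussian proximity relevance and the local linear form below are illustrative assumptions, not the report's exact method (which learns the relevance function).

```python
import numpy as np

class LocalApproximator:
    """Local linear model y ~ w . phi(x), valid near a centre."""
    def __init__(self, centre, dim, width=1.0):
        self.centre, self.width = centre, width
        self.w = np.zeros(dim + 1)          # weights incl. bias term

    def phi(self, x):
        return np.append(x - self.centre, 1.0)

    def relevance(self, x):
        # Illustrative relevance: Gaussian proximity to the centre.
        d = np.linalg.norm(x - self.centre)
        return np.exp(-0.5 * (d / self.width) ** 2)

    def predict(self, x):
        return self.w @ self.phi(x)

    def update(self, x, target, alpha=0.05):
        # Local gradient step, weighted by proximity to the centre.
        err = target - self.predict(x)
        self.w += alpha * self.relevance(x) * err * self.phi(x)

class CompetitiveApproximator:
    """Several local approximators compete: the most relevant one
    at x supplies the prediction; all are updated in parallel."""
    def __init__(self, centres, dim):
        self.models = [LocalApproximator(c, dim) for c in centres]

    def predict(self, x):
        best = max(self.models, key=lambda m: m.relevance(x))
        return best.predict(x)

    def update(self, x, target):
        for m in self.models:      # parallel updates keep every
            m.update(x, target)    # competitor ready to take over
```

Selecting the single most relevant competitor, rather than averaging all of them, is what lets a well-adapted local model take over quickly in regions where others have been washed out by biased sampling.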

Categories

learning (artificial intelligence).

Author keywords

reinforcement learning, competitive strategy, Gaussian mixture model

Scientific reference

A. Agostini and E. Celaya. Competitive function approximation for reinforcement learning. Technical Report IRI-TR-14-05, Institut de Robòtica i Informàtica Industrial, CSIC-UPC, 2014.