Reference:
L. Busoniu, D. Ernst, B. De Schutter, and R. Babuska, "Approximate reinforcement learning: An overview," in Proceedings of the 2011 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL 2011), Paris, France, Apr. 2011, pp. 1-8.
Abstract:
Reinforcement learning (RL) allows agents to learn how to optimally
interact with complex environments. Fueled by recent advances in
approximation-based algorithms, RL has achieved impressive successes
in robotics, artificial intelligence, control, operations research,
and other fields. However, the scarcity of survey papers on
approximate RL makes it difficult for newcomers to grasp this
intricate field. With the present overview, we take a step toward
alleviating this situation. We review methods for approximate RL,
starting from their dynamic programming roots and organizing them
into three major classes: approximate value iteration, policy
iteration, and policy search. Each class is subdivided into
representative categories, highlighting, among others, offline and
online algorithms, policy gradient methods, and simulation-based
techniques. We also compare the different categories of methods and
outline possible ways to enhance the reviewed algorithms.
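
To make the survey's first class, approximate value iteration, concrete, below is a minimal Python sketch of fitted Q-iteration, one representative offline algorithm from that class. The dataset format, discount factor, and the use of scikit-learn's ExtraTreesRegressor as the function approximator are illustrative assumptions, not details taken from the paper.

    # Minimal sketch of fitted Q-iteration, a representative offline
    # approximate value iteration algorithm. Dataset format, regressor
    # choice, and hyperparameters are illustrative assumptions.
    import numpy as np
    from sklearn.ensemble import ExtraTreesRegressor

    def fitted_q_iteration(transitions, actions, gamma=0.95, n_iterations=50):
        """transitions: iterable of (state, action, reward, next_state);
        actions: finite list of candidate actions. Returns the final Q-model."""
        s = np.array([t[0] for t in transitions], dtype=float)
        a = np.array([t[1] for t in transitions], dtype=float)
        r = np.array([t[2] for t in transitions], dtype=float)
        s_next = np.array([t[3] for t in transitions], dtype=float)
        X = np.column_stack([s, a])  # regressor input: (state, action) pairs
        model = None
        for _ in range(n_iterations):
            if model is None:
                targets = r  # first sweep: Q_1(s, a) = immediate reward
            else:
                # Bellman backup: r + gamma * max over a' of Q_k(s', a')
                q_next = np.column_stack([
                    model.predict(np.column_stack(
                        [s_next, np.full(len(s_next), act)]))
                    for act in actions
                ])
                targets = r + gamma * q_next.max(axis=1)
            model = ExtraTreesRegressor(n_estimators=50).fit(X, targets)
        return model

Given the returned model, a greedy policy is obtained by evaluating the model at the current state for each candidate action and taking the arg-max; the regressor is interchangeable with any supervised learner that generalizes well over the state-action space.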