Least-squares policy iteration algorithms for robotics: Online, continuous, and automatic

Stefan R. Friedrich, Michael Schreibauer, Martin Buss

Research output: Contribution to journal › Article › peer-review

6 Scopus citations

Abstract

Reinforcement learning (RL) is a general framework for acquiring intelligent behavior by trial and error, and many successful applications and impressive results have been reported in robotics. Robot control problems are often characterized by the need to learn online, through interaction with the system while it operates, and by continuous state and action spaces. Least-squares policy iteration (LSPI) based approaches are therefore particularly hard to employ in practice, and parameter tuning is a tedious and costly enterprise. To mitigate this problem, we derive an automatic online LSPI algorithm that operates over continuous action spaces and does not require an a priori, hand-tuned value function approximation architecture. To this end, we first show how the kernel least-squares policy iteration algorithm can be modified to handle data online through recursive dictionary and learning update rules. Next, by borrowing sparsification methods from kernel adaptive filtering, the continuous action-space approximation in the online least-squares policy iteration algorithm can be automated efficiently as well. We then propose a similarity-based information extrapolation for the recursive temporal-difference update, so that the dictionary expansion step can be performed efficiently in both algorithms. The performance of the proposed algorithms is compared with that of their batch or hand-tuned counterparts in a simulation study. The novel algorithms require less prior tuning and process data completely on the fly, yet the results indicate that performance similar to careful hand-tuning can be obtained. Engineers from both robotics and AI can therefore benefit from the proposed algorithms whenever an LSPI algorithm faces online data collection and tuning by experiment is costly.
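The abstract mentions online dictionary construction via sparsification methods borrowed from kernel adaptive filtering. A common technique in that family is the approximate-linear-dependence (ALD) test: a new sample is added to the kernel dictionary only if it cannot be well represented as a linear combination of existing dictionary elements in feature space. The sketch below is an illustrative, hypothetical rendering of that idea, not the paper's actual algorithm; the class name, the RBF kernel choice, and the threshold `nu` are all assumptions.

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    """Gaussian (RBF) kernel between two state vectors (illustrative choice)."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return np.exp(-gamma * np.dot(d, d))

class OnlineDictionary:
    """Hypothetical online kernel dictionary with an ALD sparsification test."""

    def __init__(self, nu=0.1, gamma=1.0):
        self.nu = nu          # ALD threshold: larger -> sparser dictionary
        self.gamma = gamma
        self.centers = []     # dictionary elements (e.g. visited states)
        self.Kinv = None      # inverse of the dictionary Gram matrix

    def observe(self, x):
        """Add x to the dictionary only if it is not approximately
        linearly dependent on the current elements. Returns True if added."""
        x = np.asarray(x, dtype=float)
        if not self.centers:
            self.centers.append(x)
            self.Kinv = np.array([[1.0 / rbf(x, x, self.gamma)]])
            return True
        # Kernel values between x and the current dictionary
        k = np.array([rbf(c, x, self.gamma) for c in self.centers])
        a = self.Kinv @ k                      # best linear reconstruction coeffs
        delta = rbf(x, x, self.gamma) - k @ a  # ALD residual (reconstruction error)
        if delta > self.nu:
            # Rank-one block update of the inverse Gram matrix, so no
            # full matrix inversion is needed per sample.
            n = len(self.centers)
            Kinv_new = np.zeros((n + 1, n + 1))
            Kinv_new[:n, :n] = self.Kinv + np.outer(a, a) / delta
            Kinv_new[:n, n] = -a / delta
            Kinv_new[n, :n] = -a / delta
            Kinv_new[n, n] = 1.0 / delta
            self.Kinv = Kinv_new
            self.centers.append(x)
            return True
        return False
```

Fed a stream of states during operation, such a dictionary grows only where the feature space is not yet covered, which is what keeps the recursive least-squares updates tractable online; the recursive temporal-difference machinery of the paper would sit on top of a structure like this.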

Original language: English
Pages (from-to): 72-84
Number of pages: 13
Journal: Engineering Applications of Artificial Intelligence
Volume: 83
DOIs
State: Published - Aug 2019

Keywords

  • Continuous actions
  • Policy iteration
  • Reinforcement learning
  • Robotics
  • Sparsification
