Intelligent Control Systems Conference Paper 2019

Trajectory-Based Off-Policy Deep Reinforcement Learning


Policy gradient methods are powerful reinforcement learning algorithms and have been demonstrated to solve many complex tasks. However, these methods are also data-inefficient, afflicted with high variance gradient estimates, and frequently get stuck in local optima. This work addresses these weaknesses by combining recent improvements in the reuse of off-policy data and exploration in parameter space with deterministic behavioral policies. The resulting objective is amenable to standard neural network optimization strategies like stochastic gradient descent or stochastic gradient Hamiltonian Monte Carlo. Incorporation of previous rollouts via importance sampling greatly improves data-efficiency, whilst stochastic optimization schemes facilitate the escape from local optima. We evaluate the proposed approach on a series of continuous control benchmark tasks. The results show that the proposed algorithm is able to successfully and reliably learn solutions using fewer system interactions than standard policy gradient methods.
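The following sketch illustrates the core idea summarized in the abstract; it is a minimal reading of the text, not the authors' implementation. A Gaussian search distribution over the weights of a deterministic policy provides exploration in parameter space; because the behavioral policies are deterministic, the importance weight of a stored rollout reduces to a ratio of parameter-sampling densities, so past trajectories can be reused; the resulting weighted-return objective is optimized by stochastic gradient descent. All names (rollout_return, ParameterSpaceIS) and hyperparameters are illustrative assumptions.

import torch

def rollout_return(theta, env_fn, horizon=200):
    # Placeholder: run one episode with the deterministic policy parameterized
    # by theta on the environment returned by env_fn and return its total reward.
    raise NotImplementedError

class ParameterSpaceIS:
    def __init__(self, dim, lr=1e-2):
        # Gaussian search distribution N(mu, diag(sigma^2)) over policy parameters.
        self.mu = torch.zeros(dim, requires_grad=True)
        self.log_sigma = torch.zeros(dim, requires_grad=True)
        self.opt = torch.optim.SGD([self.mu, self.log_sigma], lr=lr)
        self.buffer = []  # (theta, return, behavior log-density) from all iterations

    def _log_prob(self, theta):
        dist = torch.distributions.Normal(self.mu, self.log_sigma.exp())
        return dist.log_prob(theta).sum()

    def collect(self, env_fn, n_rollouts=4):
        # Sample policy parameters, roll each out once (deterministic policy),
        # and store the outcome with the density under which it was drawn.
        with torch.no_grad():
            for _ in range(n_rollouts):
                theta = torch.distributions.Normal(self.mu, self.log_sigma.exp()).sample()
                self.buffer.append((theta, rollout_return(theta, env_fn),
                                    self._log_prob(theta).item()))

    def update(self, n_steps=50):
        # Off-policy objective: self-normalized importance-weighted return over
        # all stored rollouts, maximized by stochastic gradient descent.
        for _ in range(n_steps):
            self.opt.zero_grad()
            log_w = torch.stack([self._log_prob(theta) - behavior_logp
                                 for theta, _, behavior_logp in self.buffer])
            rets = torch.tensor([ret for _, ret, _ in self.buffer])
            w = torch.softmax(log_w, dim=0)   # importance weights in parameter space
            loss = -(w * rets).sum()          # negative weighted return
            loss.backward()
            self.opt.step()

In use, one would alternate collect and update, so that each new search distribution is fit against the growing buffer of previously generated rollouts rather than against fresh on-policy data only.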

Authors: Andreas Doerr, Michael Volpp, Marc Toussaint, Sebastian Trimpe, Christian Daniel
Links: arXiv, PDF
Book Title: Proceedings of the International Conference on Machine Learning (ICML)
Year: 2019
Month: June
Bibtex Type: Conference Paper (inproceedings)
Event Name: International Conference on Machine Learning (ICML)
Event Place: Long Beach, CA, USA
State: Published

BibTeX

@inproceedings{doerr2019trajectory,
  title = {Trajectory-Based Off-Policy Deep Reinforcement Learning},
  booktitle = {Proceedings of the International Conference on Machine Learning (ICML)},
  abstract = {Policy gradient methods are powerful reinforcement learning algorithms and have been demonstrated to solve many complex tasks. However, these methods are also data-inefficient, afflicted with high variance gradient estimates, and frequently get stuck in local optima. This work addresses these weaknesses by combining recent improvements in the reuse of off-policy data and exploration in parameter space with deterministic behavioral policies. The resulting objective is amenable to standard neural network optimization strategies like stochastic gradient descent or stochastic gradient Hamiltonian Monte Carlo. Incorporation of previous rollouts via importance sampling greatly improves data-efficiency, whilst stochastic optimization schemes facilitate the escape from local optima. We evaluate the proposed approach on a series of continuous control benchmark tasks. The results show that the proposed algorithm is able to successfully and reliably learn solutions using fewer system interactions than standard policy gradient methods.},
  month = jun,
  year = {2019},
  slug = {doerr2019trajectory},
  author = {Doerr, Andreas and Volpp, Michael and Toussaint, Marc and Trimpe, Sebastian and Daniel, Christian},
  month_numeric = {6}
}