Empirical Inference Conference Paper 2009

Predictive Representations for Policy Gradient in POMDPs

We consider the problem of estimating the policy gradient in Partially Observable Markov Decision Processes (POMDPs) with a special class of policies that are based on Predictive State Representations (PSRs). We compare PSR policies to Finite-State Controllers (FSCs), which are considered a standard model for policy gradient methods in POMDPs. We present a general Actor-Critic algorithm for learning both FSCs and PSR policies. The critic computes a value function whose variables are the parameters of the policy; these parameters are gradually updated to maximize the value function. We show that the value function is polynomial in the parameters for both FSCs and PSR policies, with a potentially smaller degree in the case of PSR policies. Therefore, the value function of a PSR policy can have fewer local optima than that of the equivalent FSC, and consequently the gradient algorithm is more likely to converge to a globally optimal solution.
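For readers unfamiliar with the actor-critic scheme described in the abstract, the toy snippet below illustrates the general idea of gradient ascent on a value function of the policy parameters. It is a minimal sketch, not the authors' algorithm: the quadratic policy_value stand-in, the finite-difference gradient, and all variable names are assumptions made purely for illustration.

# Minimal sketch of gradient ascent on a policy value function V(theta).
# Purely illustrative of the actor-critic idea in the abstract; NOT the
# paper's algorithm. policy_value and its toy quadratic form are hypothetical.
import numpy as np

def policy_value(theta):
    # Toy stand-in for the critic: a smooth function of the policy
    # parameters (the paper shows the true value function is polynomial
    # in these parameters for both FSC and PSR policies).
    return -np.sum((theta - 1.0) ** 2)

def numerical_gradient(f, theta, eps=1e-5):
    # Finite-difference estimate of the gradient of the value function.
    grad = np.zeros_like(theta)
    for i in range(len(theta)):
        d = np.zeros_like(theta)
        d[i] = eps
        grad[i] = (f(theta + d) - f(theta - d)) / (2 * eps)
    return grad

theta = np.zeros(4)           # policy parameters (the "actor")
for step in range(200):       # gradient ascent toward a (local) optimum
    theta += 0.1 * numerical_gradient(policy_value, theta)

print(theta)  # ends near the maximizer of the toy value function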

Author(s): Boularias, A. and Chaib-Draa, B.
Book Title: ICML 2009
Journal: Proceedings of the 26th International Conference on Machine Learning (ICML 2009)
Pages: 65-72
Year: 2009
Month: June
Editors: Danyluk, A., Bottou, L. and Littman, M.
Publisher: ACM Press
Bibtex Type: Conference Paper (inproceedings)
Address: New York, NY, USA
DOI: 10.1145/1553374.1553383
Event Name: 26th International Conference on Machine Learning
Event Place: Montreal, Canada
Language: en
Organization: Max-Planck-Gesellschaft
School: Biologische Kybernetik

BibTex

@inproceedings{6827,
  title = {Predictive Representations for Policy Gradient in POMDPs},
  journal = {Proceedings of the 26th International Conference on Machine Learning (ICML 2009)},
  booktitle = {ICML 2009},
  abstract = {We consider the problem of estimating the policy gradient in Partially Observable Markov Decision Processes (POMDPs) with a special class of policies that are based on Predictive State Representations (PSRs). We compare PSR policies to Finite-State Controllers (FSCs), which are considered a standard model for policy gradient methods in POMDPs. We present a general Actor-Critic algorithm for learning both FSCs and PSR policies. The critic computes a value function whose variables are the parameters of the policy; these parameters are gradually updated to maximize the value function. We show that the value function is polynomial in the parameters for both FSCs and PSR policies, with a potentially smaller degree in the case of PSR policies. Therefore, the value function of a PSR policy can have fewer local optima than that of the equivalent FSC, and consequently the gradient algorithm is more likely to converge to a globally optimal solution.},
  pages = {65--72},
  editors = {Danyluk, A. and Bottou, L. and Littman, M.},
  publisher = {ACM Press},
  organization = {Max-Planck-Gesellschaft},
  school = {Biologische Kybernetik},
  address = {New York, NY, USA},
  month = jun,
  year = {2009},
  slug = {6827},
  author = {Boularias, A. and Chaib-Draa, B.},
  month_numeric = {6}
}