
How to Train Your Differentiable Filter

State estimation

In many robotic applications, it is crucial to maintain a belief about the state of a system. These state estimates serve as input for planning and decision making and provide feedback during task execution. Recursive Bayesian filtering algorithms address the state estimation problem, but they require models of process dynamics and sensory observations as well as the noise characteristics of these models. Recently, multiple works have demonstrated that these models can be learned by end-to-end training through differentiable versions of recursive filtering algorithms. The aim of this work is to improve the understanding and applicability of such differentiable filters (DFs). We implement DFs with four different underlying filtering algorithms and compare them in extensive experiments. We find that sufficiently long training sequences are crucial for DF performance and that modelling heteroscedastic observation noise significantly improves results. While the different DFs perform similarly on our example task, we recommend the differentiable Extended Kalman Filter for getting started due to its simplicity.
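
To make the recommended differentiable Extended Kalman Filter concrete, the following is a minimal sketch of one possible PyTorch formulation with a learned process model, a learned observation model, and a heteroscedastic observation-noise head. The module name, network sizes, and noise parameterisation are illustrative assumptions, not the implementation evaluated in the paper.

# Minimal differentiable EKF sketch (PyTorch). Layer sizes and the noise
# parameterisation are illustrative assumptions, not the paper's architecture.
import torch
import torch.nn as nn
from torch.autograd.functional import jacobian

class DifferentiableEKF(nn.Module):
    def __init__(self, state_dim, obs_dim, hidden=64):
        super().__init__()
        # Learned process model f(x): predicts the next state.
        self.f = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, state_dim))
        # Learned observation model h(x): maps a state to an expected observation.
        self.h = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, obs_dim))
        # Heteroscedastic observation noise: log-variances predicted per observation.
        self.obs_lognoise = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                                          nn.Linear(hidden, obs_dim))
        # Constant learned process-noise log-variances.
        self.proc_lognoise = nn.Parameter(torch.zeros(state_dim))

    def step(self, mu, Sigma, z):
        # Prediction: linearize the learned dynamics at the current mean.
        F = jacobian(self.f, mu, create_graph=True)
        mu_pred = self.f(mu)
        Q = torch.diag(self.proc_lognoise.exp())
        Sigma_pred = F @ Sigma @ F.T + Q
        # Update: linearize the learned observation model and apply the Kalman gain.
        H = jacobian(self.h, mu_pred, create_graph=True)
        R = torch.diag(self.obs_lognoise(z).exp())   # noise depends on the observation
        S = H @ Sigma_pred @ H.T + R
        K = Sigma_pred @ H.T @ torch.linalg.inv(S)
        mu_new = mu_pred + K @ (z - self.h(mu_pred))
        Sigma_new = (torch.eye(mu.shape[0]) - K @ H) @ Sigma_pred
        return mu_new, Sigma_new

    def forward(self, mu0, Sigma0, observations):
        # Filter a whole sequence; gradients flow through every step, so the
        # length of the training sequences directly shapes what the models learn.
        mu, Sigma = mu0, Sigma0
        means = []
        for z in observations:            # observations: tensor of shape (T, obs_dim)
            mu, Sigma = self.step(mu, Sigma, z)
            means.append(mu)
        return torch.stack(means)

A training loop would roll such a filter over ground-truth trajectories and minimise, for example, the mean squared error or negative log-likelihood between the filtered means and the true states, which is where the reported sensitivity to training-sequence length enters.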

Author(s): Alina Kloss, Georg Martius, Jeannette Bohg
Year: 2020
Month: July
Bibtex Type: Conference Paper (inproceedings)

BibTeX

@inproceedings{kloss_rss_ws_2020,
  title = {How to Train Your Differentiable Filter},
  abstract = {In many robotic applications, it is crucial to maintain a belief about the state of a system. These state estimates serve as input for planning and decision making and provide feedback during task execution. Recursive Bayesian filtering algorithms address the state estimation problem, but they require models of process dynamics and sensory observations as well as the noise characteristics of these models. Recently, multiple works have demonstrated that these models can be learned by end-to-end training through differentiable versions of recursive filtering algorithms. The aim of this work is to improve the understanding and applicability of such differentiable filters (DFs). We implement DFs with four different underlying filtering algorithms and compare them in extensive experiments. We find that sufficiently long training sequences are crucial for DF performance and that modelling heteroscedastic observation noise significantly improves results. While the different DFs perform similarly on our example task, we recommend the differentiable Extended Kalman Filter for getting started due to its simplicity.},
  month = jul,
  year = {2020},
  slug = {kloss_rss_ws_2020},
  author = {Alina Kloss and Georg Martius and Jeannette Bohg},
  month_numeric = {7}
}