
Members

  • Senior Research Scientist (Empirical Inference, Autonomous Learning)
  • Doctoral Researcher (Autonomous Learning)
  • Doctoral Researcher (Autonomous Learning)

Publications

Autonomous Learning Conference Paper Self-supervised Reinforcement Learning with Independently Controllable Subgoals Zadaianchuk, A., Martius, G., Yang, F. In Proceedings of the 5th Conference on Robot Learning (CoRL 2021), PMLR 164:384-394, November 2021 (Published)
To successfully tackle challenging manipulation tasks, autonomous agents must learn a diverse set of skills and how to combine them. Recently, self-supervised agents that set their own abstract goals by exploiting the discovered structure in the environment were shown to perform well on many different tasks. In particular, some of them were applied to learn basic manipulation skills in compositional multi-object environments. However, these methods learn skills without taking the dependencies between objects into account. Thus, the learned skills are difficult to combine in realistic environments. We propose a novel self-supervised agent that estimates relations between environment components and uses them to independently control different parts of the environment state. In addition, the estimated relations between objects can be used to decompose a complex goal into a compatible sequence of subgoals. We show that, by using this framework, an agent can efficiently and automatically learn manipulation tasks in multi-object environments with different relations between objects.
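The decomposition step described above lends itself to a short illustration. Below is a minimal Python sketch of turning an estimated object-dependency relation into a compatible subgoal ordering via a topological sort; the data format, the function name, and the use of graphlib are illustrative assumptions, not the paper's implementation.

    from graphlib import TopologicalSorter  # standard library, Python 3.9+

    def decompose_goal(goal, dependencies):
        """Order per-object subgoals so that each object is manipulated
        only after the objects its placement depends on (assumed format)."""
        # dependencies maps object id -> set of object ids it depends on;
        # a topological sort puts prerequisites first.
        order = TopologicalSorter(dependencies).static_order()
        return [(obj, goal[obj]) for obj in order if obj in goal]

    # Hypothetical example: the cube's goal position is on top of the tray,
    # so the tray subgoal must be achieved first.
    goal = {"tray": (0.5, 0.0), "cube": (0.5, 0.1)}
    dependencies = {"cube": {"tray"}, "tray": set()}
    print(decompose_goal(goal, dependencies))
    # [('tray', (0.5, 0.0)), ('cube', (0.5, 0.1))]

In the paper the relations are estimated from interaction with the environment; here the dependency graph is given by hand for brevity.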

Autonomous Learning Conference Paper Self-supervised Visual Reinforcement Learning with Object-centric Representations Zadaianchuk*, A., Seitzer*, M., Martius, G. In 9th International Conference on Learning Representations (ICLR 2021), spotlight, May 2021 (*equal contribution)
Autonomous agents need large repertoires of skills to act reasonably on new tasks that they have not seen before. However, acquiring these skills using only a stream of high-dimensional, unstructured, and unlabeled observations is a tricky challenge for any autonomous agent. Previous methods have used variational autoencoders to encode a scene into a low-dimensional vector that can be used as a goal for an agent to discover new skills. Nevertheless, in compositional/multi-object environments it is difficult to disentangle all the factors of variation into such a fixed-length representation of the whole scene. We propose to use object-centric representations as a modular and structured observation space, which is learned with a compositional generative world model. We show that the structure in the representations in combination with goal-conditioned attention policies helps the autonomous agent to discover and learn useful skills. These skills can be further combined to address compositional tasks like the manipulation of several different objects.
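To make the goal-conditioned attention ingredient concrete, here is a minimal PyTorch sketch in which a goal slot attends over per-object slots and the attended context drives an action head. The class name, layer sizes, and the scaled dot-product attention form are assumptions for illustration, not the architecture used in the paper.

    import torch
    import torch.nn as nn

    class GoalConditionedAttentionPolicy(nn.Module):
        """Illustrative sketch: the goal slot queries the object slots."""

        def __init__(self, slot_dim, action_dim):
            super().__init__()
            self.query = nn.Linear(slot_dim, slot_dim)  # embeds the goal
            self.key = nn.Linear(slot_dim, slot_dim)    # embeds each slot
            self.head = nn.Sequential(
                nn.Linear(2 * slot_dim, 128), nn.ReLU(),
                nn.Linear(128, action_dim),
            )

        def forward(self, slots, goal):
            # slots: (batch, num_objects, slot_dim); goal: (batch, slot_dim)
            q = self.query(goal).unsqueeze(1)                     # (B, 1, D)
            k = self.key(slots)                                   # (B, N, D)
            attn = torch.softmax((q * k).sum(-1) / k.shape[-1] ** 0.5, dim=-1)
            context = (attn.unsqueeze(-1) * slots).sum(dim=1)     # (B, D)
            return self.head(torch.cat([context, goal], dim=-1))  # (B, A)

    # Usage with random inputs: 8 scenes, 5 object slots of dimension 32.
    policy = GoalConditionedAttentionPolicy(slot_dim=32, action_dim=4)
    action = policy(torch.randn(8, 5, 32), torch.randn(8, 32))

Because the policy reads a fixed-size attended summary rather than a fixed-length encoding of the whole scene, the same weights apply regardless of how many objects are present, which is the modularity the abstract argues for.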