2015

Model-Based Strategy Selection Learning

Lieder, F., Griffiths, T. L.

The 2nd Multidisciplinary Conference on Reinforcement Learning and Decision Making, 2015 (article)

Abstract
Humans possess a repertoire of decision strategies. This raises the question of how we decide how to decide. Behavioral experiments suggest that the answer includes metacognitive reinforcement learning: rewards reinforce not only our behavior but also the cognitive processes that lead to it. Previous theories of strategy selection, namely SSL and RELACS, assumed that model-free reinforcement learning identifies the cognitive strategy that works best on average across all problems in the environment. Here we explore the alternative: model-based reinforcement learning about how the differential effectiveness of cognitive strategies depends on the features of individual problems. Our theory posits that people learn a predictive model of each strategy’s accuracy and execution time and choose strategies according to their predicted speed-accuracy tradeoff for the problem to be solved. We evaluate our theory against previous accounts by fitting published data on multi-attribute decision making, conducting a novel experiment, and demonstrating that our theory can account for people’s adaptive flexibility in risky choice. We find that while SSL and RELACS are sufficient to explain people’s ability to adapt to a homogeneous environment in which all decision problems are of the same type, model-based strategy selection learning can also explain people’s ability to adapt to heterogeneous environments and flexibly switch to a different decision strategy when the situation changes.
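
To make the proposed mechanism concrete, the following is a minimal sketch of model-based strategy selection under simplifying assumptions that go beyond the abstract: linear predictors of each strategy's accuracy and execution time from problem features, and a fixed opportunity cost of time. The class name, model form, and hyperparameters are illustrative, not the paper's specification.

```python
import numpy as np

class StrategySelector:
    """Illustrative sketch: learn per-strategy predictors of accuracy and
    execution time from problem features, then choose the strategy with the
    best predicted speed-accuracy tradeoff. (Linear models and the fixed
    time cost are assumptions made for this sketch.)"""

    def __init__(self, n_strategies, n_features, time_cost=0.1, lr=0.05):
        self.w_acc = np.zeros((n_strategies, n_features))   # accuracy model
        self.w_time = np.zeros((n_strategies, n_features))  # time model
        self.time_cost = time_cost  # opportunity cost per unit of time
        self.lr = lr

    def choose(self, features):
        # Predicted payoff = predicted accuracy - cost of predicted time.
        scores = self.w_acc @ features - self.time_cost * (self.w_time @ features)
        return int(np.argmax(scores))

    def update(self, strategy, features, accuracy, exec_time):
        # One gradient step on the squared prediction error of each model.
        err_acc = accuracy - self.w_acc[strategy] @ features
        err_time = exec_time - self.w_time[strategy] @ features
        self.w_acc[strategy] += self.lr * err_acc * features
        self.w_time[strategy] += self.lr * err_time * features
```

Because the predictions are conditioned on problem features rather than collapsed into a single environment-wide average per strategy, such a learner can keep adapting when the mix of problem types changes, which is the behavior the abstract contrasts with SSL and RELACS.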


link (url) Project Page [BibTex]

The optimism bias may support rational action

Lieder, F., Goel, S., Kwan, R., Griffiths, T. L.

NIPS 2015 Workshop on Bounded Optimality and Rational Metareasoning, 2015 (article)


[BibTex]

Rational use of cognitive resources: Levels of analysis between the computational and the algorithmic

Griffiths, T. L., Lieder, F., Goodman, N. D.

Topics in Cognitive Science, 7(2):217-229, Wiley, 2015 (article)


[BibTex]

Novel plasticity rule can explain the development of sensorimotor intelligence

Der, R., Martius, G.

Proceedings of the National Academy of Sciences, 112(45):E6224-E6232, 2015 (article)

Abstract
Grounding autonomous behavior in the nervous system is a fundamental challenge for neuroscience. In particular, self-organized behavioral development provides more questions than answers. Are there special functional units for curiosity, motivation, and creativity? This paper argues that these features can be grounded in synaptic plasticity itself, without requiring any higher-level constructs. We propose differential extrinsic plasticity (DEP) as a new synaptic rule for self-learning systems and apply it to a number of complex robotic systems as a test case. Without specifying any purpose or goal, seemingly purposeful and adaptive rhythmic behavior is developed, displaying a certain level of sensorimotor intelligence. These surprising results require no system-specific modifications of the DEP rule. They rather arise from the underlying mechanism of spontaneous symmetry breaking, which is due to the tight brain-body-environment coupling. The new synaptic rule is biologically plausible and would be an interesting target for neurobiological investigation. We also argue that this neuronal mechanism may have been a catalyst in natural evolution.
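
As a rough illustration of the kind of rule described here, the sketch below implements a DEP-flavored update under strong simplifying assumptions: a one-layer tanh controller, an identity inverse model (each motor paired with one sensor), a one-step delay on the sensor derivative, and a decay term for boundedness. This is a hedged sketch, not the paper's exact formulation.

```python
import numpy as np

def controller(C, h, x):
    """One-layer closed-loop controller y = tanh(C x + h)."""
    return np.tanh(C @ x + h)

def dep_update(C, dx_new, dx_old, eta=0.01):
    """DEP-flavored plasticity step (illustrative only).

    dx_new : current sensor derivative; with the identity inverse model
             assumed here, it stands in for the motor change that caused it.
    dx_old : time-delayed sensor derivative.
    """
    dC = np.outer(dx_new, dx_old)  # correlate derivatives, Hebbian-style
    return C + eta * (dC - C)      # decay keeps the weights bounded
```

The intended departure from classical differential Hebbian learning is that the postsynaptic derivative is replaced by a quantity derived from the sensor dynamics, so the update is driven by the loop through body and environment rather than by the controller's output alone.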


link (url) DOI Project Page [BibTex]

Quantifying Emergent Behavior of Autonomous Robots

Martius, G., Olbrich, E.

Entropy, 17(10):7266, 2015 (article)


link (url) DOI [BibTex]

Deep Graph Matching via Blackbox Differentiation of Combinatorial Solvers

Rolinek, M., Swoboda, P., Zietlow, D., Paulus, A., Musil, V., Martius, G.

arXiv (article)

Abstract
Building on recent progress at the intersection of combinatorial optimization and deep learning, we propose an end-to-end trainable architecture for deep graph matching that contains unmodified combinatorial solvers. Using the presence of heavily optimized combinatorial solvers together with some improvements in architecture design, we advance state-of-the-art on deep graph matching benchmarks for keypoint correspondence. In addition, we highlight the conceptual advantages of incorporating solvers into deep learning architectures, such as the possibility of post-processing with a strong multi-graph matching solver or the indifference to changes in the training setting. Finally, we propose two new challenging experimental setups.
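
The gradient scheme such an architecture relies on, blackbox differentiation of combinatorial solvers (the scheme of Vlastelica et al., ICLR 2020, on which this work builds), admits a compact sketch: run the solver unmodified in the forward pass, and in the backward pass call it once more on costs perturbed by the incoming gradient, returning a finite-difference gradient of an implicit piecewise-linear interpolation. The PyTorch wrapper below is a sketch under the assumption that `solver` maps a cost vector to a discrete minimizer of a linear objective; `lam` is the method's interpolation hyperparameter.

```python
import torch

class BlackboxSolverFn(torch.autograd.Function):
    """Sketch of blackbox differentiation of a combinatorial solver.

    `solver(w)` is assumed to return y = argmin_y <w, y> over a discrete
    feasible set (e.g., matchings); w and y have matching shapes.
    """

    @staticmethod
    def forward(ctx, w, solver, lam):
        # Forward pass: call the unmodified combinatorial solver.
        y = torch.as_tensor(solver(w.detach().cpu().numpy()),
                            dtype=w.dtype, device=w.device)
        ctx.save_for_backward(w, y)
        ctx.solver, ctx.lam = solver, lam
        return y

    @staticmethod
    def backward(ctx, grad_output):
        w, y = ctx.saved_tensors
        # Perturb the costs in the direction of the incoming gradient
        # and solve once more.
        w_prime = w + ctx.lam * grad_output
        y_lam = torch.as_tensor(ctx.solver(w_prime.detach().cpu().numpy()),
                                dtype=w.dtype, device=w.device)
        # Finite-difference gradient of the implicit piecewise-linear
        # interpolation of the solver's piecewise-constant map.
        return -(y - y_lam) / ctx.lam, None, None
```

Larger values of `lam` trade gradient faithfulness for informativeness; this is the single hyperparameter the scheme exposes.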


arXiv [BibTex]