

2015


Exciting Engineered Passive Dynamics in a Bipedal Robot

Renjewski, D., Spröwitz, A., Peekema, A., Jones, M., Hurst, J.

IEEE Transactions on Robotics, 31(5):1244-1251, IEEE, New York, NY, 2015 (article)

Abstract
A common approach in designing legged robots is to build fully actuated machines and control the machine dynamics entirely in software, carefully avoiding impacts and expending a lot of energy. However, these machines are outperformed by their human and animal counterparts. Animals achieve their impressive agility, efficiency, and robustness through a close integration of passive dynamics, implemented through mechanical components, and neural control. Robots can benefit from this same integrated approach, but a strong theoretical framework is required to design the passive dynamics of a machine and exploit them for control. For this framework, we use a bipedal spring–mass model, which has been shown to approximate the dynamics of human locomotion. This paper reports the first implementation of spring–mass walking on a bipedal robot. We present the use of template dynamics as a control objective exploiting the engineered passive spring–mass dynamics of the ATRIAS robot. The results highlight the benefits of combining passive dynamics with dynamics-based control and open up a library of spring–mass model-based control strategies for dynamic gait control of robots.
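
As a rough illustration of the template named in the abstract, the following Python sketch integrates the single-support phase of a planar spring–mass (SLIP) walker: a point-mass center of mass on a massless prismatic spring leg. This is the generic template model, not the paper's controller, and all parameter values are illustrative placeholders rather than ATRIAS's.

    import numpy as np

    m, k, l0, g = 80.0, 20000.0, 1.0, 9.81  # mass [kg], leg stiffness [N/m], rest length [m], gravity [m/s^2]
    foot = np.array([0.0, 0.0])             # stance-foot position, fixed during stance
    r = np.array([-0.1, 0.97])              # center-of-mass position [m]
    v = np.array([1.2, 0.0])                # center-of-mass velocity [m/s]

    dt = 1e-4
    for _ in range(10000):                  # integrate up to 1 s of stance
        leg = r - foot
        l = np.linalg.norm(leg)
        # Linear spring force along the leg axis plus gravity on the point mass.
        f = k * (l0 - l) * (leg / l) + m * np.array([0.0, -g])
        v += (f / m) * dt                   # semi-implicit Euler step
        r += v * dt
        if np.linalg.norm(r - foot) >= l0:  # leg back at rest length: take-off
            break

    print("CoM state at take-off:", r, v)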



Model-Based Strategy Selection Learning

Lieder, F., Griffiths, T. L.

The 2nd Multidisciplinary Conference on Reinforcement Learning and Decision Making, 2015 (article)

Abstract
Humans possess a repertoire of decision strategies. This raises the question of how we decide how to decide. Behavioral experiments suggest that the answer includes metacognitive reinforcement learning: rewards reinforce not only our behavior but also the cognitive processes that lead to it. Previous theories of strategy selection, namely SSL and RELACS, assumed that model-free reinforcement learning identifies the cognitive strategy that works best on average across all problems in the environment. Here we explore the alternative: model-based reinforcement learning about how the differential effectiveness of cognitive strategies depends on the features of individual problems. Our theory posits that people learn a predictive model of each strategy's accuracy and execution time and choose strategies according to their predicted speed-accuracy tradeoff for the problem to be solved. We evaluate our theory against previous accounts by fitting published data on multi-attribute decision making, conducting a novel experiment, and demonstrating that our theory can account for people's adaptive flexibility in risky choice. We find that while SSL and RELACS are sufficient to explain people's ability to adapt to a homogeneous environment in which all decision problems are of the same type, model-based strategy selection learning can also explain people's ability to adapt to heterogeneous environments and flexibly switch to a different decision strategy when the situation changes.
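
The selection rule the abstract describes can be sketched as follows: learn per-strategy predictors of accuracy and execution time from problem features, then choose the strategy with the best predicted speed-accuracy tradeoff. The linear delta-rule predictors and the time-cost constant below are illustrative assumptions, not the authors' fitted model.

    import numpy as np

    class StrategyModel:
        """Predicts one strategy's accuracy and execution time from problem features."""
        def __init__(self, n_features, lr=0.05):
            self.w_acc = np.zeros(n_features)   # weights predicting accuracy
            self.w_time = np.zeros(n_features)  # weights predicting execution time
            self.lr = lr

        def predict(self, x):
            return self.w_acc @ x, self.w_time @ x

        def update(self, x, accuracy, time):
            # Delta-rule updates of both predictive models after executing the strategy.
            self.w_acc += self.lr * (accuracy - self.w_acc @ x) * x
            self.w_time += self.lr * (time - self.w_time @ x) * x

    def select_strategy(models, x, time_cost=0.1):
        # Score each strategy by its predicted speed-accuracy tradeoff:
        # predicted accuracy minus the opportunity cost of predicted time.
        scores = [acc - time_cost * t for acc, t in (m.predict(x) for m in models)]
        return int(np.argmax(scores))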



The optimism bias may support rational action

Lieder, F., Goel, S., Kwan, R., Griffiths, T. L.

NIPS 2015 Workshop on Bounded Optimality and Rational Metareasoning, 2015 (article)



Rational use of cognitive resources: Levels of analysis between the computational and the algorithmic

Griffiths, T. L., Lieder, F., Goodman, N. D.

Topics in Cognitive Science, 7(2):217-229, Wiley, 2015 (article)


2012


The Balancing Cube: A Dynamic Sculpture as Test Bed for Distributed Estimation and Control

Trimpe, S., D’Andrea, R.

IEEE Control Systems Magazine, 32(6):48-75, December 2012 (article)



Burn-in, bias, and the rationality of anchoring

Lieder, F., Griffiths, T. L., Goodman, N. D.

Advances in Neural Information Processing Systems 25, pages: 2699-2707, 2012 (article)

Abstract
Bayesian inference provides a unifying framework for addressing problems in machine learning, artificial intelligence, and robotics, as well as the problems facing the human mind. Unfortunately, exact Bayesian inference is intractable in all but the simplest models. Therefore minds and machines have to approximate Bayesian inference. Approximate inference algorithms can achieve a wide range of time-accuracy tradeoffs, but what is the optimal tradeoff? We investigate time-accuracy tradeoffs using the Metropolis-Hastings algorithm as a metaphor for the mind's inference algorithm(s). We find that reasonably accurate decisions are possible long before the Markov chain has converged to the posterior distribution, i.e., during the period known as burn-in. Therefore the strategy that is optimal subject to the mind's bounded processing speed and opportunity costs may perform so few iterations that the resulting samples are biased towards the initial value. The resulting cognitive process model provides a rational basis for the anchoring-and-adjustment heuristic. The model's quantitative predictions are tested against published data on anchoring in numerical estimation tasks. Our theoretical and empirical results suggest that the anchoring bias is consistent with approximate Bayesian inference.
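
The core argument, that stopping a Metropolis-Hastings chain during burn-in yields estimates biased toward its initial value (an anchoring effect), can be reproduced with a toy simulation. The Gaussian target, step size, and iteration counts below are arbitrary choices for illustration, not the paper's fitted parameters.

    import numpy as np

    rng = np.random.default_rng(0)

    def log_posterior(x):
        return -0.5 * (x - 10.0) ** 2        # toy Gaussian posterior with mean 10

    def mh_estimate(anchor, n_iter, step=0.5):
        # Run Metropolis-Hastings from the anchor and average all samples,
        # including burn-in, as the boundedly rational estimator would.
        x, samples = anchor, []
        for _ in range(n_iter):
            prop = x + step * rng.standard_normal()
            if np.log(rng.random()) < log_posterior(prop) - log_posterior(x):
                x = prop                     # accept the proposal
            samples.append(x)
        return np.mean(samples)

    for n in (5, 50, 5000):
        est = np.mean([mh_estimate(anchor=0.0, n_iter=n) for _ in range(200)])
        print(f"{n:5d} iterations: mean estimate {est:6.2f} (posterior mean 10.0)")

With few iterations the average estimate stays near the anchor at 0; only with many iterations does it approach the posterior mean, mirroring the bias-versus-time tradeoff the abstract describes.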


2008


Learning to Move in Modular Robots using Central Pattern Generators and Online Optimization

Spröwitz, A., Moeckel, R., Maye, J., Ijspeert, A. J.

The International Journal of Robotics Research, 27(3-4):423-443, 2008 (article)

Abstract
This article addresses the problem of how modular robotics systems, i.e., systems composed of multiple modules that can be configured into different robotic structures, can learn to locomote. In particular, we tackle two problems: online learning, that is, learning while moving, and dealing with unknown, arbitrary robotic structures. We propose a framework for learning locomotion controllers based on two components: a central pattern generator (CPG) and a gradient-free optimization algorithm referred to as Powell's method. The CPG is implemented as a system of coupled nonlinear oscillators in our YaMoR modular robotic system, with one oscillator per module. The nonlinear oscillators are coupled together across modules using Bluetooth communication to obtain specific gaits, i.e., synchronized patterns of oscillations among modules. Online learning involves running the Powell optimization algorithm in parallel with the CPG model, with the speed of locomotion being the criterion to be optimized. Notable aspects of the optimization are that it is carried out online, that it is fast, and that the robots do not need to be stopped or reset. We present results showing the interesting properties of this framework for a modular robotic system. In particular, our CPG model can readily be implemented in a distributed system, it is computationally cheap, it exhibits limit cycle behavior (temporary perturbations are rapidly forgotten), it produces smooth trajectories even when control parameters are abruptly changed, and it is robust against imperfect communication among modules. We also present results of learning to move with three different robot structures. Interesting locomotion modes are obtained after running the optimization for less than 60 minutes.
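
A minimal sketch of the kind of CPG described here, a chain of coupled phase oscillators with first-order amplitude dynamics and one oscillator per module, is given below in Python. The gains, coupling topology, and desired phase lag are illustrative, not the YaMoR values. In the paper, parameters of this sort are tuned online by Powell's method with locomotion speed as the objective; offline, a call such as scipy.optimize.minimize(..., method='Powell') would play the analogous role.

    import numpy as np

    n = 4                                   # number of modules / oscillators
    nu, a = 1.0, 5.0                        # intrinsic frequency [Hz], amplitude gain
    R = np.full(n, 0.6)                     # desired oscillation amplitudes [rad]
    phase_lag = np.pi / 2                   # desired phase difference between neighbors
    theta, r = np.zeros(n), np.zeros(n)

    dt = 1e-3
    for _ in range(int(10 / dt)):           # simulate 10 s
        dtheta = 2 * np.pi * nu * np.ones(n)
        for i in range(n):
            for j in (i - 1, i + 1):        # nearest-neighbor phase coupling
                if 0 <= j < n:
                    dtheta[i] += np.sin(theta[j] - theta[i] - (j - i) * phase_lag)
        theta += dtheta * dt
        r += a * (R - r) * dt               # amplitudes converge smoothly to R
        joint_angles = r * np.cos(theta)    # setpoints for the module servos

    print("joint-angle setpoints after 10 s [rad]:", np.round(joint_angles, 3))

Because the amplitude states follow first-order dynamics toward their targets, abrupt changes to R or phase_lag still produce smooth joint trajectories, which is the limit-cycle property the abstract highlights.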
