

2015


Probabilistic Interpretation of Linear Solvers

Hennig, P.

SIAM Journal on Optimization, 25(1):234-260, 2015 (article)

ei pn

Web PDF link (url) DOI [BibTex]

Sensory synergy as environmental input integration

Alnajjar, F., Itkonen, M., Berenz, V., Tournier, M., Nagai, C., Shimoda, S.

Frontiers in Neuroscience, 8, pages: 436, 2015 (article)

Abstract
The development of a method to feed proper environmental inputs back to the central nervous system (CNS) remains one of the challenges in achieving natural movement when part of the body is replaced with an artificial device. Muscle synergies are widely accepted as a biologically plausible interpretation of the neural dynamics between the CNS and the muscular system. Yet the sensorineural dynamics of environmental feedback to the CNS has not been investigated in detail. In this study, we address this issue by exploring the concept of sensory synergy. In contrast to muscle synergy, we hypothesize that sensory synergy plays an essential role in integrating the overall environmental inputs to provide low-dimensional information to the CNS. We assume that sensory synergy and muscle synergy communicate using these low-dimensional signals. To examine our hypothesis, we conducted posture control experiments involving lateral disturbance with 9 healthy participants. Proprioceptive information, represented by changes in muscle lengths, was estimated using the musculoskeletal model analysis software SIMM. Changes in muscle lengths were then used to compute sensory synergies. The experimental results indicate that the environmental inputs were translated into two-dimensional signals and used to move the upper limb to the desired position immediately after the lateral disturbance. Participants who showed high skill in posture control were likely to have a strong correlation between sensory and muscle signaling as well as high coordination between the utilized sensory synergies. These results suggest the importance of integrating environmental inputs into suitable low-dimensional signals before providing them to the CNS. This mechanism should be essential when designing the sensory system of a prosthesis to keep the controller simple.
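
A synergy analysis of this kind compresses many correlated sensor channels into a few shared activation signals. Purely as an illustration (the study's own pipeline, based on muscle-length changes estimated with SIMM, is not reproduced, and its decomposition method may differ), such a low-dimensional representation can be sketched with non-negative matrix factorization; the array sensor_data and the choice of two components are assumptions of the example.

# Hedged sketch: extracting a low-dimensional "synergy" representation
# from multi-channel sensor data via non-negative matrix factorization.
# `sensor_data` (samples x channels) is a hypothetical placeholder, not
# data from the study; the study's own preprocessing (SIMM) is omitted.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
sensor_data = np.abs(rng.normal(size=(500, 12)))   # placeholder: 12 sensor channels

model = NMF(n_components=2, init="nndsvda", max_iter=500)
activations = model.fit_transform(sensor_data)     # low-dimensional signals over time
synergies = model.components_                      # weighting of each channel per synergy

# Reconstruction error indicates how much of the input the two synergies capture.
reconstruction = activations @ synergies
error = np.linalg.norm(sensor_data - reconstruction) / np.linalg.norm(sensor_data)
print(f"relative reconstruction error with 2 synergies: {error:.3f}")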

am

link (url) DOI [BibTex]

Active Reward Learning with a Novel Acquisition Function

Daniel, C., Kroemer, O., Viering, M., Metz, J., Peters, J.

Autonomous Robots, 39(3):389-405, 2015 (article)

am ei

link (url) DOI [BibTex]

Learning Movement Primitive Attractor Goals and Sequential Skills from Kinesthetic Demonstrations

Manschitz, S., Kober, J., Gienger, M., Peters, J.

Robotics and Autonomous Systems, 74, Part A, pages: 97-107, 2015 (article)

am ei

link (url) DOI [BibTex]

Bayesian Optimization for Learning Gaits under Uncertainty

Calandra, R., Seyfarth, A., Peters, J., Deisenroth, M.

Annals of Mathematics and Artificial Intelligence, pages: 1-19, 2015 (article)

am ei

DOI [BibTex]

Probabilistic numerics and uncertainty in computations

Hennig, P., Osborne, M. A., Girolami, M.

Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 471(2179), 2015 (article)

Abstract
We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data has led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numerical algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations.
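
As one concrete instance of the programme, Bayesian quadrature treats an integral as a latent quantity to be inferred: a Gaussian-process prior on the integrand yields both an estimate of the integral and a variance quantifying the numerical uncertainty. The equations below are the standard textbook form, given here as an illustration rather than quoted from the paper. For evaluations $y_i = f(x_i)$ of an integrand $f \sim \mathcal{GP}(0, k)$ and target $Z = \int f(x)\,\pi(x)\,\mathrm{d}x$,

$$\mathbb{E}[Z \mid y] = z^\top K^{-1} y, \qquad \mathbb{V}[Z \mid y] = \iint k(x, x')\,\pi(x)\,\pi(x')\,\mathrm{d}x\,\mathrm{d}x' \;-\; z^\top K^{-1} z,$$

with $K_{ij} = k(x_i, x_j)$ and $z_i = \int k(x, x_i)\,\pi(x)\,\mathrm{d}x$. The posterior variance is exactly the kind of uncertainty such methods return alongside their point estimate.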

ei pn

PDF DOI [BibTex]

Model-Based Strategy Selection Learning

Lieder, F., Griffiths, T. L.

The 2nd Multidisciplinary Conference on Reinforcement Learning and Decision Making, 2015 (article)

Abstract
Humans possess a repertoire of decision strategies. This raises the question of how we decide how to decide. Behavioral experiments suggest that the answer includes metacognitive reinforcement learning: rewards reinforce not only our behavior but also the cognitive processes that lead to it. Previous theories of strategy selection, namely SSL and RELACS, assumed that model-free reinforcement learning identifies the cognitive strategy that works best on average across all problems in the environment. Here we explore the alternative: model-based reinforcement learning about how the differential effectiveness of cognitive strategies depends on the features of individual problems. Our theory posits that people learn a predictive model of each strategy's accuracy and execution time and choose strategies according to their predicted speed-accuracy tradeoff for the problem to be solved. We evaluate our theory against previous accounts by fitting published data on multi-attribute decision making, conducting a novel experiment, and demonstrating that our theory can account for people's adaptive flexibility in risky choice. We find that while SSL and RELACS are sufficient to explain people's ability to adapt to a homogeneous environment in which all decision problems are of the same type, model-based strategy selection learning can also explain people's ability to adapt to heterogeneous environments and to flexibly switch to a different decision strategy when the situation changes.
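
The selection rule described above (predict each strategy's accuracy and execution time from problem features, then choose the strategy with the best predicted speed-accuracy trade-off) can be sketched in a few lines. The linear predictors, feature values, strategy names, and time cost are illustrative assumptions, not the paper's fitted model.

# Hedged sketch of model-based strategy selection: choose the strategy whose
# predicted reward (accuracy payoff minus time cost) is highest for the
# current problem's features. All numbers and names here are illustrative.
import numpy as np

def predicted_value(weights_acc, weights_time, features, cost_per_second):
    """Predicted accuracy and time are linear in problem features (an assumption)."""
    accuracy = features @ weights_acc          # predicted probability of a correct choice
    exec_time = features @ weights_time        # predicted execution time in seconds
    return accuracy - cost_per_second * exec_time

features = np.array([1.0, 0.4, 0.7])           # hypothetical problem features (incl. bias)
strategies = {
    "take-the-best": (np.array([0.6, 0.2, 0.1]), np.array([1.0, 0.5, 0.2])),
    "weighted-additive": (np.array([0.7, 0.1, 0.2]), np.array([3.0, 1.0, 0.8])),
}

scores = {name: predicted_value(w_a, w_t, features, cost_per_second=0.05)
          for name, (w_a, w_t) in strategies.items()}
print(max(scores, key=scores.get), scores)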

re

link (url) Project Page [BibTex]

The optimism bias may support rational action

Lieder, F., Goel, S., Kwan, R., Griffiths, T. L.

NIPS 2015 Workshop on Bounded Optimality and Rational Metareasoning, 2015 (article)

re

[BibTex]

Rational use of cognitive resources: Levels of analysis between the computational and the algorithmic

Griffiths, T. L., Lieder, F., Goodman, N. D.

Topics in Cognitive Science, 7(2):217-229, Wiley, 2015 (article)

re

[BibTex]


2013


3-D Object Reconstruction of Symmetric Objects by Fusing Visual and Tactile Sensing

Ilonen, J., Bohg, J., Kyrki, V.

The International Journal of Robotics Research, 33(2):321-341, Sage, October 2013 (article)

Abstract
In this work, we propose to reconstruct a complete 3-D model of an unknown object by fusion of visual and tactile information while the object is grasped. Assuming the object is symmetric, a first hypothesis of its complete 3-D shape is generated. A grasp is executed on the object with a robotic manipulator equipped with tactile sensors. Given the detected contacts between the fingers and the object, the initial full object model including the symmetry parameters can be refined. This refined model will then allow the planning of more complex manipulation tasks. The main contribution of this work is an optimal estimation approach for the fusion of visual and tactile data applying the constraint of object symmetry. The fusion is formulated as a state estimation problem and solved with an iterative extended Kalman filter. The approach is validated experimentally using both artificial and real data from two different robotic platforms.
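
The fusion itself rests on a standard (iterated) extended Kalman filter update of the object-shape state each time a visual or tactile measurement arrives. The snippet below is only that generic update step; the paper's state parameterization (shape plus symmetry parameters) and its measurement models are not reproduced, and all names are placeholders.

# Hedged sketch: generic (iterated) EKF measurement update, the estimation
# machinery the paper builds on. x is the object/shape state, z a visual or
# tactile measurement, h the measurement model, H_jac its Jacobian, R its noise.
import numpy as np

def ekf_update(x, P, z, h, H_jac, R):
    """One EKF measurement update; all models here are caller-supplied placeholders."""
    H = H_jac(x)                                   # Jacobian of h at the current estimate
    S = H @ P @ H.T + R                            # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x_new = x + K @ (z - h(x))                     # corrected state
    P_new = (np.eye(len(x)) - K @ H) @ P           # corrected covariance
    return x_new, P_new

# In an iterated EKF the linearization point is refined by repeating the
# update around x_new until the estimate stops changing appreciably.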

am

Web DOI Project Page [BibTex]

Quasi-Newton Methods: A New Direction

Hennig, P., Kiefel, M.

Journal of Machine Learning Research, 14(1):843-865, March 2013 (article)

Abstract
Four decades after their invention, quasi-Newton methods are still state of the art in unconstrained numerical optimization. Although not usually interpreted thus, these are learning algorithms that fit a local quadratic approximation to the objective function. We show that many, including the most popular, quasi-Newton methods can be interpreted as approximations of Bayesian linear regression under varying prior assumptions. This new notion elucidates some shortcomings of classical algorithms, and lights the way to a novel nonparametric quasi-Newton method, which is able to make more efficient use of available information at computational cost similar to its predecessors.
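
For orientation, the updates given a probabilistic reading here are the classical secant updates. In standard notation (not taken from the paper), with step $s_k = x_{k+1} - x_k$ and gradient difference $y_k = \nabla f(x_{k+1}) - \nabla f(x_k)$, a quasi-Newton method maintains a Hessian estimate $B_k$ obeying the secant condition $B_{k+1} s_k = y_k$; BFGS, for instance, uses

$$B_{k+1} = B_k - \frac{B_k s_k s_k^\top B_k}{s_k^\top B_k s_k} + \frac{y_k y_k^\top}{y_k^\top s_k}.$$

The abstract's claim is that updates of this family coincide with posterior mean estimates under Gaussian (Bayesian linear regression) models of the Hessian with different priors.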

ei ps pn

website+code pdf link (url) [BibTex]

Analytical probabilistic modeling for radiation therapy treatment planning

Bangert, M., Hennig, P., Oelfke, U.

Physics in Medicine and Biology, 58(16):5401-5419, 2013 (article)

ei pn

PDF DOI [BibTex]

Optimal control of reaching includes kinematic constraints

Mistry, M., Theodorou, E., Schaal, S., Kawato, M.

Journal of Neurophysiology, 2013, clmc (article)

Abstract
We investigate adaptation under a reaching task with an acceleration-based force field perturbation designed to alter the nominal straight hand trajectory in a potentially benign manner: pushing the hand off course in one direction before subsequently restoring it towards the target. In this particular task, an explicit strategy to reduce motor effort requires a distinct deviation from the nominal rectilinear hand trajectory. Rather, our results display a clear directional preference during learning, as subjects adapted perturbed curved trajectories towards their initial baselines. We model this behavior using the framework of stochastic optimal control theory and an objective function that trades off the discordant requirements of 1) target accuracy, 2) motor effort, and 3) desired trajectory. Our work addresses the underlying objective of a reaching movement, and we suggest that robustness, particularly against internal model uncertainty, is as essential to the reaching task as terminal accuracy and energy efficiency.
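
The three competing requirements can be written schematically (an illustrative form, not the exact functional from the paper) as

$$J = \mathbb{E}\Big[\, \| x(T) - x^{*} \|^{2}_{Q_T} \;+\; \int_{0}^{T} u(t)^{\top} R\, u(t)\,\mathrm{d}t \;+\; \int_{0}^{T} \| x(t) - x_{d}(t) \|^{2}_{Q}\,\mathrm{d}t \,\Big],$$

with the three terms penalizing terminal target error, motor effort, and deviation from a desired trajectory $x_d$; the relative weights $Q_T$, $R$, $Q$ determine how strongly the model predicts subjects to straighten the perturbed trajectory rather than exploit the force field to save effort.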

am

PDF [BibTex]

Dynamical Movement Primitives: Learning Attractor Models for Motor Behaviors

Ijspeert, A., Nakanishi, J., Pastor, P., Hoffmann, H., Schaal, S.

Neural Computation, 25(2):328-373, 2013, clmc (article)

Abstract
Nonlinear dynamical systems have been used in many disciplines to model complex behaviors, including biological motor control, robotics, perception, economics, traffic prediction, and neuroscience. While often the unexpected emergent behavior of nonlinear systems is the focus of investigations, it is of equal importance to create goal-directed behavior (e.g., stable locomotion from a system of coupled oscillators under perceptual guidance). Modeling goal-directed behavior with nonlinear systems is, however, rather difficult due to the parameter sensitivity of these systems, their complex phase transitions in response to subtle parameter changes, and the difficulty of analyzing and predicting their long-term behavior; intuition and time-consuming parameter tuning play a major role. This letter presents and reviews dynamical movement primitives, a line of research for modeling attractor behaviors of autonomous nonlinear dynamical systems with the help of statistical learning techniques. The essence of our approach is to start with a simple dynamical system, such as a set of linear differential equations, and transform those into a weakly nonlinear system with prescribed attractor dynamics by means of a learnable autonomous forcing term. Both point attractors and limit cycle attractors of almost arbitrary complexity can be generated. We explain the design principle of our approach and evaluate its properties in several example applications in motor control and robotics.
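
In the commonly used point-attractor formulation (notation varies across papers in this line of work), a single degree of freedom $y$ follows a damped spring model augmented with a learnable forcing term driven by a phase variable $x$:

$$\tau \dot{z} = \alpha_z\big(\beta_z (g - y) - z\big) + f(x), \qquad \tau \dot{y} = z, \qquad \tau \dot{x} = -\alpha_x x,$$

$$f(x) = \frac{\sum_i \psi_i(x)\, w_i}{\sum_i \psi_i(x)}\; x\,(g - y_0),$$

where $g$ is the goal, $\psi_i$ are Gaussian basis functions, and the weights $w_i$ are fitted to a demonstration (e.g., by locally weighted regression). Because the forcing term vanishes as $x \to 0$, convergence to the point attractor $g$ is guaranteed; a rhythmic canonical system yields limit-cycle attractors instead.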

am

link (url) [BibTex]

Modelling trial-by-trial changes in the mismatch negativity

Lieder, F., Daunizeau, J., Garrido, M. I., Friston, K. J., Stephan, K. E.

PLoS Computational Biology, 9(2):e1002911, Public Library of Science, 2013 (article)

re

[BibTex]

A neurocomputational model of the mismatch negativity

Lieder, F., Stephan, K. E., Daunizeau, J., Garrido, M. I., Friston, K. J.

PLoS Computational Biology, 9(11):e1003288, Public Library of Science, 2013 (article)

re

[BibTex]

Optimal distribution of contact forces with inverse-dynamics control

Righetti, L., Buchli, J., Mistry, M., Kalakrishnan, M., Schaal, S.

The International Journal of Robotics Research, 32(3):280-298, March 2013 (article)

Abstract
The development of legged robots for complex environments requires controllers that guarantee both high tracking performance and compliance with the environment. More specifically, the control of the contact interaction with the environment is of crucial importance to ensure stable, robust and safe motions. In this contribution we develop an inverse-dynamics controller for floating-base robots under contact constraints that can minimize any combination of linear and quadratic costs in the contact constraints and the commands. Our main result is the exact analytical derivation of the controller. Such a result is particularly relevant for legged robots as it allows us to use torque redundancy to directly optimize contact interactions. For example, given a desired locomotion behavior, we can guarantee the minimization of contact forces to reduce slipping on difficult terrains while ensuring high tracking performance of the desired motion. The main advantages of the controller are its simplicity, computational efficiency and robustness to model inaccuracies. We present detailed experimental results on simulated humanoid and quadruped robots as well as a real quadruped robot. The experiments demonstrate that the controller can greatly improve the robustness of locomotion of the robots.
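
Schematically, in standard floating-base notation (not copied from the paper), the controller works with the constrained dynamics

$$M(q)\,\ddot{q} + h(q, \dot{q}) = S^{\top} \tau + J_c^{\top} \lambda, \qquad J_c\, \ddot{q} + \dot{J}_c\, \dot{q} = 0,$$

where $\tau$ are the actuated joint torques, $S$ selects the actuated joints, and $\lambda$ are the contact forces. For a desired $\ddot{q}$ the system is torque-redundant, and the remaining freedom can be used to minimize a cost of the form $\tau^{\top} W_\tau \tau + \lambda^{\top} W_\lambda \lambda$ subject to these constraints; the contribution described in the abstract is an exact analytical solution of this minimization rather than a numerically solved program.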

am mg

link (url) DOI [BibTex]


2011


Learning, planning, and control for quadruped locomotion over challenging terrain

Kalakrishnan, M., Buchli, J., Pastor, P., Mistry, M., Schaal, S.

International Journal of Robotics Research, 30(2):236-258, February 2011 (article)

am

[BibTex]

Bayesian robot system identification with input and output noise

Ting, J., D’Souza, A., Schaal, S.

Neural Networks, 24(1):99-108, 2011, clmc (article)

Abstract
For complex robots such as humanoids, model-based control is highly beneficial for accurate tracking while keeping negative feedback gains low for compliance. However, in such multi degree-of-freedom lightweight systems, conventional identification of rigid body dynamics models using CAD data and actuator models is inaccurate due to unknown nonlinear robot dynamic effects. An alternative method is data-driven parameter estimation, but significant noise in measured and inferred variables affects it adversely. Moreover, standard estimation procedures may give physically inconsistent results due to unmodeled nonlinearities or insufficiently rich data. This paper addresses these problems, proposing a Bayesian system identification technique for linear or piecewise linear systems. Inspired by Factor Analysis regression, we develop a computationally efficient variational Bayesian regression algorithm that is robust to ill-conditioned data, automatically detects relevant features, and identifies input and output noise. We evaluate our approach on rigid body parameter estimation for various robotic systems, achieving errors up to three times lower than those of other state-of-the-art machine learning methods.
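
The structure being exploited is that rigid-body dynamics are linear in the inertial parameters, so identification is a regression problem, but one with noise in both the regressors and the targets. Schematically (illustrative notation, not the paper's):

$$\tau = \Phi(q, \dot{q}, \ddot{q})\,\theta, \qquad \hat{\tau} = \tau + \epsilon_\tau, \qquad \hat{\Phi} = \Phi + \epsilon_\Phi,$$

where $\theta$ stacks masses, first moments, and inertias, $\Phi$ is the kinematics-dependent regressor matrix, and only the noisy $\hat{\tau}$ and $\hat{\Phi}$ are observed. Modeling both noise terms explicitly, as the variational Bayesian regression described above does, is what separates the approach from ordinary least squares, which assumes noise-free regressors.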

am

link (url) [BibTex]

Learning variable impedance control

Buchli, J., Stulp, F., Theodorou, E., Schaal, S.

International Journal of Robotics Research, 2011, clmc (article)

Abstract
One of the hallmarks of the performance, versatility, and robustness of biological motor control is the ability to adapt the impedance of the overall biomechanical system to different task requirements and stochastic disturbances. A transfer of this principle to robotics is desirable, for instance to enable robots to work robustly and safely in everyday human environments. It is, however, not trivial to derive variable impedance controllers for practical high degree-of-freedom (DOF) robotic tasks. In this contribution, we accomplish such variable impedance control with the reinforcement learning (RL) algorithm PI² (Policy Improvement with Path Integrals). PI² is a model-free, sampling-based learning method derived from first principles of stochastic optimal control. The PI² algorithm requires no tuning of algorithmic parameters besides the exploration noise. The designer can thus fully focus on cost function design to specify the task. From the viewpoint of robotics, a particularly useful property of PI² is that it can scale to problems of many DOFs, so that reinforcement learning on real robotic systems becomes feasible. We sketch the PI² algorithm and its theoretical properties, and how it is applied to gain scheduling for variable impedance control. We evaluate our approach by presenting results on several simulated and real robots. We consider tasks involving accurate tracking through via-points, and manipulation tasks requiring physical contact with the environment. In these tasks, the optimal strategy requires both tuning of a reference trajectory and the impedance of the end-effector. The results show that we can use path integral based reinforcement learning not only for planning but also to derive variable gain feedback controllers in realistic scenarios. Thus, the power of variable impedance control is made available to a wide variety of robotic systems and practical applications.
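
The core of PI² is a probability-weighted average of explored parameter perturbations, with weights given by a soft-max over rollout costs. The sketch below shows only that update rule under assumed array names; the full algorithm's per-time-step weighting and its application to impedance gain scheduling are omitted.

# Hedged sketch of the core PI^2 parameter update: perturbations that led to
# low-cost rollouts get exponentially more weight. Array names and sizes are
# illustrative; the complete algorithm also weights per time step.
import numpy as np

def pi2_update(theta, epsilons, costs, lam=1.0):
    """theta: (d,) policy parameters; epsilons: (K, d) explored perturbations;
    costs: (K,) rollout costs S(tau_k); lam: temperature of the soft-max."""
    costs = np.asarray(costs, dtype=float)
    # Normalize costs for numerical stability before exponentiation.
    s = (costs - costs.min()) / max(costs.max() - costs.min(), 1e-12)
    weights = np.exp(-s / lam)
    weights /= weights.sum()                    # P(tau_k) in the PI^2 update
    delta_theta = weights @ np.asarray(epsilons)
    return theta + delta_theta

# Example with random placeholder rollouts:
rng = np.random.default_rng(1)
theta = np.zeros(5)
eps = rng.normal(scale=0.1, size=(10, 5))
costs = rng.uniform(size=10)
theta = pi2_update(theta, eps, costs)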

am

link (url) [BibTex]

Understanding haptics by evolving mechatronic systems

Loeb, G. E., Tsianos, G.A., Fishel, J.A., Wettels, N., Schaal, S.

Progress in Brain Research, 192, pages: 129, 2011 (article)

am

[BibTex]

Intelligent Mobility—Autonomous Outdoor Robotics at the DFKI

Joyeux, S., Schwendner, J., Kirchner, F., Babu, A., Grimminger, F., Machowinski, J., Paranhos, P., Gaudig, C.

KI, 25(2):133-139, May 2011 (article)

am

DOI [BibTex]


2006


Die Effektivität von schriftlichen und graphischen Warnhinweisen auf Zigarettenschachteln

Petersen, L., Lieder, F.

Zeitschrift für Sozialpsychologie, 37(4):245-258, Verlag Hans Huber, 2006 (article)

Abstract
In the present study, the effectiveness of fear-inducing warning labels was analyzed among adolescent smokers. 336 smokers (mean age: 15 years) were presented with written or graphic warning labels on cigarette packs (experimental conditions; n = 96, n = 119) or received no warning labels (control condition; n = 94). Subsequently, the factors of the revised protection motivation model (Arthur & Quester, 2004) were assessed. The results support the hypothesis that the factors "severity of harm" and "probability of harm" influence the behavioral likelihood of smoking fewer or lighter cigarettes, mediated by the factor "fear". The behavioral likelihood was, however, not affected by the three experimental conditions. Moreover, the factors "response efficacy" and "self-efficacy" could not be confirmed as moderators of the relationship between fear and behavioral likelihood.

re

DOI [BibTex]
