Research Overview
The Autonomous Motion Department focuses on research in intelligent systems that can move, perceive, and learn from experience. We are interested in understanding how autonomous movement systems can bootstrap themselves into competent behavior: starting from a relatively simple set of algorithms and pre-structuring, they learn by interacting with the environment. Instructions from a teacher can provide useful prior information to get started, and trial-and-error learning to improve movement and perceptual skills is another domain of our research. We investigate such perception-action-learning loops in both biological and robotic systems, which can range in scale from nano systems (cells, nano-robots) to macro systems (humans and humanoid robots).
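As a toy illustration of such a loop (not the department's actual methods), the following minimal Python sketch shows a system improving a parameterized movement by trial and error: it executes the movement, perceives the outcome as a scalar cost, and keeps only the perturbations that improve it. The movement parameters, the cost function, and the update rule are all hypothetical assumptions made for illustration.

    # Minimal sketch of a perception-action-learning loop; the parameters,
    # cost function, and update rule are illustrative assumptions only.
    import random

    def execute_and_perceive(params):
        # Hypothetical stand-in for acting in the world and perceiving the
        # outcome: the cost is the squared distance to an unknown target pose.
        target = [0.5, -0.2, 0.8]
        return sum((p - t) ** 2 for p, t in zip(params, target))

    def trial_and_error(params, iterations=200, noise=0.1):
        best_cost = execute_and_perceive(params)
        for _ in range(iterations):
            # Act: perturb the current movement parameters and try them out.
            candidate = [p + random.gauss(0.0, noise) for p in params]
            cost = execute_and_perceive(candidate)
            # Learn: keep the perturbation only if the perceived cost improves.
            if cost < best_cost:
                params, best_cost = candidate, cost
        return params, best_cost

    params, cost = trial_and_error([0.0, 0.0, 0.0])
    print(f"learned parameters: {params}, final cost: {cost:.4f}")

In actual research, the perturb-and-keep step would be replaced by principled learning methods such as reinforcement learning, and the synthetic cost by real sensory feedback.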
One part of our research is concerned with learning in neural networks, statistical learning, and machine learning, since the ability to learn and self-organize appears to be among the most important prerequisites of autonomous systems.
Another part of the research program focuses on how movement can be generated, in particular in human-like systems with bodies, limbs, and eyes. This research touches on control theory, nonlinear control, nonlinear dynamics, optimization theory, and reinforcement learning.
In a third research branch, we investigate perception, in particular 3D perception with visual, tactile, and acoustic senses. A special emphasis lies on understanding active perception, i.e., how action and perception can assist each other to achieve better performance and robustness.
A fourth component of our work concerns human performance: we measure subjects' movements in specially designed behavioral tasks, and we measure their brain activity with neuroimaging techniques. This research connects closely to work in computational neuroscience for motor control, and it includes abstract functional models of how brains may organize sensorimotor coordination.
Finally, a large part of the research in the lab emphasizes studies with actual humanoid and biologically inspired robots. With this work, we are first interested in testing our learning and control theories on real physical systems in order to evaluate the robustness of our results. A further challenge arises from scaling our methods to complex robots: our most advanced robot requires nonlinear control of over 50 physical degrees of freedom that must be coordinated with visual, tactile, and acoustic perception. When attempting to synthesize behavior with such a machine, the shortcomings of state-of-the-art learning and control theories become apparent and can be addressed in subsequent research. We also use humanoid robots for direct comparisons in behavioral experiments in which the robot is treated like a regular human subject.
Please see below for more information on the current research areas and projects at the Autonomous Motion Department:

Autonomous Robotic Manipulation
Modeling Top-Down Saliency for Visual Object Search
Interactive Perception
State Estimation and Sensor Fusion for the Control of Legged Robots
Probabilistic Object and Manipulator Tracking
Global Object Shape Reconstruction by Fusing Visual and Tactile Data
Robot Arm Pose Estimation as a Learning Problem
Learning to Grasp from Big Data
Gaussian Filtering as Variational Inference
Template-Based Learning of Model Free Grasping
Associative Skill Memories
Real-Time Perception meets Reactive Motion Generation
Learning Coupling Terms of Movement Primitives
Inverse Optimal Control
Motion Optimization
Optimal Control for Legged Robots
Movement Representation for Reactive Behavior