Institute Talks

Haptic Engineering and Science at Multiple Scales

  • 20 June 2018 • 11:00–12:00
  • Yon Visell, PhD
  • MPI-IS Stuttgart, Heisenbergstr. 3, Room 2P4

I will describe recent research in my lab on haptics and robotics. It has been a longstanding challenge to realize engineered systems that can match the amazing perceptual and motor feats of biological systems for touch, including the human hand. Some of the difficulties of meeting this objective can be traced to our limited understanding of the mechanics, to the high dimensionality of the signals, and to the multiple length and time scales (physical regimes) involved. An additional source of richness and complication arises from the sensitive dependence of what we feel on what we do, i.e., on the tight coupling between touch-elicited mechanical signals, object contacts, and actions. I will describe how our research has aimed at addressing these challenges, and will explain how the results are guiding the development of new technologies for haptics, wearable computing, and robotics.

Organizers: Katherine Kuchenbecker

Imitation of Human Motion Planning

  • 29 June 2018 • 12:00–12:45
  • Jim Mainprice
  • N3.022 (Aquarium)

Humans act upon their environment through motion; the ability to plan their movements is therefore an essential component of their autonomy. In recent decades, motion planning has been widely studied in robotics and computer graphics. Nevertheless, robots still fail to achieve human reactivity and coordination. The need for more efficient motion planning algorithms has been present throughout my own research on "human-aware" motion planning, which aims to take the surrounding humans explicitly into account. I believe imitation learning is the key to this particular problem, as it allows learning both new motion skills and predictive models, two capabilities that are at the heart of "human-aware" robots, while simultaneously holding the promise of faster and more reactive motion generation. In this talk I will present my work in this direction.

Learning Control for Intelligent Physical Systems

  • 13 July 2018 • 14:15–14:45
  • Dr. Sebastian Trimpe
  • MPI-IS, Stuttgart, Lecture Hall 2 D5

Modern technology allows us to collect, process, and share more data than ever before. This data revolution opens up new ways to design control and learning algorithms, which will form the algorithmic foundation for future intelligent systems that shall act autonomously in the physical world. Starting from a discussion of the special challenges of combining machine learning and control, I will present some of our recent research in this exciting area. Using the example of the Apollo robot learning to balance a stick in its hand, I will explain how intelligent agents can learn new behavior from just a few experimental trials. I will also discuss the need for theoretical guarantees in learning-based control, and how we can obtain them by combining learning and control theory.

Organizers: Katherine Kuchenbecker, Ildikó Papp-Wiedmann, Matthias Tröndle, Claudia Daefler

Household Assistants: the Path from the Care-o-bot Vision to First Products

  • 13 July 2018 • 14:45–15:15
  • Dr. Martin Hägele
  • MPI-IS, Stuttgart, Lecture Hall 2 D5

In 1995 Fraunhofer IPA embarked on a mission towards designing a personal robot assistant for everyday tasks. In the following years Care-O-bot developed into a long-term experiment for exploring and demonstrating new robot technologies and future product visions. The recent fourth generation of the Care-O-bot, introduced in 2014, aimed at an integrated system that addressed a number of innovations, such as modularity, “low cost” through the use of new manufacturing processes, and advanced human-user interaction. Some 15 systems were built, and the intellectual property (IP) generated by over 20 years of research was recently licensed to a start-up. The presentation will review the path from an experimental platform for building up expertise in various robotic disciplines to recent pilot applications based on the now-commercial Care-O-bot hardware.

Organizers: Katherine Kuchenbecker, Ildikó Papp-Wiedmann, Matthias Tröndle, Claudia Daefler

The Critical Role of Atoms at Surfaces and Interfaces: Do we really have control? Can we?

  • 13 July 2018 • 15:45–16:15
  • Prof. Dr. Dawn Bonnell
  • MPI-IS, Stuttgart, Lecture Hall 2 D5

With the ubiquity of catalyzed reactions in manufacturing, the emergence of the device-laden Internet of Things, and global challenges with respect to water and energy, it has never been more important to understand atomic interactions in the functional materials that can provide solutions in these spaces.

Organizers: Katherine Kuchenbecker, Ildikó Papp-Wiedmann, Matthias Tröndle, Claudia Daefler

Interactive Visualization – A Key Discipline for Big Data Analysis

  • 13 July 2018 • 15:00–15:30
  • Prof. Dr. Thomas Ertl
  • MPI-IS, Stuttgart, Lecture Hall 2 D5

Big Data has become the general term for the benefits and threats that result from the huge amount of data collected in all parts of society. While data acquisition, storage, and access are relevant technical aspects, the analysis of the collected data turns out to be at the core of the Big Data challenge. Automatic data mining and information retrieval techniques have made much progress, but many application scenarios remain in which the human in the loop plays an essential role. Consequently, interactive visualization techniques have become a key discipline of Big Data analysis, and the field is reaching out to many new application domains. This talk will give examples from current visualization research projects at the University of Stuttgart, demonstrating the thematic breadth of application scenarios and the technical depth of the employed methods. We will cover advances in scientific visualization of fields and particles, visual analytics of document collections and movement patterns, as well as cognitive aspects.

Organizers: Katherine Kuchenbecker, Ildikó Papp-Wiedmann, Matthias Tröndle, Claudia Daefler

Reconstructing and Perceiving Humans in Motion

  • 30 November 2017 • 15:00
  • Dr. Gerard Pons-Moll

For man-machine interaction it is crucial to develop models of humans that look and move indistinguishably from real humans. Such virtual humans will be key for application areas such as computer vision, medicine and psychology, virtual and augmented reality, and special effects in movies. Currently, digital models typically lack realistic soft tissue and clothing, or require time-consuming manual editing of physical simulation parameters. Our hypothesis is that better and more realistic models of humans and clothing can be learned directly from real measurements coming from 4D scans, images, and depth and inertial sensors. We combine statistical machine learning techniques and physics-based simulation to create realistic models from data. We then use such models to extract information out of incomplete and noisy sensor data from monocular video, depth, or IMUs. I will give an overview of a selection of projects conducted in Perceiving Systems in which we build realistic models of human pose, shape, soft tissue, and clothing. I will also present some of our recent work on 3D reconstruction of people models from monocular video, real-time fusion and online human body shape estimation from depth data, and recovery of human pose in the wild from video and IMUs. I will conclude the talk by outlining the next challenges in building digital humans and perceiving them from sensory data.

Organizers: Melanie Feldhofer

  • Professor Brent Gillespie
  • MPI-IS Stuttgart, Heisenbergstr. 3, Werner-Köster-Hörsaal 2R 4 and broadcast

Relative to most robots and other machines, the human body is soft, its actuators compliant, and its control quite forgiving. But having a body that bends under load seems like a bad set-up for motor dexterity: the brain is faced with controlling more rather than fewer degrees of freedom. Undeniably, though, the soft body approach leads to superior solutions. Robots are putzes by comparison! While de-putzifying robots (perhaps by making them softer) is an endeavor I will discuss to some degree, in this talk I will focus on the design of robots intended to work cooperatively with humans, using physical interaction and haptic feedback in the axis of control. I will propose a backdrivable robot with forgiving control as a teammate for humans, with the aim of meeting pressing needs in rehabilitation robotics and semi-autonomous driving. In short, my lab is working to create alternatives to the domineering robot who wants complete control. Giving up complete control leads to “slacking” and loss of therapeutic benefit in rehabilitation and loss of vigilance and potential for disaster in driving. Cooperative or shared control is premised on the idea that two heads, especially two heads with complementary capabilities, are better than one. But the two heads must agree on a goal and a motor plan. How can one agent read the motor intent of another using only physical interaction signals? A few old-school control principles from biology and engineering to the rescue! One key is provided by von Holst and Mittelstaedt’s famous Reafference Principle, published in 1950 to describe how a hierarchically organized neural control system distinguishes what they called reafference from exafference—roughly: expected from unexpected. A second key is provided by Francis and Wonham’s Internal Model Principle, published in 1976 and considered an enabler for the disk drive industry.
If we extend the Reafference Principle with model-based control and use the Internal Model Principle to treat predictable exogenous (exafferent) signals, then we arrive at a theory that I will argue puts us into position to extract motor intent and thereby enable effective control sharing between humans and robots. To support my arguments I will present results from a series of experiments in which we asked human participants to move expected and unexpected loads, to track predictable and unpredictable reference signals, to exercise with self-assist and other-assist, and to share control over a simulated car with an automation system.

Organizers: Katherine Kuchenbecker

  • Christoph Mayer
  • S2 Seminar Room (S 2.014)

Variational image processing translates image processing tasks into optimisation problems. The practical success of this approach depends on the type of optimisation problem and on the properties of the ensuing algorithm. A recent breakthrough was to realise that old first-order optimisation algorithms based on operator splitting are particularly suited for modern data analysis problems. Operator splitting techniques decouple complex optimisation problems into many smaller and simpler sub-problems. In this talk I will revisit the variational segmentation problem and a common family of algorithms to solve such optimisation problems. I will show that operator splitting leads to a divide-and-conquer strategy that makes it possible to derive simple and massively parallel updates suitable for GPU implementations. The technique decouples the likelihood from the prior term and allows the use of a data-driven model estimating the likelihood from data, using for example deep learning. Using a different decoupling strategy together with general consensus optimisation leads to fully distributed algorithms especially suitable for large-scale segmentation problems. Motivating applications are 3D yeast-cell reconstruction and segmentation of histology data.
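As a toy illustration of the operator-splitting idea (my own sketch, not the speaker's segmentation pipeline), the following ADMM example splits a small denoising objective into a quadratic data sub-problem and a separable prior sub-problem, each with a closed-form, trivially parallel update:

```python
import numpy as np

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_denoise(b, lam, rho=1.0, iters=200):
    """Solve min_x 0.5*||x - b||^2 + lam*||x||_1 via the splitting x = z."""
    x = np.zeros_like(b)
    z = np.zeros_like(b)
    u = np.zeros_like(b)  # scaled dual variable
    for _ in range(iters):
        x = (b + rho * (z - u)) / (1.0 + rho)  # data (likelihood) sub-problem
        z = soft_threshold(x + u, lam / rho)   # prior sub-problem, element-wise
        u = u + x - z                          # dual update
    return z

# the exact minimiser is the soft-threshold of b, i.e. close to [2.0, 0.0, 0.2]
result = admm_denoise(np.array([3.0, -0.5, 1.2]), lam=1.0)
```

Each sub-problem touches only one term of the objective, which is what makes the simple, massively parallel updates described above possible in the segmentation setting.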

Organizers: Benjamin Coors

Learning Complex Robot-Environment Interactions

  • 26 October 2017 • 11:00–12:15
  • Jens Kober
  • AMD meeting room

The acquisition and self-improvement of novel motor skills is among the most important problems in robotics. Reinforcement learning and imitation learning are two different but complementary machine learning approaches commonly used for learning motor skills.

Organizers: Dieter Büchler

Modern Optimization for Structured Machine Learning

IS Colloquium
  • 23 October 2017 • 11:15–12:15
  • Simon Lacoste-Julien
  • IS Lecture Hall

Machine learning has become a popular application domain for modern optimization techniques, pushing its algorithmic frontier. The need for large-scale optimization algorithms that can handle millions of dimensions or data points, typical for the big data era, has brought a resurgence of interest in first-order algorithms, making us revisit the venerable stochastic gradient method [Robbins-Monro 1951] as well as the Frank-Wolfe algorithm [Frank-Wolfe 1956]. In this talk, I will review recent improvements on these algorithms which can exploit the structure of modern machine learning approaches. I will explain why the Frank-Wolfe algorithm has become so popular lately, and present a surprising tweak on the stochastic gradient method which yields a fast linear convergence rate. Motivating applications will include weakly supervised video analysis and structured prediction problems.
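As a minimal, self-contained illustration of why the Frank-Wolfe algorithm is attractive (my own sketch, unrelated to the speaker's work): over the probability simplex, its linear minimization oracle reduces to picking a single coordinate, so the iterates remain sparse convex combinations of vertices and the method is projection-free:

```python
import numpy as np

def frank_wolfe_simplex(grad_f, x0, iters=2000):
    """Frank-Wolfe over the probability simplex with the standard 2/(t+2) step."""
    x = x0.copy()
    for t in range(iters):
        g = grad_f(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0            # LMO: best vertex of the simplex
        gamma = 2.0 / (t + 2.0)
        x = (1 - gamma) * x + gamma * s  # convex combination stays feasible
    return x

# toy objective f(x) = 0.5*||x - p||^2, whose minimum over the simplex is p itself
p = np.array([0.2, 0.5, 0.3])
x = frank_wolfe_simplex(lambda x: x - p, np.array([1.0, 0.0, 0.0]))
```

The iterate drifts toward p while always remaining a valid probability vector, without ever computing a projection onto the simplex.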

Organizers: Philipp Hennig

  • Arunkumar Byravan
  • AMD meeting room

The ability to predict how an environment changes based on forces applied to it is fundamental for a robot to achieve specific goals. Traditionally in robotics, this problem is addressed through the use of pre-specified models or physics simulators, taking advantage of prior knowledge of the problem structure. While these models are general and have broad applicability, they depend on accurate estimation of model parameters such as object shape, mass, friction, etc. On the other hand, learning-based methods such as Predictive State Representations or more recent deep learning approaches have looked at learning these models directly from raw perceptual information in a model-free manner. These methods operate on raw data without any intermediate parameter estimation, but lack the structure and generality of model-based techniques. In this talk, I will present some work that tries to bridge the gap between these two paradigms by proposing a specific class of deep visual dynamics models (SE3-Nets) that explicitly encode strong physical and 3D geometric priors (specifically, rigid body dynamics) in their structure. As opposed to traditional deep models that reason about dynamics/motion at a pixel level, we show that the physical priors implicit in our network architectures enable them to reason about dynamics at the object level: our network learns to identify objects in the scene and to predict rigid body rotation and translation per object. I will present results on applying our deep architectures to two specific problems: 1) modeling scene dynamics, where the task is to predict future depth observations given the current observation and an applied action, and 2) real-time visuomotor control of a Baxter manipulator based only on raw depth data.
We show that: 1) Our proposed architectures significantly outperform baseline deep models on dynamics modelling and 2) Our architectures perform comparably or better than baseline models for visuomotor control while operating at camera rates (30Hz) and relying on far less information.
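To make the object-level prediction concrete, here is a hypothetical sketch of the final blending step such a model performs; the function name, array shapes, and the soft-mask convention are my assumptions for illustration, not the actual SE3-Nets code:

```python
import numpy as np

def apply_se3(points, masks, rotations, translations):
    """Blend per-object rigid motions into a predicted next point cloud.

    points:       (N, 3) input point cloud
    masks:        (K, N) soft object-assignment weights, one row per object
    rotations:    (K, 3, 3) per-object rotation matrices
    translations: (K, 3) per-object translation vectors
    """
    out = np.zeros_like(points)
    for k in range(len(masks)):
        moved = points @ rotations[k].T + translations[k]  # rigid motion k
        out += masks[k][:, None] * moved                   # soft assignment
    return out

# two points, two objects: object 0 shifts by +1 in x, object 1 by +2 in y
points = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
masks = np.array([[1.0, 0.0], [0.0, 1.0]])
rots = np.stack([np.eye(3), np.eye(3)])
trans = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
pred = apply_se3(points, masks, rots, trans)  # [[1, 0, 0], [1, 2, 0]]
```

Because motion is parameterised per object rather than per pixel, the network only has to predict K rotations, K translations, and a segmentation, rather than an unconstrained flow field.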

Organizers: Franzi Meier

3D Lidar Mapping: An Accurate and Performant Approach

  • 20 October 2017 • 11:30–12:30
  • Michiel Vlaminck
  • PS Seminar Room (N3.022)

In my talk I will present my work regarding 3D mapping using lidar scanners. I will give an overview of the SLAM problem and its main challenges: robustness, accuracy and processing speed. Regarding robustness and accuracy, we investigate a better point cloud representation based on resampling and surface reconstruction. Moreover, we demonstrate how it can be incorporated in an ICP-based scan matching technique. Finally, we elaborate on globally consistent mapping using loop closures. Regarding processing speed, we propose the integration of our scan matching in a multi-resolution scheme and a GPU-accelerated implementation using our programming language Quasar.
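As background for the ICP-based scan matching mentioned above, here is a minimal point-to-point ICP sketch (illustrative only; a real lidar pipeline would use k-d trees, outlier rejection, and the multi-resolution GPU scheme the talk describes):

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Kabsch: least-squares R, t such that R @ src_i + t matches dst_i."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)           # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Point-to-point ICP: alternate nearest-neighbour matching and Kabsch."""
    cur = src.copy()
    for _ in range(iters):
        nn = ((cur[:, None] - dst[None]) ** 2).sum(-1).argmin(1)  # brute force
        R, t = best_rigid_transform(cur, dst[nn])
        cur = cur @ R.T + t
    return cur
```

Given two scans related by a modest rigid motion, `icp(src, dst)` returns `src` aligned onto `dst`; convergence depends strongly on initialization and noise, which is exactly the robustness issue the talk addresses with better point cloud representations.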

Organizers: Simon Donne

Machine Ethics

  • 20 October 2017 • 11:00–12:00
  • Michael and Susan Leigh Anderson
  • AMD Seminar Room

We argue that ethically significant behavior of autonomous systems should be guided by explicit ethical principles determined through a consensus of ethicists. Such a consensus is likely to emerge in many areas in which autonomous systems are apt to be deployed and for the actions they are liable to undertake, as we are more likely to agree on how machines ought to treat us than on how human beings ought to treat one another. Given such a consensus, particular cases of ethical dilemmas where ethicists agree on the ethically relevant features and the right course of action can be used to help discover principles needed for ethical guidance of the behavior of autonomous systems. Such principles help ensure the ethical behavior of complex and dynamic systems and further serve as a basis for justification of their actions as well as a control abstraction for managing unanticipated behavior.

Organizers: Vincent Berenz

  • Slobodan Ilic and Mira Slavcheva
  • PS Seminar Room (N3.022)

In this talk we will address the problem of 3D reconstruction of rigid and deformable objects from a single depth video stream. Traditional 3D registration techniques, such as ICP and its variants, are widespread and effective, but sensitive to initialization and noise due to the underlying correspondence estimation procedure. Therefore, we have developed SDF-2-SDF, a dense, correspondence-free method which aligns a pair of implicit representations of scene geometry, e.g. signed distance fields, by minimizing their direct voxel-wise difference. In its rigid variant, we apply it for static object reconstruction via real-time frame-to-frame camera tracking and posterior multiview pose optimization, achieving higher accuracy and a wider convergence basin than ICP variants. Its extension to scene reconstruction, SDF-TAR, carries out the implicit-to-implicit registration over several limited-extent volumes anchored in the scene and runs simultaneous GPU tracking and CPU refinement, with a lower memory footprint than other SLAM systems. Finally, to handle non-rigidly moving objects, we incorporate the SDF-2-SDF energy in a variational framework, regularized by a damped approximately Killing vector field. The resulting system, KillingFusion, is able to reconstruct objects undergoing topological changes and fast inter-frame motion in near-real time.
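The core of SDF-2-SDF, as described above, is a direct voxel-wise difference between two signed distance fields, with no correspondence search. Here is a simplified sketch of that energy (my own illustration using analytic sphere SDFs and a brute-force 1D pose search, rather than the method's gradient-based optimization):

```python
import numpy as np

def sdf_sphere(grid, center, radius):
    """Signed distance to a sphere sampled on a voxel grid (negative inside)."""
    return np.linalg.norm(grid - center, axis=-1) - radius

def sdf2sdf_energy(phi_ref, phi_cur):
    """Direct voxel-wise SDF difference: no correspondences needed."""
    return ((phi_ref - phi_cur) ** 2).sum()

# sample a 21^3 voxel grid on [-1, 1]^3
ax = np.linspace(-1.0, 1.0, 21)
grid = np.stack(np.meshgrid(ax, ax, ax, indexing="ij"), axis=-1)
phi_ref = sdf_sphere(grid, np.array([0.1, 0.0, 0.0]), 0.4)

# the energy is minimised at the pose that reproduces the reference SDF
offsets = np.linspace(-0.3, 0.3, 13)
energies = [sdf2sdf_energy(phi_ref, sdf_sphere(grid, np.array([o, 0.0, 0.0]), 0.4))
            for o in offsets]
best = offsets[int(np.argmin(energies))]  # recovers the true x-offset of 0.1
```

Because the energy compares field values at fixed voxels, it sidesteps the point-correspondence step that makes ICP sensitive to initialization and noise.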

Organizers: Fatma Güney

  • Dominik Bach

Under acute threat, biological agents need to choose adaptive actions to survive. In my talk, I will provide a decision-theoretic view on this problem and ask what potential computational algorithms exist for this choice, and how they are implemented in neural circuits. Rational design principles and non-human animal data tentatively suggest a specific architecture that heavily relies on tailored algorithms for specific threat scenarios. Virtual reality computer games provide an opportunity to translate non-human animal tasks to humans and investigate these algorithms across species. I will discuss the specific challenges for empirical inference on underlying neural circuits given such an architecture.

Organizers: Michel Besserve