Institute Talks

"Exploring" Haptics: Human-Machine Interactive Applications from Mid-Air Laser Haptics to Sensorimotor Skill Learning

Talk
  • 25 February 2019 • 10:30–11:15
  • Hojin Lee
  • MPI-IS Stuttgart, Heisenbergstr. 3, Room 2P4

Haptic technologies, in both their kinesthetic and tactile aspects, offer brand-new opportunities for human-machine interactive applications. Believing that one of the essential roles of a researcher is to pioneer new insights and knowledge, I will present my previous research on haptic technologies and human-machine interactive applications in two branches: laser-based mid-air haptics and sensorimotor skill learning. For the former branch, I will introduce our approach, named indirect laser radiation, and its applications. Indirect laser radiation utilizes a laser and a light-absorbing elastic medium to evoke a tapping-like tactile sensation. For the latter, I will introduce our data-driven approach for both modeling and learning sensorimotor skills (especially driving) with kinesthetic assistance and artificial neural networks, which I call human-like haptic assistance. To unify these two branches of my earlier studies and explore the feasibility of the sensory channel of touch, I will present a general research paradigm for human-machine interactive applications at which current haptic technologies can aim in the future.

Organizers: Katherine J. Kuchenbecker

Virtual Reality Based Needle Insertion Simulation With Haptic Feedback: A Psychophysical Study

Talk
  • 25 February 2019 • 11:15–12:00
  • Ravali Gourishetti
  • MPI-IS Stuttgart, Heisenbergstr. 3, Room 2P4

Needle insertion is the most essential skill in medical care; training has to be imparted not only to physicians but also to nurses and paramedics. In most needle insertion procedures, haptic feedback from the needle is the main stimulus in which novices are to be trained. For better patient safety, the classical methods of training these haptic skills have to be replaced with simulators based on new robotic and graphics technologies. The main objective of this work is to develop analytical models of needle insertion (a special case of epidural anesthesia), including the biomechanical and psychophysical concepts, that simulate the needle-tissue interaction forces in linear heterogeneous tissues, and to validate the models with a series of experiments. The biomechanical and perception models were validated with experiments in two stages: with and without human intervention. The second stage was validation using a Turing test with two different experiments: 1) observing the perceptual difference between the simulated and the physical phantom model, and 2) verifying the effectiveness of the perceptual filter by comparing the unfiltered and filtered model responses. The results showed that the model could replicate the physical phantom tissues with good accuracy. This can be further extended to a non-linear heterogeneous model. The proposed needle-tissue interaction force models can be used to improve realism and performance and to enable future applications of needle simulators in heterogeneous tissue. A needle insertion training simulator was developed from the simulated models using the Phantom Omni, and clinical trials were conducted for face validity and construct validity. The face validity results showed that the degree of realism of the virtual environments and instruments had the lowest overall mean score, while ease of use and training in hand-eye coordination had the highest mean score.
The construct validity results showed that the simulator could successfully differentiate the force and psychomotor signatures of anesthesiologists with less than 5 years of experience from those with more than 5 years. As a performance index for trainees, a novel measure, the Just Controllable Difference (JCD), was proposed, and a preliminary study of the JCD measure was carried out in two experiments with novices. A preliminary study on the use of clinical training simulations, especially the needle insertion procedure, in virtual environments focused on two objectives: first, measuring the force JND for three fingers, and second, comparing these measures in non-immersive virtual reality (NIVR) to those in immersive virtual reality (IVR) through a psychophysical study using the force matching task, the method of constant stimuli, and isometric force probing stimuli. The results showed a better force JND in the IVR than in the NIVR. Also, a simple state observer model was proposed to explain the improvement of the force JND in the IVR. This study would quantitatively reinforce the use of the IVR for the design of various medical simulators.
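As background for the method of constant stimuli mentioned above, a force JND is commonly estimated by fitting a psychometric function to the proportion of "comparison felt stronger" responses across comparison levels. The sketch below is purely illustrative: the stimulus levels, response proportions, and logistic model are made-up stand-ins, not data or methods from this study.

```python
import numpy as np

# Comparison force levels (N) against a 1.0 N reference, and the
# proportion of "stronger" responses at each level (illustrative data).
levels = np.array([0.80, 0.90, 0.95, 1.00, 1.05, 1.10, 1.20])
p = np.array([0.05, 0.20, 0.35, 0.50, 0.65, 0.80, 0.95])

# Logit transform makes the logistic psychometric model linear in the
# stimulus level: logit(p) = (x - mu) / sigma.
logit = np.log(p / (1 - p))
slope, intercept = np.polyfit(levels, logit, 1)
sigma = 1.0 / slope
mu = -intercept * sigma          # point of subjective equality (PSE)

# JND taken as the half-distance between the 25% and 75% points of the
# fitted function, i.e. sigma * ln(3) for the logistic model.
jnd = sigma * np.log(3)
weber_fraction = jnd / 1.0       # relative to the 1.0 N reference
```

With the symmetric toy data above, the PSE lands at the 1.0 N reference and the JND comes out as a small fraction of it, which is the kind of quantity compared between the NIVR and IVR conditions.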

Organizers: Katherine J. Kuchenbecker

Design of functional polymers for biomedical applications

Talk
  • 27 February 2019 • 14:00–15:00
  • Dr. Salvador Borrós Gómez
  • Stuttgart 2P4

Functional polymers can be easily tailored for their interaction with living organisms. In our group, we have worked during the last 15 years on the development of this kind of polymeric material with different functionalities, high biocompatibility, and in different forms. In this talk, we will describe the synthesis of thermosensitive thin films that can be used to prevent biofilm formation on medical devices, the preparation of biodegradable polymers specially designed as vectors for gene transfection, and a new family of zwitterionic polymers that are able to cross the intestinal mucosa for oral delivery applications. The relationship between structure, functionality, and application will be discussed for each example.

Organizers: Metin Sitti

A new path to understanding biological/human vision: theory and experiments

IS Colloquium
  • 11 March 2019 • 14:00–15:00
  • Zhaoping Li
  • MPI-IS lecture hall (N0.002)

Since Hubel and Wiesel's seminal findings in the primary visual cortex (V1) more than 50 years ago, progress in vision science has been very limited along previous frameworks and schools of thought on understanding vision. Have we been asking the right questions? I will show observations motivating a new path. First, a drastic information bottleneck forces the brain to process only a tiny fraction of the massive visual input information; this selection is called attentional selection, and how to select this tiny fraction is critical. Second, a large body of evidence has been accumulating to suggest that the primary visual cortex (V1) is where this selection starts, implying that the visual cortical areas along the visual pathway beyond V1 must be investigated in light of this selection in V1. Placing attentional selection at center stage, a new path to understanding vision is proposed (articulated in my book "Understanding Vision: Theory, Models, and Data", Oxford University Press, 2014). I will show a first example of using this new path, which aims to ask new questions and make fresh progress. I will relate our insights to artificial vision systems, discussing issues like top-down feedback in hierarchical processing, analysis-by-synthesis, and image understanding.

Organizers: Timo Bolkart, Aamir Ahmad

  • Dr. František Mach
  • Stuttgart 2P4

State-of-the-art robotic systems adopting magnetically actuated ferromagnetic bodies, or even whole miniature robots, have recently become a fast-advancing technological field, especially at the nano- and microscale. Mesoscale and, above all, multiscale magnetically guided robotic systems appear to be an advanced field of study, where it is difficult to balance different forces, precision requirements, and energy demands. The major goal of our talk is to discuss the challenges in the field of magnetically guided mesoscale and multiscale actuation, followed by the results of our research on magnetic positioning systems and magnetic soft-robotic grippers.

Organizers: Metin Sitti


Recognizing the Pain Expressions of Horses

Talk
  • 10 December 2018 • 14:00–15:00
  • Prof. Dr. Hedvig Kjellström
  • Aquarium (N3.022)

Recognition of pain in horses and other animals is important because pain is a manifestation of disease and decreases animal welfare. Pain diagnostics for humans typically includes self-evaluation and localization of the pain with the help of standardized forms, and labeling of the pain by a clinical expert using pain scales. However, animals cannot verbalize their pain as humans can, and the use of standardized pain scales is challenged by the fact that animals such as horses and cattle, being prey animals, display subtle and less obvious pain behavior: it is simply beneficial for a prey animal to appear healthy, in order to lower the interest of predators. We work together with veterinarians to develop methods for automatic video-based recognition of pain in horses. These methods are typically trained with video examples of behavioral traits labeled with pain level and pain characteristics. This automated, user-independent system for recognition of pain behavior in horses will be the first of its kind in the world. A successful system might change the concept of how we monitor and care for our animals.


Robot Learning for Advanced Manufacturing – An Overview

Talk
  • 10 December 2018 • 11:00–12:00
  • Dr. Eugen Solowjow
  • MPI-IS Stuttgart, seminar room 2P4

A dominant trend in manufacturing is the move toward small production volumes and high product variability. It is thus anticipated that future manufacturing automation systems will be characterized by a high degree of autonomy and must be able to learn new behaviors without explicit programming. Robot Learning, and more generally Autonomous Manufacturing, is an exciting research field at the intersection of Machine Learning and Automation. The combination of "traditional" control techniques with data-driven algorithms holds the promise of allowing robots to learn new behaviors through experience. This talk introduces selected Siemens research projects in the area of Autonomous Manufacturing.

Organizers: Sebastian Trimpe, Friedrich Solowjow


  • Prof. Holger Stark
  • Stuttgart 2P4

Active motion of biological and artificial microswimmers is relevant in the real world, in microfluidics, and in biological applications, but it also poses fundamental questions in non-equilibrium statistical physics. The mechanisms of single microswimmers, whether designed by nature or in the lab, need to be understood, and detailed modeling of microorganisms helps to explore their complex cell design and behavior. It also motivates biomimetic approaches. The emergent collective motion of microswimmers generates appealing dynamic patterns as a consequence of their non-equilibrium nature.

Organizers: Metin Sitti, Zoey Davidson


  • Umar Iqbal
  • PS Aquarium

In this talk, I will present an overview of my Ph.D. research on articulated human pose estimation from unconstrained images and videos. In the first part of the talk, I will present an approach that jointly models multi-person pose estimation and tracking in a single formulation. The approach represents body joint detections in a video by a spatiotemporal graph and solves an integer linear program to partition the graph into sub-graphs that correspond to plausible body pose trajectories for each person. I will also introduce the PoseTrack dataset and benchmark, which is now the de facto standard for multi-person pose estimation and tracking. In the second half of the talk, I will present a new method for 3D pose estimation from a monocular image through a novel 2.5D pose representation. The 2.5D representation can be reliably estimated from an RGB image. Furthermore, it allows exact reconstruction of the absolute 3D body pose up to a scaling factor, which can be estimated additionally if a prior on body size is given. I will also describe a novel CNN architecture to implicitly learn the heatmaps and depth maps for human body keypoints from a single RGB image.
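To give a feel for why a 2.5D representation (2D keypoints plus root-relative depths) determines the absolute pose up to scale, the toy sketch below recovers the root depth by enforcing a known bone length under a pinhole camera. The intrinsics, joint values, and one-dimensional search are illustrative assumptions, not the talk's actual formulation or implementation.

```python
import numpy as np

def backproject(uv, z, f, c):
    """Pinhole back-projection of pixel coordinates uv at depths z."""
    return np.column_stack(((uv[:, 0] - c[0]) * z / f,
                            (uv[:, 1] - c[1]) * z / f,
                            z))

def recover_root_depth(uv, d_rel, f, c, bone, target_len):
    """Coarse 1-D search for the root depth that gives one chosen bone
    its known metric length; depths are z_root + root-relative d_rel."""
    i, j = bone
    best_z, best_err = None, np.inf
    for z_root in np.linspace(0.5, 10.0, 2000):   # metres
        P = backproject(uv, z_root + d_rel, f, c)
        err = abs(np.linalg.norm(P[i] - P[j]) - target_len)
        if err < best_err:
            best_z, best_err = z_root, err
    return best_z

# Toy scene: a two-joint "limb" projected with focal length 1000 px.
f, c = 1000.0, (320.0, 240.0)
true_root = 3.0
joints3d = np.array([[0.0, 0.0, true_root], [0.3, 0.0, true_root + 0.1]])
uv = np.column_stack((joints3d[:, 0] * f / joints3d[:, 2] + c[0],
                      joints3d[:, 1] * f / joints3d[:, 2] + c[1]))
d_rel = joints3d[:, 2] - true_root
z_est = recover_root_depth(uv, d_rel, f, c, bone=(0, 1),
                           target_len=np.linalg.norm(joints3d[0] - joints3d[1]))
```

Because projected limb length shrinks with depth, the bone-length constraint pins down the root depth; with a body-size prior the scale ambiguity is resolved the same way.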

Organizers: Dimitris Tzionas


  • Prof. Dr. Rahmi Oklu
  • 3P02

Minimally invasive approaches to vascular disease and cancer have revolutionized medicine. I will discuss novel approaches to vascular bleeding, aneurysm treatment and tumor ablation.

Organizers: Metin Sitti


  • Prof. Eric Tytell
  • MPI-IS Stuttgart, Werner-Köster lecture hall

Many fishes swim efficiently over long distances to find food or during migrations. They also have to accelerate rapidly to escape predators. These two behaviors require different body mechanics: for efficient swimming, fish should be very flexible, but for rapid acceleration, they should be stiffer. Here, I will discuss recent experiments that show that they can use their muscles to tune their effective body mechanics. Control strategies inspired by the muscle activity in fishes may help design better soft robotic devices.

Organizers: Ardian Jusufi


  • Prof. Dr. Stefan Roth
  • N0.002

Supervised learning with deep convolutional networks is the workhorse of the majority of computer vision research today. While much progress has already been made by exploiting deep architectures with standard components, enormous datasets, and massive computational power, I will argue that it pays to scrutinize some of the components of modern deep networks. I will begin by looking at the common pooling operation and show how we can replace standard pooling layers with a perceptually motivated alternative, with consistent gains in accuracy. Next, I will show how we can leverage self-similarity, a well-known concept from the study of natural images, to derive non-local layers for various vision tasks that boost discriminative power. Finally, I will present a lightweight approach to obtaining predictive probabilities in deep networks, allowing one to judge the reliability of a prediction.
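As background for the non-local idea mentioned above, a non-local layer lets every spatial position aggregate features from all other positions, weighted by pairwise similarity. The numpy sketch below is a generic toy version of such an operation (dot-product similarity, softmax weights, residual connection); it is an assumption-laden illustration, not the speaker's architecture.

```python
import numpy as np

def nonlocal_block(x):
    """Toy non-local operation on features x of shape (N positions,
    C channels): each output is the input plus a similarity-weighted
    average over all positions (self-similarity aggregation)."""
    sim = x @ x.T                                  # pairwise dot products
    w = np.exp(sim - sim.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)              # row-wise softmax
    return x + w @ x                               # residual connection

# Six positions with four-channel features, e.g. a flattened feature map.
x = np.random.default_rng(0).standard_normal((6, 4))
y = nonlocal_block(x)
```

Unlike a convolution, whose receptive field is local, this operation couples arbitrarily distant positions in one step, which is what makes self-similarity usable as a feature.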

Organizers: Michael Black


A fine-grained perspective onto object interactions

Talk
  • 30 October 2018 • 10:30–11:30
  • Dima Damen
  • N0.002

This talk aims to argue for a fine-grained perspective onto human-object interactions in video sequences. I will present approaches for understanding ‘what’ objects one interacts with during daily activities, ‘when’ we should label the temporal boundaries of interactions, ‘which’ semantic labels one can use to describe such interactions, and ‘who’ is better when contrasting people performing the same interaction. I will detail my group’s latest work on sub-topics related to: (1) assessing action ‘completion’ – when an interaction is attempted but not completed [BMVC 2018], (2) determining skill or expertise from video sequences [CVPR 2018], and (3) finding unequivocal semantic representations for object interactions [ongoing work]. I will also introduce EPIC-KITCHENS 2018, the recently released largest dataset of object interactions in people’s homes, recorded using wearable cameras. The dataset includes 11.5M frames fully annotated with objects and actions, based on unique annotations from the participants narrating their own videos, thus reflecting true intention. Three open challenges are now available on object detection, action recognition, and action anticipation [http://epic-kitchens.github.io].

Organizers: Mohamed Hassan


Artificial Haptic Intelligence for Human-Machine Systems

IS Colloquium
  • 25 October 2018 • 11:00
  • Veronica J. Santos
  • N2.025 at MPI-IS in Tübingen

The functionality of artificial manipulators could be enhanced by artificial “haptic intelligence” that enables the identification of object features via touch for semi-autonomous decision-making and/or display to a human operator. This could be especially useful when complementary sensory modalities, such as vision, are unavailable. I will highlight past and present work to enhance the functionality of artificial hands in human-machine systems. I will describe efforts to develop multimodal tactile sensor skins, and to teach robots how to haptically perceive salient geometric features such as edges and fingertip-sized bumps and pits using machine learning techniques. I will describe the use of reinforcement learning to teach robots goal-based policies for a functional contour-following task: the closure of a ziplock bag. Our Contextual Multi-Armed Bandits approach tightly couples robot actions to the tactile and proprioceptive consequences of the actions, and selects future actions based on prior experiences, the current context, and a functional task goal. Finally, I will describe current efforts to develop real-time capabilities for the perception of tactile directionality, and to develop models for haptically locating objects buried in granular media. Real-time haptic perception and decision-making capabilities could be used to advance semi-autonomous robot systems and reduce the cognitive burden on human teleoperators of devices ranging from wheelchair-mounted robots to explosive ordnance disposal robots.
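To illustrate the kind of contextual multi-armed bandit policy described above, the sketch below implements a generic epsilon-greedy bandit over discrete contexts and actions. The context and action names, the reward model, and the epsilon-greedy strategy are toy assumptions for illustration, not the authors' task or method.

```python
import random

class ContextualBandit:
    """Epsilon-greedy contextual bandit over discrete contexts/actions."""

    def __init__(self, contexts, actions, epsilon=0.1):
        self.epsilon = epsilon
        self.actions = actions
        # Running mean reward and pull count per (context, action) pair.
        self.value = {(c, a): 0.0 for c in contexts for a in actions}
        self.count = {(c, a): 0 for c in contexts for a in actions}

    def select(self, context):
        if random.random() < self.epsilon:
            return random.choice(self.actions)          # explore
        return max(self.actions,
                   key=lambda a: self.value[(context, a)])  # exploit

    def update(self, context, action, reward):
        k = (context, action)
        self.count[k] += 1
        # Incremental running-mean update.
        self.value[k] += (reward - self.value[k]) / self.count[k]

# Toy contour-following flavor: in hypothetical tactile context
# "edge_left" the action "slide_right" is the rewarding one, and
# vice versa. These names are invented for the example.
random.seed(0)
bandit = ContextualBandit(["edge_left", "edge_right"],
                          ["slide_left", "slide_right"], epsilon=0.2)
for _ in range(500):
    ctx = random.choice(["edge_left", "edge_right"])
    act = bandit.select(ctx)
    good = (ctx == "edge_left") == (act == "slide_right")
    bandit.update(ctx, act, 1.0 if good else 0.0)
```

The key point the talk makes is the tight action-consequence coupling: each tactile/proprioceptive outcome immediately updates the value of the action taken in that context, and future selections reflect those prior experiences.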

Organizers: Katherine J. Kuchenbecker, Adam Spiers