Institute Talks

Artificial haptic intelligence for human-machine systems

IS Colloquium
  • 24 October 2018 • 11:00–12:00
  • Veronica J. Santos
  • 5H7 at MPI-IS in Stuttgart

The functionality of artificial manipulators could be enhanced by artificial “haptic intelligence” that enables the identification of object features via touch for semi-autonomous decision-making and/or display to a human operator. This could be especially useful when complementary sensory modalities, such as vision, are unavailable. I will highlight past and present work to enhance the functionality of artificial hands in human-machine systems. I will describe efforts to develop multimodal tactile sensor skins, and to teach robots how to haptically perceive salient geometric features such as edges and fingertip-sized bumps and pits using machine learning techniques. I will describe the use of reinforcement learning to teach robots goal-based policies for a functional contour-following task: the closure of a ziplock bag. Our Contextual Multi-Armed Bandits approach tightly couples robot actions to the tactile and proprioceptive consequences of the actions, and selects future actions based on prior experiences, the current context, and a functional task goal. Finally, I will describe current efforts to develop real-time capabilities for the perception of tactile directionality, and to develop models for haptically locating objects buried in granular media. Real-time haptic perception and decision-making capabilities could be used to advance semi-autonomous robot systems and reduce the cognitive burden on human teleoperators of devices ranging from wheelchair-mounted robots to explosive ordnance disposal robots.
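As a rough illustration of the contextual multi-armed bandit idea sketched in the abstract above, the snippet below implements a generic LinUCB-style contextual bandit in Python. The discretized action set, feature dimension, and reward signal are illustrative assumptions and not the method used in the speaker's work.

```python
import numpy as np

# Hypothetical sketch of a contextual multi-armed bandit (LinUCB-style) that
# selects a robot motion primitive from tactile/proprioceptive context features.
# Action set, feature dimension, and reward definition are assumptions.

class LinUCB:
    def __init__(self, n_actions, dim, alpha=1.0):
        self.alpha = alpha
        self.A = [np.eye(dim) for _ in range(n_actions)]    # per-action covariance
        self.b = [np.zeros(dim) for _ in range(n_actions)]  # per-action reward sums

    def select(self, context):
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b                               # estimated reward weights
            ucb = theta @ context + self.alpha * np.sqrt(context @ A_inv @ context)
            scores.append(ucb)
        return int(np.argmax(scores))                       # optimistic action choice

    def update(self, action, context, reward):
        self.A[action] += np.outer(context, context)
        self.b[action] += reward * context

# Usage: context = tactile + proprioceptive features, reward = progress toward bag closure.
bandit = LinUCB(n_actions=4, dim=8)
ctx = np.random.rand(8)
a = bandit.select(ctx)
bandit.update(a, ctx, reward=0.3)
```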

Organizers: Katherine Kuchenbecker

Artificial haptic intelligence for human-machine systems

IS Colloquium
  • 25 October 2018 • 11:00
  • Veronica J. Santos
  • N2.025 at MPI-IS in Tübingen

The functionality of artificial manipulators could be enhanced by artificial “haptic intelligence” that enables the identification of object features via touch for semi-autonomous decision-making and/or display to a human operator. This could be especially useful when complementary sensory modalities, such as vision, are unavailable. I will highlight past and present work to enhance the functionality of artificial hands in human-machine systems. I will describe efforts to develop multimodal tactile sensor skins, and to teach robots how to haptically perceive salient geometric features such as edges and fingertip-sized bumps and pits using machine learning techniques. I will describe the use of reinforcement learning to teach robots goal-based policies for a functional contour-following task: the closure of a ziplock bag. Our Contextual Multi-Armed Bandits approach tightly couples robot actions to the tactile and proprioceptive consequences of the actions, and selects future actions based on prior experiences, the current context, and a functional task goal. Finally, I will describe current efforts to develop real-time capabilities for the perception of tactile directionality, and to develop models for haptically locating objects buried in granular media. Real-time haptic perception and decision-making capabilities could be used to advance semi-autonomous robot systems and reduce the cognitive burden on human teleoperators of devices ranging from wheelchair-mounted robots to explosive ordnance disposal robots.

Organizers: Katherine Kuchenbecker, Adam Spiers

A fine-grained perspective onto object interactions

Talk
  • 30 October 2018 • 10:30–11:30
  • Dima Damen
  • N3.022 (Aquarium)

This talk argues for a fine-grained perspective on human-object interactions in video sequences. I will present approaches for understanding ‘what’ objects one interacts with during daily activities, ‘when’ we should label the temporal boundaries of interactions, ‘which’ semantic labels one can use to describe such interactions, and ‘who’ performs better when contrasting people carrying out the same interaction. I will detail my group’s latest work on sub-topics related to: (1) assessing action ‘completion’ – when an interaction is attempted but not completed [BMVC 2018], (2) determining skill or expertise from video sequences [CVPR 2018], and (3) finding unequivocal semantic representations for object interactions [ongoing work]. I will also introduce EPIC-KITCHENS 2018, the largest dataset of object interactions in people’s homes, recently released and recorded using wearable cameras. The dataset includes 11.5M frames fully annotated with objects and actions, based on unique annotations from the participants narrating their own videos, thus reflecting true intention. Three open challenges are now available on object detection, action recognition, and action anticipation [http://epic-kitchens.github.io].

Organizers: Mohamed Hassan

TBA

IS Colloquium
  • 28 January 2019 • 15:00–16:00
  • Florian Marquardt

Organizers: Matthias Bauer

  • Dr. Hadi Eghlidi
  • MPI-IS Stuttgart, Room 5H7

Investigation and control of biological and synthetic nanoscopic species in liquids, at the ultimate resolution of a single entity, are important in diverse fields such as biology, medicine, physics, chemistry, and the emerging field of nanorobotics. Progress made to date on trapping and/or manipulating nanoscopic objects includes methods that use permanently imposed force fields of various kinds, such as optical, electrical, and magnetic forces, to counteract the objects’ inherent Brownian motion.

Organizers: Peer Fischer Ardian Jusufi


  • Wenzhen Yuan
  • MPI-IS Stuttgart, Heisenbergstr. 3, Room 2P4

Why can’t today’s robots act intelligently in real-world environments? A major challenge lies in the lack of adequate tactile sensing technologies. Robots need tactile sensing to understand the physical environment and to detect contact states during manipulation. Progress requires advances not only in sensing hardware but also in the software that can exploit the tactile signals. We developed a high-resolution tactile sensor, GelSight, which measures the geometry and traction field of the contact surface. To interpret the high-resolution tactile signal, we use both traditional statistical models and deep neural networks. I will describe my research on both exploration and manipulation. For exploration, I use active touch to estimate the physical properties of objects. This work has included learning the hardness of artificial objects, as well as estimating the general properties of natural objects via autonomous tactile exploration. For manipulation, I study the robot’s ability to detect slip or incipient slip with tactile sensing during grasping. The research helps robots to better understand and flexibly interact with the physical world.
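As a loose illustration of mapping a high-resolution tactile image to a physical property, the sketch below regresses a scalar hardness value with a small convolutional network in PyTorch. The architecture, input size, and training target are assumptions for illustration, not the GelSight pipeline described in the talk.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: regress a scalar hardness estimate from a GelSight-like
# tactile image. Input size, architecture, and target are illustrative assumptions.

class TactileHardnessNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)      # scalar hardness prediction

    def forward(self, x):
        z = self.features(x).flatten(1)
        return self.head(z)

# Usage: a batch of 3-channel tactile images (e.g., 128x128 pixels).
model = TactileHardnessNet()
tactile = torch.rand(4, 3, 128, 128)
hardness = model(tactile)                 # shape: (4, 1)
```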

Organizers: Katherine Kuchenbecker


Learning dynamical systems using SMC

IS Colloquium
  • 28 May 2018 • 11:15 12:15
  • Thomas Schön
  • MPI-IS lecture hall (N0.002)

Sequential Monte Carlo (SMC) methods (including particle filters and smoothers) allow us to compute probabilistic representations of the unknown objects in models used to represent, for example, nonlinear dynamical systems. This talk has three connected parts: (1) a (hopefully pedagogical) introduction to probabilistic modelling of dynamical systems and an explanation of the SMC method; (2) when learning unknown parameters appearing in nonlinear state-space models using maximum likelihood, it is natural to make use of SMC to compute unbiased estimates of the intractable likelihood; the challenge is that the resulting optimization problem is stochastic, which recently inspired us to construct a new solution to this problem; and (3) a challenge with the above (and in fact with most uses of SMC) is that it all quickly becomes very technical. This is indeed the key challenge in spreading the use of SMC methods to a wider group of users. At the same time, there are many researchers who would benefit greatly from having access to these methods in their daily work, and for those of us already working with them it is essential to reduce the amount of time spent on new problems. We believe the solution to this can be provided by probabilistic programming. We are currently developing a new probabilistic programming language that we call Birch. A pre-release is available from birch-lang.org/. It allows users to apply SMC methods without having to implement the algorithms themselves.
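For readers unfamiliar with SMC, the sketch below implements a minimal bootstrap particle filter for a standard scalar nonlinear state-space model, including the running log of the (unbiased) SMC likelihood estimate mentioned above. The model, noise levels, and particle count are textbook-style illustrations, not the settings from the talk.

```python
import numpy as np

# Minimal bootstrap particle filter for a standard scalar nonlinear state-space model:
#   x_t = 0.5*x_{t-1} + 25*x_{t-1}/(1 + x_{t-1}^2) + v_t,  v_t ~ N(0, 1)
#   y_t = 0.05*x_t^2 + e_t,                                e_t ~ N(0, 1)
# The model, noise levels, and particle count are illustrative assumptions.

def particle_filter(y, n_particles=500, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n_particles)              # initial particle cloud
    log_lik = 0.0                                      # log of the SMC likelihood estimate
    means = np.empty(len(y))
    for t, y_t in enumerate(y):
        # Propagate particles through the state dynamics (bootstrap proposal).
        x = 0.5 * x + 25 * x / (1 + x**2) + rng.normal(0.0, 1.0, n_particles)
        # Weight particles by the observation likelihood N(y_t; 0.05*x^2, 1).
        log_w = -0.5 * np.log(2 * np.pi) - 0.5 * (y_t - 0.05 * x**2) ** 2
        shifted = np.exp(log_w - log_w.max())
        log_lik += log_w.max() + np.log(shifted.mean())  # per-step likelihood estimate
        w = shifted / shifted.sum()
        means[t] = np.sum(w * x)                       # filtered state estimate
        # Multinomial resampling.
        x = x[rng.choice(n_particles, size=n_particles, p=w)]
    return means, log_lik

# Usage: simulate observations from the same model and run the filter.
rng = np.random.default_rng(1)
x_true, ys = 0.0, []
for _ in range(50):
    x_true = 0.5 * x_true + 25 * x_true / (1 + x_true**2) + rng.normal()
    ys.append(0.05 * x_true**2 + rng.normal())
filtered_means, loglik = particle_filter(np.array(ys))
```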

Organizers: Philipp Hennig


Making Haptics and its Design Accessible

IS Colloquium
  • 28 May 2018 • 11:00 12:00
  • Karon MacLean
  • MPI-IS Stuttgart, Heisenbergstr. 3, Room 2P4

Today’s advances in tactile sensing and wearable, IoT, and context-aware computing are spurring new ideas about how to configure touch-centered interactions in terms of roles and utility, which in turn expose new technical and social design questions. But while haptic actuation, sensing, and control are improving, incorporating them into a real-world design process is challenging and poses a major obstacle to adoption into everyday technology. Some classes of haptic devices, e.g., grounded force feedback, remain expensive and limited in range. I’ll describe some recent highlights of an ongoing effort to understand how to support haptic designers and end-users. These include a wealth of online experimental design tools, DIY open-source hardware, and accessible means of creating, for example, expressive physical robot motions and evolving physically sensed expressive tactile languages. Elsewhere, we are establishing the value of haptic force feedback in embodied learning environments to help kids understand physics and math concepts. This has inspired the invention of a low-cost, handheld, large-motion force feedback device that can be used in online environments or collaborative scenarios and could be suitable for K-12 school contexts; this is ongoing research with innovative education and technological elements. All our work is available online, where possible as web tools, and we plan to push our research into a broader openhaptics effort.

Organizers: Katherine Kuchenbecker


Digital Humans At Disney Research

IS Colloquium
  • 25 May 2018 • 11:00 12:00
  • Thabo Beeler
  • MPI-IS lecture hall (N0.002)

Disney Research has been actively pushing the state of the art in digitizing humans over the past decade, impacting both academia and industry. In this talk I will give an overview of a selected few projects in this area, from research into production. I will be talking about photogrammetric shape acquisition and dense performance capture for faces, eye and teeth scanning and parameterization, as well as physically based capture and modelling for hair and volumetric tissues.

Organizers: Timo Bolkart


  • Emily BJ Coffey
  • MPI-IS lecture hall (N0.002)

In this talk I will describe the main types of research questions and neuroimaging tools used in my work in human cognitive neuroscience (with foci in audition and sleep), some of the existing approaches used to analyze our data, and their limitations. I will then discuss the main practical obstacles to applying machine learning methods in our field. Several of my ongoing and planned projects include research questions that could be addressed and perhaps considerably extended using machine learning approaches; I will describe some specific datasets and problems, with the goal of exploring ideas and potential opportunities for collaboration.

Organizers: Mara Cascianelli


  • Dr. Islam S. M. Khalil
  • MPI-IS Stuttgart, Room 2P4

Mechanical removal of blood clots is a promising approach towards the treatment of vascular diseases caused by pathological clot formation in the circulatory system. These clots can form and travel to deep-seated regions of the circulatory system and cause significant problems as blood flow past the clot is obstructed. A microscopically small helical microrobot offers great promise for the minimally invasive removal of these clots. Such helical microrobots are powered and controlled remotely using externally applied magnetic fields for motion in two- and three-dimensional spaces. This talk will describe the removal of blood clots in vitro using a helical robot under ultrasound guidance. The talk will briefly introduce the interactions between the helical microrobot and the fibrin network of the blood clot during its removal. It will also introduce the challenges unique to medical imaging at the micro-scale, followed by the concepts and theory of closed-loop motion control using ultrasound feedback. It will then cover the latest experimental results for helical and flagellated microrobots and their biomedical and nanotechnology applications.
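As a rough illustration of closed-loop motion control from imaging feedback, the sketch below steers a microrobot toward a target with a simple proportional heading controller applied to per-frame position estimates. The localization interface, planar setup, and gains are assumptions for illustration, not the controller presented in the talk.

```python
import numpy as np

# Hypothetical sketch of steering a helical microrobot from ultrasound position
# feedback: one control step per image frame. Localization interface, planar
# setup, and gains are illustrative assumptions.

def heading_command(position, target, k_p=1.0):
    """Proportional controller: point the propulsion axis toward the target."""
    error = target - position
    distance = float(np.linalg.norm(error))
    direction = error / distance if distance > 1e-6 else np.zeros_like(error)
    return direction, k_p * distance               # desired heading and speed command

# One control step (positions in mm, assumed to come from ultrasound tracking).
position = np.array([1.2, 0.4])                    # microrobot position in the image
target = np.array([0.0, 0.0])                      # blood-clot location
direction, speed = heading_command(position, target)
print("steer along", direction, "at commanded speed", speed)
```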

Organizers: Metin Sitti


  • Daniel Renjewski
  • MPI-IS Stuttgart, Room 2P4

Daniel Renjewski presents research on bipedal gait mechanisms: ‘Passive mechanisms for increased power and efficiency in bipedal gait’.


  • Dr. Yiğit Mengüç
  • Room 3P02, MPI-IS Stuttgart

Incredible biological capabilities have emerged through evolution. Of special note is the material intelligence that defines the bodies of living things, blurring the line between brain and body. Material robotics research takes the approach of imbuing power, control, sensing, and actuation into all aspects of a (primarily soft) robot body. In this talk, the material robotics research currently underway in the mLab at Oregon State University will be presented. Soft active materials designed and researched in the mLab include liquid metal, biodegradable elastomers, and electroactive fluids. Bioinspired mechanisms include octopus-inspired soft muscles, gecko-inspired adhesives, and snake-like locomotors. Such capabilities, however, introduce new fundamental challenges in making materially enabled robots. To address these limitations, the mLab is also innovating in techniques to rapidly and scalably manufacture soft materials. Though significant challenges remain to be solved, the development of such soft and materially enabled components promises to bring robots more and more into our daily lives.

Organizers: Metin Sitti


  • JP Lewis
  • PS Aquarium, 3rd floor, north, MPI-IS

The definition of art has been debated for more than 1000 years, and continues to be a puzzle. While scientific investigations offer hope of resolving this puzzle, machine learning classifiers that discriminate art from non-art images generally do not provide an explicit definition, and brain imaging and psychological theories are at present too coarse to provide a formal characterization. In this work, rather than approaching the problem using a machine learning approach trained on existing artworks, we hypothesize that art can be defined in terms of preexisting properties of the visual cortex. Specifically, we propose that a broad subset of visual art can be defined as patterns that are exciting to a visual brain. Resting on the finding that artificial neural networks trained on visual tasks can provide predictive models of processing in the visual cortex, our definition is operationalized by using a trained deep net as a surrogate “visual brain”, where “exciting” is defined as the activation energy of particular layers of this net. We find that this definition easily discriminates a variety of art from non-art, and further provides a ranking of art genres that is consistent with our subjective notion of ‘visually exciting’. By applying a deep net visualization technique, we can also validate the definition by generating example images that would be classified as art. The images synthesized under our definition resemble visually exciting art such as Op Art and other human-created artistic patterns.
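As a rough illustration of the operational definition described above, the sketch below scores an image by the mean squared activation of one intermediate layer of a pretrained network treated as a surrogate “visual brain”. The choice of network (VGG-16), layer index, and energy measure are assumptions, not necessarily those used in the work.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Hypothetical sketch: score an image by the activation energy of one intermediate
# layer of a pretrained network used as a surrogate "visual brain". The network,
# layer index, and energy measure are illustrative assumptions.

net = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def activation_energy(image_path, layer_index=16):
    """Mean squared activation of one convolutional layer for a single image."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        for i, module in enumerate(net):
            x = module(x)
            if i == layer_index:
                return float((x ** 2).mean())

# Usage (requires an image file): higher scores would be read as more "visually exciting".
# score = activation_energy("op_art_example.png")
```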

Organizers: Michael Black