Institute Talks

Using RGB-D cameras for scene streaming and navigation in virtual reality

Talk
  • 26 April 2019 • 11:45 - 12:15
  • Marilyn Keller
  • Aquarium

Since the release of the Kinect, RGB-D cameras have been used in several consumer devices, including smartphones. In this talk, I will present two challenging uses of this technology. With multiple RGB-D cameras, it is possible to reconstruct a 3D scene and visualize it from any point of view. In the first part of the talk, I will show how such a scene can be streamed and rendered as a point cloud in a compelling way, and how its appearance can be improved with external cinema cameras. In the second part, I will present my work on using an RGB-D camera to enable real walking in virtual reality by making the user aware of the surrounding obstacles. I will present a pipeline that creates an occupancy map from a point cloud on the fly on a mobile phone used as a virtual reality headset; this occupancy map can then be used to prevent the user from hitting physical obstacles while walking in the virtual scene.
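
The occupancy-map step lends itself to a small illustration. The following is a minimal sketch under my own assumptions (a point cloud in metres with the y-axis pointing up, made-up grid parameters), not the pipeline presented in the talk: obstacle points are projected onto the ground plane, marked in a 2D occupancy grid, and the grid is queried for obstacles near the user.

    import numpy as np

    # Illustrative parameters only; not taken from the talk's pipeline.
    def occupancy_from_points(points, cell=0.05, extent=5.0,
                              min_h=0.2, max_h=2.0):
        """Project an (N, 3) point cloud onto a ground-plane grid and mark
        cells containing points in the height band that can block a walker."""
        n = int(2 * extent / cell)
        grid = np.zeros((n, n), dtype=bool)
        mask = (points[:, 1] > min_h) & (points[:, 1] < max_h)
        xz = points[mask][:, [0, 2]]
        idx = np.floor((xz + extent) / cell).astype(int)
        valid = np.all((idx >= 0) & (idx < n), axis=1)
        grid[idx[valid, 0], idx[valid, 1]] = True
        return grid

    def obstacle_nearby(grid, cell=0.05, radius=0.5):
        """Warn if any occupied cell lies within `radius` metres of the user,
        who is assumed to stand at the grid centre."""
        ii, jj = np.nonzero(grid)
        centre = grid.shape[0] // 2
        return bool(np.any(np.hypot(ii - centre, jj - centre) * cell < radius))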

Organizers: Sergi Pujades

Soft Feel by Soft Robotic Hand: New way of robotic sensing

IS Colloquium
  • 04 October 2018 • 13:30 - 14:30
  • Prof. Koh Hosoda
  • MPI-IS Stuttgart, Werner-Köster lecture hall

This lecture will show some interesting examples of how a soft body and soft skin can change your idea of robotic sensing. Soft robotics is not only about compliance and safety; soft structure changes the way a robot can categorize objects through dynamic exploration and enables it to learn a sense of slip. Soft robotics will entirely change how you think about designing sensing and open up a new way to understand human sensing.

Organizers: Ardian Jusufi


  • Prof. Peter Pott
  • MPI-IS Stuttgart, Heisenbergstr. 3, Room 2P4

The FLEXMIN haptic robotic system is a single-port tele-manipulator for robotic surgery in the small pelvis. Using a transanal approach, it allows bi-manual tasks such as grasping, monopolar cutting, and suturing within a footprint of Ø 160 mm × 240 mm. Forces of up to 5 N can be applied easily in all directions. In addition to providing low-latency, highly dynamic control over its movements, high-fidelity haptic feedback was realised using built-in force sensors, lightweight and friction-optimized kinematics, and dedicated parallel-kinematics input devices. After a brief description of the system and some of its key aspects, first evaluation results will be presented. The second half of the talk will introduce the Institute of Medical Device Technology. The institute was founded in July 2017 and has since started a number of projects in the fields of biomedical actuation, medical systems and robotics, and advanced light microscopy. To illustrate this, a few snapshots of ongoing work will be presented that serve as condensation nuclei for the future.

Organizers: Katherine J. Kuchenbecker


Private Federated Learning

Talk
  • 01 October 2018 • 10:00 - 10:45
  • Mona Buisson-Fenet
  • MPI-IS Stuttgart, seminar room 2P4

With the expanding collection of data, organisations are becoming increasingly aware of the potential gain of combining their data. Analytic and predictive tasks, such as classification, perform more accurately if more features or more data records are available, which is why data providers have an interest in joining their datasets and learning from the resulting database. However, this rising interest in federated learning also comes with increasing concern about security and privacy, both from the consumers whose data is used and from the data providers who are liable for protecting it. Securely learning a classifier over joint datasets is a first milestone for private multi-party machine learning, and though some literature exists on the topic, systems providing a better security-utility trade-off and more theoretical guarantees are still needed. An ongoing issue is how to deal with the loss gradients, which often need to be revealed in the clear during training. We show that this constitutes an information leak and present an alternative optimisation strategy that provides additional security guarantees while limiting the decrease in performance of the obtained classifier. Combining an encryption-based and a noise-based approach, the proposed method enables several parties to jointly train a binary classifier over vertically partitioned datasets while keeping their data private.
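
As a rough illustration of the noise-based ingredient only (a minimal sketch with assumed clipping bound and noise scale, not the method described above), a party can clip and perturb its loss gradient before revealing it, in the spirit of differentially private gradient descent:

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative values; not the parameters or protocol from the talk.
    def noisy_gradient(grad, clip_norm=1.0, noise_std=0.1):
        """Clip the gradient to a maximum L2 norm and add Gaussian noise
        before it is shared with the other parties."""
        scale = min(1.0, clip_norm / (np.linalg.norm(grad) + 1e-12))
        return grad * scale + rng.normal(0.0, noise_std * clip_norm, grad.shape)

    def local_step(w, X, y, lr=0.1):
        """One logistic-regression step on a party's local data; only the
        perturbed gradient would ever leave the party."""
        p = 1.0 / (1.0 + np.exp(-X @ w))
        return w - lr * noisy_gradient(X.T @ (p - y) / len(y))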

Organizers: Sebastian Trimpe


  • Dr. Aude Bolopion and Dr. Mich
  • 2P4

This talk presents an overview of recent activities of FEMTO-ST institute in the field of micro-nanomanipulation fo both micro nano assembly and biomedical applications. Microrobotic systems are currently limited by the number of degree of freedom addressed and also are very limited by their throughput. Two ways can be considered to improve both the velocity and the degrees of freedom: non-contact manipulation and dexterous micromanipulation. Indeed in both ways movement including rotation and translation are done locally and are only limited by the micro-nano-objects inertia which is very low. It consequently enable to generate 6DOF and to induce high dynamics. The talk presents recent works which have shown that controlled trajectories in non contact manipulation enable to manipulate micro-objects in high speed. Dexterous manipulation on a 4 fingers microtweezers have been also experimented and show that in-hand micromanipulations are possible in micro-nanoscale based on original finger trajectory planning. These two approaches have been applied to perform micro-nano-assemby and biomedical operations


Learning to align images and surfaces

Talk
  • 24 September 2018 • 11:00 - 12:00
  • Iasonas Kokkinos
  • Ground Floor Seminar Room (N0.002)

In this talk I will be presenting recent work on combining ideas from deformable models with deep learning. I will start by describing DenseReg and DensePose, two recently introduced systems for establishing dense correspondences between 2D images and 3D surface models "in the wild", namely in the presence of background, occlusions, and multiple objects. For DensePose in particular we introduce DensePose-COCO, a large-scale dataset for dense pose estimation, and DensePose-RCNN, a system which operates at multiple frames per second on a single GPU while handling multiple humans simultaneously. I will then present Deforming AutoEncoders, a method for unsupervised dense correspondence estimation. We show that we can disentangle deformations from appearance variation in an entirely unsupervised manner, and also provide promising results for a more thorough disentanglement of images into deformations, albedo and shading. Time permitting, we will discuss a parallel line of work aiming at combining grouping with deep learning, and see how both grouping and correspondence can be understood as establishing associations between neurons.
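
The deforming-autoencoder idea can be pictured with a toy model (a sketch of the general mechanism with made-up layer sizes, not the architecture presented in the talk): the latent code is split into an appearance part and a deformation part, one head decodes a canonical-frame image, the other a dense warp field, and the output is the appearance resampled through the warp, which pushes deformation and appearance into separate codes.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DeformingAutoEncoder(nn.Module):
        """Toy deforming autoencoder; layer sizes are illustrative only."""

        def __init__(self, size=64, latent=64):
            super().__init__()
            self.size, self.half = size, latent // 2
            self.encoder = nn.Sequential(
                nn.Flatten(), nn.Linear(size * size, 256), nn.ReLU(),
                nn.Linear(256, latent))
            self.appearance = nn.Linear(self.half, size * size)            # canonical image
            self.deformation = nn.Linear(latent - self.half, size * size * 2)  # warp field

        def forward(self, x):                      # x: (B, 1, size, size) in [0, 1]
            b = x.shape[0]
            z = self.encoder(x)
            tex = torch.sigmoid(self.appearance(z[:, :self.half]))
            tex = tex.view(b, 1, self.size, self.size)
            # small offsets added to the identity sampling grid
            offsets = 0.1 * torch.tanh(self.deformation(z[:, self.half:]))
            offsets = offsets.view(b, self.size, self.size, 2)
            lin = torch.linspace(-1, 1, self.size)
            gy, gx = torch.meshgrid(lin, lin, indexing="ij")
            grid = torch.stack((gx, gy), dim=-1).expand(b, -1, -1, -1)
            return F.grid_sample(tex, grid + offsets, align_corners=True)

A plain reconstruction loss, e.g. F.mse_loss(model(x), x), is what drives such a model.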

Organizers: Vassilis Choutas


Visual Reconstruction and Image-Based Rendering

Talk
  • 07 September 2018 • 11:00 - 12:00
  • Richard Szeliski
  • Ground Floor Seminar Room (N0.002)

The reconstruction of 3D scenes and their appearance from imagery is one of the longest-standing problems in computer vision. Originally developed to support robotics and artificial intelligence applications, it has found some of its most widespread use in support of interactive 3D scene visualization. One of the keys to this success has been the melding of 3D geometric and photometric reconstruction with a heavy re-use of the original imagery, which produces more realistic rendering than a pure 3D model-driven approach. In this talk, I give a retrospective of two decades of research in this area, touching on topics such as sparse and dense 3D reconstruction, the fundamental concepts in image-based rendering and computational photography, applications to virtual reality, as well as ongoing research in the areas of layered decompositions and 3D-enabled video stabilization.

Organizers: Mohamed Hassan


Imitation of Human Motion Planning

Talk
  • 27 July 2018 • 12:00 - 12:45
  • Jim Mainprice
  • N3.022 (Aquarium)

Humans act upon their environment through motion, so the ability to plan their movements is an essential component of their autonomy. In recent decades, motion planning has been widely studied in robotics and computer graphics. Nevertheless, robots still fail to achieve human reactivity and coordination. The need for more efficient motion planning algorithms has been present throughout my own research on "human-aware" motion planning, which aims to take the surrounding humans explicitly into account. I believe imitation learning is the key to this particular problem, as it allows learning both new motion skills and predictive models, two capabilities that are at the heart of "human-aware" robots, while simultaneously holding the promise of faster and more reactive motion generation. In this talk I will present my work in this direction.


New Ideas for Stereo Matching of Untextured Scenes

Talk
  • 24 July 2018 • 14:00 - 15:00
  • Daniel Scharstein
  • Ground Floor Seminar Room (N0.002)

Two talks for the price of one! I will present my recent work on the challenging problem of stereo matching of scenes with little or no surface texture, attacking the problem from two very different angles. First, I will discuss how surface orientation priors can be added to the popular semi-global matching (SGM) algorithm, which significantly reduces errors on slanted weakly-textured surfaces. The orientation priors serve as a soft constraint during matching and can be derived in a variety of ways, including from low-resolution matching results and from monocular analysis and Manhattan-world assumptions. Second, we will examine the pathological case of Mondrian Stereo -- synthetic scenes consisting solely of solid-colored planar regions, resembling paintings by Piet Mondrian. I will discuss assumptions that allow disambiguating such scenes, present a novel stereo algorithm employing symbolic reasoning about matched edge segments, and discuss how similar ideas could be utilized in robust real-world stereo algorithms for untextured environments.
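
The role of the orientation prior can be pictured in a simplified form (a toy rendition with assumed weights, not the SGM variant presented in the talk): a prior disparity map derived from low-resolution matching or monocular analysis contributes a soft per-pixel penalty to the cost volume before the usual aggregation and winner-takes-all steps.

    import numpy as np

    # Simplified illustration; weights and shapes are assumptions.
    def add_orientation_prior(cost_volume, prior_disparity, weight=0.1):
        """Add a soft penalty |d - d_prior(x, y)| to a stereo cost volume of
        shape (H, W, D). The prior disparity map encodes slanted-surface
        hypotheses, e.g. from low-resolution matching or monocular cues."""
        h, w, dmax = cost_volume.shape
        d = np.arange(dmax)[None, None, :]
        return cost_volume + weight * np.abs(d - prior_disparity[:, :, None])

    def winner_takes_all(cost_volume):
        """Pick the lowest-cost disparity per pixel (SGM path aggregation
        would normally run between the prior step and this one)."""
        return np.argmin(cost_volume, axis=2)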

Organizers: Anurag Ranjan


DensePose: Dense Human Pose Estimation In The Wild

Talk
  • 16 July 2018 • 11:00 - 12:00
  • Rıza Alp Güler
  • N3.022 (Aquarium)

Non-planar object deformations result in challenging but informative signal variations. We aim to recover this information in a feedforward manner by employing discriminatively trained convolutional networks. We formulate the task as a regression problem and train our networks by leveraging manually annotated correspondences between images and 3D surfaces. In this talk, the focus will be on our recent work "DensePose", where we form the "COCO-DensePose" dataset by introducing an efficient annotation pipeline to collect correspondences between 50K persons appearing in the COCO dataset and the SMPL 3D deformable human-body model. We use our dataset to train CNN-based systems that deliver dense correspondences "in the wild", namely in the presence of background, occlusions, multiple objects and scale variations. We experiment with fully convolutional networks and a region-based DensePose-RCNN model and observe the superiority of the latter; we further improve accuracy through cascading, obtaining a system that delivers highly accurate results in real time (http://densepose.org).
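
The regression formulation can be summarised with a toy loss (assumed tensor shapes and names, not the DensePose-RCNN implementation): each annotated pixel contributes a part-classification term and a smooth-L1 term on the regressed (u, v) surface coordinates, while unannotated pixels are masked out.

    import torch
    import torch.nn.functional as F

    # Toy loss with assumed shapes; not the published implementation.
    def dense_correspondence_loss(part_logits, uv_pred, part_gt, uv_gt, mask):
        """part_logits: (B, P, H, W) part classification scores
        uv_pred:     (B, 2, H, W) regressed surface coordinates in [0, 1]
        part_gt:     (B, H, W)    annotated part indices
        uv_gt:       (B, 2, H, W) annotated surface coordinates
        mask:        (B, H, W)    True where a correspondence is annotated"""
        cls = F.cross_entropy(part_logits, part_gt, reduction="none")
        reg = F.smooth_l1_loss(uv_pred, uv_gt, reduction="none").sum(dim=1)
        m = mask.float()
        return ((cls + reg) * m).sum() / m.sum().clamp(min=1.0)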

Organizers: Georgios Pavlakos


Learning Control for Intelligent Physical Systems

Talk
  • 13 July 2018 • 14:15 - 14:45
  • Dr. Sebastian Trimpe
  • MPI-IS, Stuttgart, Lecture Hall 2 D5

Modern technology allows us to collect, process, and share more data than ever before. This data revolution opens up new ways to design control and learning algorithms, which will form the algorithmic foundation for future intelligent systems that act autonomously in the physical world. Starting from a discussion of the special challenges of combining machine learning and control, I will present some of our recent research in this exciting area. Using the example of the Apollo robot learning to balance a stick in its hand, I will explain how intelligent agents can learn new behavior from just a few experimental trials. I will also discuss the need for theoretical guarantees in learning-based control, and how we can obtain them by combining learning and control theory.
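
As a generic illustration of combining learning with control-theoretic machinery (a textbook sketch with made-up dimensions, not the methods developed in the talk), one can fit a linear dynamics model from a few logged trials and compute a stabilising LQR feedback gain from it:

    import numpy as np
    from scipy.linalg import solve_discrete_are

    # Generic learning-plus-control sketch; not the talk's algorithms.
    def fit_linear_model(states, actions, next_states):
        """Least-squares fit of x_{t+1} ~ A x_t + B u_t from logged trials."""
        Z = np.hstack([states, actions])                  # (T, n + m)
        theta, *_ = np.linalg.lstsq(Z, next_states, rcond=None)
        n = states.shape[1]
        return theta[:n].T, theta[n:].T                   # A (n x n), B (n x m)

    def lqr_gain(A, B, Q, R):
        """Infinite-horizon discrete LQR gain K for the feedback u = -K x."""
        P = solve_discrete_are(A, B, Q, R)
        return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

Given weighting matrices Q and R, the feedback u = -K x stabilises the fitted model.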

Organizers: Katherine J. Kuchenbecker, Ildikó Papp-Wiedmann, Matthias Tröndle, Claudia Daefler