Institute Talks

Generating Faces & Heads: Texture, Shape and Beyond.

Talk
  • 17 December 2018 • 11:00 - 12:00
  • Stefanos Zafeiriou
  • PS Aquarium

In the past few years, with the advent of Deep Convolutional Neural Networks (DCNNs) and the availability of large amounts of visual data, it has been shown that excellent results can be achieved in very challenging tasks, such as visual object recognition, detection, tracking, etc. Nevertheless, in certain tasks such as fine-grained object recognition (e.g., face recognition), it is very difficult to collect the amount of data needed. In this talk, I will show how, using DCNNs, we can generate highly realistic faces and heads and use them for training algorithms such as face and facial expression recognition. Next, I will reverse the problem and demonstrate how a trained, very powerful face recognition network can be used to perform very accurate 3D shape and texture reconstruction of faces from a single image. Finally, I will demonstrate how to create very lightweight networks for representing 3D face texture and shape structure by capitalising on intrinsic mesh convolutions.

Organizers: Dimitris Tzionas

Deep learning on 3D face reconstruction, modelling and applications

Talk
  • 19 December 2018 • 11:00 - 12:00
  • Yao Feng
  • PS Aquarium

In this talk, I will present my understanding of 3D face reconstruction, modelling and applications from a deep learning perspective. In the first part of my talk, I will discuss the relationship between representations (point clouds, meshes, etc.) and network layers (CNN, GCN, etc.) in the face reconstruction task, then present my ECCV work PRN, which proposed a new representation that helps achieve state-of-the-art performance on face reconstruction and dense alignment tasks. I will also introduce my open-source project face3d, which provides examples for generating different 3D face representations. In the second part of the talk, I will discuss some publications on integrating 3D techniques into deep networks, then introduce my upcoming work implementing this. In the third part, I will present how related tasks can promote each other in deep learning, including face recognition for the face reconstruction task and face reconstruction for the face anti-spoofing task. Finally, building on these three parts, I will present my plans for 3D face modelling and applications.

Organizers: Timo Bolkart

Mind Games

IS Colloquium
  • 21 December 2018 • 11:00 - 12:00
  • Peter Dayan
  • IS Lecture Hall

Much existing work in reinforcement learning involves environments that are either intentionally neutral, lacking a role for cooperation and competition, or intentionally simple, where agents need imagine nothing more than that they are playing versions of themselves. Richer game-theoretic notions become important as these constraints are relaxed. For humans, this encompasses issues that concern utility, such as envy and guilt, and that concern inference, such as recursive modeling of other players. I will discuss studies treating a paradigmatic game of trust as an interactive partially observable Markov decision process, and will illustrate the solution concepts with evidence from interactions between various groups of subjects, including those diagnosed with borderline and anti-social personality disorders.

TBA

IS Colloquium
  • 28 January 2019 • 11:15 - 12:15
  • Florian Marquardt

Organizers: Matthias Bauer

  • Mariacarla Memeo
  • MPI-IS Stuttgart, Heisenbergstr. 3, Room 2P4

The increasing availability of online resources and the widespread practice of storing data over the internet raise the problem of their accessibility for visually impaired people. A translation from the visual domain to the available modalities is therefore necessary to study whether this access is possible at all. However, the translation of information from vision to touch is necessarily impaired due to the superiority of vision during the acquisition process. Yet compromises exist, as visual information can be simplified and sketched. A picture can become a map. An object can become a geometrical shape. Under some circumstances, and with a reasonable loss of generality, touch can substitute for vision. In particular, when touch substitutes for vision, data can be differentiated by adding a further dimension to the tactile feedback, i.e. extending tactile feedback to three dimensions instead of two. This mode was chosen because it mimics our natural way of following object profiles with our fingers. Specifically, regardless of whether a hand lying on an object is moving or not, our tactile and proprioceptive systems are both stimulated and tell us something about which object we are manipulating and what its shape and size might be. The goal of this talk is to describe how to exploit tactile stimulation to render digital information non-visually, so that cognitive maps associated with this information can be efficiently elicited from visually impaired persons. In particular, the focus is on delivering geometrical information in a learning scenario. Moreover, completely blind interaction with a virtual environment in a learning scenario has been little investigated, because visually impaired subjects are often passive agents of exercises with fixed environment constraints. For this reason, during the talk I will provide my personal answer to the question: can visually impaired people manipulate dynamic virtual content through touch?
This process is much more challenging than merely exploring and learning virtual content, but at the same time it leads to a more conscious and dynamic creation of the spatial understanding of an environment during tactile exploration.

Organizers: Katherine Kuchenbecker


  • Gokhan Serhat
  • MPI-IS Stuttgart, Heisenbergstr. 3, Room 2P4

Continuum structures need to be designed for optimal vibrational characteristics in various fields. Recent developments in finite element analysis (FEA) and numerical optimization methods allow the creation of more accurate computational models, which favors designing superior systems and reduces the need for experimentation. In this talk, I will present my work on FEA-based optimization of thin shell structures for improved dynamic properties, with a focus on laminated composites. I will initially explain multi-objective optimization strategies for enhancing the load-carrying and vibrational performance of plate structures. The talk will continue with the design of curved panels for optimal free and forced dynamic responses. After that, I will present advanced methods that I developed for the modeling and optimization of variable-stiffness structures. Finally, I will outline the state-of-the-art techniques regarding numerical simulation of the finger in contact with surfaces and propose potential research directions.

Organizers: Katherine Kuchenbecker


Soft Feel by Soft Robotic Hand: New way of robotic sensing

IS Colloquium
  • 04 October 2018 • 13:30 - 14:30
  • Prof. Koh Hosoda
  • MPI-IS Stuttgart, Werner-Köster lecture hall

This lecture will show some interesting examples of how a soft body/skin can change your idea of robotic sensing. Soft robotics is not only about compliance and safety; soft structure changes the way objects are categorized through dynamic exploration and enables a robot to learn a sense of slip. Soft robotics will entirely change your idea of how to design sensing and open up a new way to understand human sensing.

Organizers: Ardian Jusufi


  • Prof. Peter Pott
  • MPI-IS Stuttgart, Heisenbergstr. 3, Room 2P4

The FLEXMIN haptic robotic system is a single-port tele-manipulator for robotic surgery in the small pelvis. Using a transanal approach, it allows bi-manual tasks such as grasping, monopolar cutting, and suturing with a footprint of Ø 160 x 240 mm. Forces up to 5 N in all directions can be applied easily. In addition to providing low-latency and highly dynamic control over its movements, high-fidelity haptic feedback was realised using built-in force sensors, lightweight and friction-optimized kinematics, as well as dedicated parallel-kinematics input devices. After a brief description of the system and some of its key aspects, first evaluation results will be presented. In the second half of the talk, the Institute of Medical Device Technology will be presented. The institute was founded in July 2017 and has since started a number of projects in the fields of biomedical actuation, medical systems and robotics, and advanced light microscopy. To illustrate this, a few snapshots of bits and pieces will be presented that are condensation nuclei for the future.

Organizers: Katherine Kuchenbecker


Private Federated Learning

Talk
  • 01 October 2018 • 10:00 - 10:45
  • Mona Buisson-Fenet
  • MPI-IS Stuttgart, seminar room 2P4

With the expanding collection of data, organisations are becoming more and more aware of the potential gain of combining their data. Analytic and predictive tasks, such as classification, perform more accurately if more features or more data records are available, which is why data providers have an interest in joining their datasets and learning from the combined database. However, this rising interest in federated learning also comes with increasing concern about security and privacy, both from the consumers whose data are used and from the data providers who are liable for protecting them. Securely learning a classifier over joint datasets is a first milestone for private multi-party machine learning, and though some literature exists on the topic, systems providing a better security-utility trade-off and more theoretical guarantees are still needed. An ongoing issue is how to deal with the loss gradients, which often need to be revealed in the clear during training. We show that this constitutes an information leak, and present an alternative optimisation strategy that provides additional security guarantees while limiting the decrease in performance of the obtained classifier. Combining an encryption-based and a noise-based approach, the proposed method enables several parties to jointly train a binary classifier over vertically partitioned datasets while keeping their data private.
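The noise-based half of such a strategy can be illustrated with a generic clip-and-noise gradient sketch in the style of differentially private SGD. This is only an illustration of the general idea, not the speaker's actual method; the function names, parameter values, and toy dataset below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_gradient(grad, clip_norm=1.0, sigma=0.1):
    """Clip the gradient's L2 norm, then add Gaussian noise, so the value
    revealed to other parties leaks less about individual records."""
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(0.0, sigma * clip_norm, size=grad.shape)

# Toy binary classifier trained with protected gradients.
X = rng.normal(size=(32, 4))
y = (X @ np.array([1.0, -2.0, 0.5, 0.0]) > 0).astype(float)
w = np.zeros(4)
for _ in range(100):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))        # logistic predictions
    grad = X.T @ (p - y) / len(y)             # plain loss gradient
    w -= 0.5 * noisy_gradient(grad)           # only the noisy version is shared
```

In a full scheme, the clipped-and-noised gradient would additionally be exchanged under encryption; the noise scale `sigma` controls the security-utility trade-off the abstract refers to.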

Organizers: Sebastian Trimpe


  • Dr. Aude Bolopion and Dr. Mich
  • 2P4

This talk presents an overview of recent activities of the FEMTO-ST institute in the field of micro-nanomanipulation for both micro-nano assembly and biomedical applications. Microrobotic systems are currently limited by the number of degrees of freedom addressed and also by their throughput. Two ways can be considered to improve both the velocity and the degrees of freedom: non-contact manipulation and dexterous micromanipulation. In both approaches, movements including rotation and translation are generated locally and are limited only by the inertia of the micro-nano-objects, which is very low. This consequently makes it possible to generate 6-DOF motion and to achieve high dynamics. The talk presents recent work showing that controlled trajectories in non-contact manipulation enable micro-objects to be manipulated at high speed. Dexterous manipulation with a four-finger microtweezer has also been demonstrated, showing that in-hand micromanipulation is possible at the micro-nanoscale based on original finger-trajectory planning. These two approaches have been applied to perform micro-nano assembly and biomedical operations.


Learning to align images and surfaces

Talk
  • 24 September 2018 • 11:00 - 12:00
  • Iasonas Kokkinos
  • Ground Floor Seminar Room (N0.002)

In this talk I will be presenting recent work on combining ideas from deformable models with deep learning. I will start by describing DenseReg and DensePose, two recently introduced systems for establishing dense correspondences between 2D images and 3D surface models "in the wild", namely in the presence of background, occlusions, and multiple objects. For DensePose in particular we introduce DensePose-COCO, a large-scale dataset for dense pose estimation, and DensePose-RCNN, a system which operates at multiple frames per second on a single GPU while handling multiple humans simultaneously. I will then present Deforming AutoEncoders, a method for unsupervised dense correspondence estimation. We show that we can disentangle deformations from appearance variation in an entirely unsupervised manner, and also provide promising results for a more thorough disentanglement of images into deformations, albedo and shading. Time permitting, we will discuss a parallel line of work aiming at combining grouping with deep learning, and see how both grouping and correspondence can be understood as establishing associations between neurons.

Organizers: Vassilis Choutas


Visual Reconstruction and Image-Based Rendering

Talk
  • 07 September 2018 • 11:00 - 12:00
  • Richard Szeliski
  • Ground Floor Seminar Room (N0.002)

The reconstruction of 3D scenes and their appearance from imagery is one of the longest-standing problems in computer vision. Originally developed to support robotics and artificial intelligence applications, it has found some of its most widespread use in support of interactive 3D scene visualization. One of the keys to this success has been the melding of 3D geometric and photometric reconstruction with a heavy re-use of the original imagery, which produces more realistic rendering than a pure 3D model-driven approach. In this talk, I give a retrospective of two decades of research in this area, touching on topics such as sparse and dense 3D reconstruction, the fundamental concepts in image-based rendering and computational photography, applications to virtual reality, as well as ongoing research in the areas of layered decompositions and 3D-enabled video stabilization.

Organizers: Mohamed Hassan


Imitation of Human Motion Planning

Talk
  • 27 July 2018 • 12:00 - 12:45
  • Jim Mainprice
  • N3.022 (Aquarium)

Humans act upon their environment through motion; the ability to plan their movements is therefore an essential component of their autonomy. In recent decades, motion planning has been widely studied in robotics and computer graphics. Nevertheless, robots still fail to achieve human reactivity and coordination. The need for more efficient motion planning algorithms has been present throughout my own research on "human-aware" motion planning, which aims to take surrounding humans explicitly into account. I believe imitation learning is the key to this particular problem, as it allows learning both new motion skills and predictive models, two capabilities that are at the heart of "human-aware" robots, while simultaneously holding the promise of faster and more reactive motion generation. In this talk I will present my work in this direction.


New Ideas for Stereo Matching of Untextured Scenes

Talk
  • 24 July 2018 • 14:00 - 15:00
  • Daniel Scharstein
  • Ground Floor Seminar Room (N0.002)

Two talks for the price of one! I will present my recent work on the challenging problem of stereo matching of scenes with little or no surface texture, attacking the problem from two very different angles. First, I will discuss how surface orientation priors can be added to the popular semi-global matching (SGM) algorithm, which significantly reduces errors on slanted weakly-textured surfaces. The orientation priors serve as a soft constraint during matching and can be derived in a variety of ways, including from low-resolution matching results and from monocular analysis and Manhattan-world assumptions. Second, we will examine the pathological case of Mondrian Stereo -- synthetic scenes consisting solely of solid-colored planar regions, resembling paintings by Piet Mondrian. I will discuss assumptions that allow disambiguating such scenes, present a novel stereo algorithm employing symbolic reasoning about matched edge segments, and discuss how similar ideas could be utilized in robust real-world stereo algorithms for untextured environments.
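The SGM cost aggregation that the first part builds on can be sketched for a single scanline. This is a minimal illustration of the standard SGM recurrence only, not the speaker's prior-augmented variant; the `p1`/`p2` penalty values and array shapes are illustrative assumptions (orientation priors could, for instance, be injected by biasing these penalties per pixel).

```python
import numpy as np

def sgm_scanline(cost, p1=1.0, p2=4.0):
    """Aggregate a matching-cost volume of shape (width, n_disparities)
    along one scanline using the semi-global matching recurrence:
    L(x,d) = C(x,d) + min(L(x-1,d), L(x-1,d+-1)+p1, min_d' L(x-1,d')+p2)
             - min_d' L(x-1,d')."""
    w, d = cost.shape
    agg = np.zeros_like(cost, dtype=float)
    agg[0] = cost[0]
    for x in range(1, w):
        prev = agg[x - 1]
        best = prev.min()
        lower = np.concatenate(([np.inf], prev[:-1]))  # from disparity d-1
        upper = np.concatenate((prev[1:], [np.inf]))   # from disparity d+1
        agg[x] = cost[x] + np.minimum.reduce(
            [prev, lower + p1, upper + p1, np.full(d, best + p2)]
        ) - best
    return agg.argmin(axis=1)  # winner-take-all disparity per pixel

# On an untextured region, the smoothness penalties propagate the one
# confident disparity along the scanline.
cost = np.ones((5, 4))
cost[:, 2] = 0.0  # disparity 2 is cheapest everywhere
disparities = sgm_scanline(cost)
```

In full SGM the same recurrence is run along several scanline directions and the aggregated costs are summed before the winner-take-all step.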

Organizers: Anurag Ranjan