Institute Talks

Generating Faces & Heads: Texture, Shape and Beyond.

Talk
  • 17 December 2018 • 11:00–12:00
  • Stefanos Zafeiriou
  • PS Aquarium

In the past few years, with the advent of Deep Convolutional Neural Networks (DCNNs) and the availability of large amounts of visual data, it has been shown that excellent results can be achieved in very challenging tasks, such as visual object recognition, detection, tracking, etc. Nevertheless, in certain tasks such as fine-grained object recognition (e.g., face recognition) it is very difficult to collect the amount of data that is needed. In this talk, I will show how, using DCNNs, we can generate highly realistic faces and heads and use them for training algorithms such as face and facial expression recognition. Next, I will reverse the problem and demonstrate how a very powerful trained face recognition network can be used to perform very accurate 3D shape and texture reconstruction of faces from a single image. Finally, I will demonstrate how to create very lightweight networks for representing 3D face texture and shape structure by capitalising on intrinsic mesh convolutions.

Organizers: Dimitris Tzionas

Deep learning on 3D face reconstruction, modelling and applications

Talk
  • 19 December 2018 • 11:00–12:00
  • Yao Feng
  • PS Aquarium

In this talk, I will present my understanding of 3D face reconstruction, modelling and applications from a deep learning perspective. In the first part of my talk, I will discuss the relationship between representations (point clouds, meshes, etc.) and network layers (CNN, GCN, etc.) in the face reconstruction task, then present my ECCV work PRN, which proposes a new representation that helps achieve state-of-the-art performance on face reconstruction and dense alignment tasks. I will also introduce my open-source project face3d, which provides examples for generating different 3D face representations. In the second part of the talk, I will discuss some publications on integrating 3D techniques into deep networks, then introduce my upcoming work in this direction. In the third part, I will present how related tasks can promote each other in deep learning, including face recognition for the face reconstruction task and face reconstruction for the face anti-spoofing task. Finally, building on the understanding developed in these three parts, I will present my plans for 3D face modelling and applications.

Organizers: Timo Bolkart

Mind Games

IS Colloquium
  • 21 December 2018 • 11:00–12:00
  • Peter Dayan
  • IS Lecture Hall

Much existing work in reinforcement learning involves environments that are either intentionally neutral, lacking a role for cooperation and competition, or intentionally simple, in which agents need imagine nothing more than that they are playing versions of themselves. Richer game-theoretic notions become important as these constraints are relaxed. For humans, this encompasses issues that concern utility, such as envy and guilt, and that concern inference, such as recursive modeling of other players. I will discuss studies treating a paradigmatic game of trust as an interactive partially observable Markov decision process, and will illustrate the solution concepts with evidence from interactions between various groups of subjects, including those diagnosed with borderline and antisocial personality disorders.

TBA

IS Colloquium
  • 28 January 2019 • 11:15–12:15
  • Florian Marquardt

Organizers: Matthias Bauer

BodyNet: Volumetric Inference of 3D Human Body Shapes

Talk
  • 10 April 2018 • 16:00–17:00
  • Gül Varol
  • N3.022

Human shape estimation is an important task for video editing, animation and the fashion industry. Predicting 3D human body shape from natural images, however, is highly challenging due to factors such as variation in human bodies, clothing and viewpoint. Prior methods addressing this problem typically attempt to fit parametric body models with certain priors on pose and shape. In this work we argue for an alternative representation and propose BodyNet, a neural network for direct inference of volumetric body shape from a single image. BodyNet is an end-to-end trainable network that benefits from (i) a volumetric 3D loss, (ii) a multi-view re-projection loss, and (iii) intermediate supervision of 2D pose, 2D body part segmentation, and 3D pose. Each of these results in a performance improvement, as demonstrated by our experiments. To evaluate the method, we fit the SMPL model to our network output and show state-of-the-art results on the SURREAL and Unite the People datasets, outperforming recent approaches. Besides achieving state-of-the-art performance, our method also enables volumetric body-part segmentation.
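The combination of the three supervision signals above can be pictured as a weighted sum of per-task losses. The sketch below is a toy illustration of that idea only: the loss names, weights, and L2 placeholder terms are assumptions for exposition, not BodyNet's actual losses or values.

```python
import numpy as np

def multi_task_loss(pred, target, weights=(1.0, 0.1, 0.1)):
    """Toy weighted sum of three supervision signals in the spirit of
    the abstract: a volumetric 3D term, a re-projection term, and an
    intermediate 2D-pose term. Each term here is a placeholder mean
    squared error on toy arrays."""
    w_vol, w_proj, w_inter = weights
    l_vol = np.mean((pred["voxels"] - target["voxels"]) ** 2)
    l_proj = np.mean((pred["reproj"] - target["silhouette"]) ** 2)
    l_inter = np.mean((pred["pose2d"] - target["pose2d"]) ** 2)
    return w_vol * l_vol + w_proj * l_proj + w_inter * l_inter

rng = np.random.default_rng(0)
pred = {k: rng.random(8) for k in ("voxels", "reproj", "pose2d")}
target = {"voxels": rng.random(8), "silhouette": rng.random(8),
          "pose2d": rng.random(8)}
print(multi_task_loss(pred, target))
```

In an actual end-to-end network, each term would be backpropagated jointly, so the intermediate tasks regularize the final volumetric prediction.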


A New Perspective on Usability Applied to Robotics

Talk
  • 04 April 2018 • 14:00–15:00
  • Dr. Vincent Berenz
  • Stuttgart 2P4

For many service robots, reactivity to changes in their surroundings is a must. However, developing software suitable for dynamic environments is difficult. Existing robotic middleware allows engineers to design behavior graphs by organizing communication between components. But because these graphs are structurally inflexible, they hardly support the development of complex reactive behavior. To address this limitation, we propose Playful, a software platform that applies reactive programming to the specification of robotic behavior. The front-end of Playful is a scripting language which is simple (only five keywords), yet results in the runtime-coordinated activation and deactivation of an arbitrary number of higher-level sensory-motor couplings. When using Playful, developers describe actions at various levels of abstraction via behavior trees. During runtime, an underlying engine applies a mixture of logical constructs to obtain the desired behavior. These constructs include conditional ruling, dynamic prioritization based on resource management, and finite state machines. Playful has been successfully used to program an upper-torso humanoid manipulator to perform lively interaction with any human approaching it.
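Playful's own five-keyword syntax is not reproduced here. As a generic illustration of one of the constructs mentioned, conditional activation with dynamic prioritization, the following hypothetical sketch selects the highest-priority behavior whose condition currently holds; all names are made up for this example and are not Playful's API.

```python
# Generic sketch of reactive, priority-based behavior selection,
# loosely inspired by the constructs described in the abstract
# (conditional ruling, dynamic prioritization). NOT Playful code.

def select_behavior(behaviors, sensors):
    """Return the highest-priority behavior whose condition holds,
    or None if no condition is satisfied."""
    active = [b for b in behaviors if b["condition"](sensors)]
    return max(active, key=lambda b: b["priority"], default=None)

behaviors = [
    {"name": "idle",  "priority": 0, "condition": lambda s: True},
    {"name": "greet", "priority": 5,
     "condition": lambda s: s["human_distance"] < 1.5},
]

print(select_behavior(behaviors, {"human_distance": 1.0})["name"])  # greet
print(select_behavior(behaviors, {"human_distance": 3.0})["name"])  # idle
```

Re-running the selection on every sensor update is what makes the behavior reactive: when the human walks away, the engine automatically falls back to the lower-priority behavior.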

Organizers: Katherine Kuchenbecker Mayumi Mohan Alexis Block


  • Omar Costilla Reyes
  • Aquarium @ PS

Human footsteps can provide a unique behavioural pattern for robust biometric systems. Traditionally, security systems have been based on passwords or security access cards. Biometric recognition deals with the design of security systems for automatic identification or verification of a human subject (client) based on physical and behavioural characteristics. In this talk, I will present spatio-temporal raw and processed footstep data representations designed and evaluated on deep machine learning models based on a two-stream ResNet architecture, using the SFootBD database, the largest footstep database to date, with more than 120 people and almost 20,000 footstep signals. Our models deliver an artificial intelligence capable of effectively differentiating the fine-grained variability of footsteps between legitimate users (clients) and impostor users of the biometric system. We provide experimental results in 3 critical data-driven security scenarios, according to the amount of footstep data available for model training: airport security checkpoints (smallest training set), workspace environments (medium training set) and home environments (largest training set). In these scenarios we report state-of-the-art footstep recognition rates.

Organizers: Dimitris Tzionas


  • Silvia Zuffi
  • N3.022

Animals are widespread in nature, and the analysis of their shape and motion is important in many fields and industries. Modeling 3D animal shape, however, is difficult because the 3D scanning methods used to capture human shape are not applicable to wild animals or natural settings. In our previous SMAL model, we learn animal shape from toy figurines, but toys are limited in number and realism, and not every animal is sufficiently popular for there to be realistic toys depicting it. What is available in large quantities are images and videos of animals from nature photographs, animal documentaries, and webcams. In this talk I will present our recent work on capturing the detailed 3D shape of animals from images alone. Our method extracts significantly more 3D shape detail than previous work and is able to model new species using only a few video frames. Additionally, we extract realistic texture maps from images, capturing both animal shape and appearance.


  • Sergio Pascual Díaz
  • S2.014

My plan is to present the motivation behind Deep GPs as well as some of the current approximate inference schemes available with their limitations. Then, I will explain how Deep GPs fit into the BayesOpt framework and the specific problems they could potentially solve.

Organizers: Philipp Hennig


  • Patrick Bajari
  • MPI IS lecture hall (N0.002)

In academic and policy circles, there has been considerable interest in the impact of “big data” on firm performance. We examine the question of how the amount of data impacts the accuracy of machine-learned models of weekly retail product forecasts using a proprietary data set obtained from Amazon. We examine the accuracy of forecasts in two relevant dimensions: the number of products (N), and the number of time periods for which a product is available for sale (T). Theory suggests diminishing returns to larger N and T, with relative forecast errors diminishing at rate 1/sqrt(N) + 1/sqrt(T). Empirical results indicate gains in forecast improvement in the T dimension; as more and more data become available for a particular product, demand forecasts for that product improve over time, though with diminishing returns to scale. In contrast, we find an essentially flat N effect across the various lines of merchandise: with a few exceptions, expansion in the number of retail products within a category does not appear to be associated with increases in forecast performance. We do find that the firm’s overall forecast performance, controlling for N and T effects across product lines, has improved over time, suggesting gradual improvements in forecasting from the introduction of new models and improved technology.
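The diminishing-returns pattern in the T dimension can be illustrated with a toy simulation, entirely separate from the Amazon data in the talk: estimating a product's mean weekly demand from T observations has an error that shrinks at roughly the 1/sqrt(T) rate the theory predicts. All numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
true_mean = 100.0   # toy weekly demand level (hypothetical)
noise_sd = 10.0     # toy week-to-week demand noise (hypothetical)

# Mean absolute error of the sample-mean forecast as a function of
# the number of observed weeks T; the error falls roughly like
# 1/sqrt(T), i.e. quadrupling T only halves the error.
mean_err = {}
for T in (4, 16, 64, 256):
    errs = [abs(rng.normal(true_mean, noise_sd, T).mean() - true_mean)
            for _ in range(2000)]
    mean_err[T] = float(np.mean(errs))
    print(T, round(mean_err[T], 3))
```

Each fourfold increase in T cuts the error roughly in half, which is exactly the diminishing-returns-to-scale behavior the abstract describes for the time dimension.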

Organizers: Michel Besserve Michael Hirsch


Political Science and Data Science: What we can learn from each other

IS Colloquium
  • 12 March 2018 • 11:15–12:15
  • Simon Hegelich
  • MPI-IS lecture hall (N0.002)

Political science is integrating computational methods like machine learning into its own toolbox. At the same time, awareness is growing that the use of machine learning algorithms in our daily life is a highly political issue. These two trends - the integration of computational methods into political science and the political analysis of the digital revolution - form the ground for a new transdisciplinary approach: political data science. Interestingly, there is a rich tradition of crossing the borders of the disciplines, as can be seen in the works of Paul Werbos and Herbert Simon (both political scientists). Building on this tradition and integrating ideas from deep learning and Hegel's philosophy of logic, a new perspective on causality might arise.

Organizers: Philipp Geiger


  • Giacomo Garegnani
  • Tübingen, S2 seminar room

We present a novel probabilistic integrator for ordinary differential equations (ODEs) which allows for uncertainty quantification of the numerical error [1]. In particular, we randomise the time steps and build a probability measure on the deterministic solution, which collapses to the true solution of the ODE at the same rate of convergence as the underlying deterministic scheme. The intrinsic nature of the random perturbation guarantees that our probabilistic integrator conserves some geometric properties of the deterministic method it is built on, such as the conservation of first integrals or the symplecticity of the flow. Finally, we present a procedure to incorporate our probabilistic solver into the framework of Bayesian inverse problems, showing how inaccurate posterior concentrations given by deterministic methods can be corrected by a probabilistic interpretation of the numerical solution.
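The core idea, randomising the time steps of a deterministic scheme to obtain an ensemble whose spread quantifies the numerical error, can be sketched in a few lines. The perturbation law below (uniform jitter of the step size) is an illustrative choice for this sketch, not necessarily the one analysed in the paper.

```python
import numpy as np

def randomized_euler(f, y0, t_end, h, n_samples=50, seed=0):
    """Ensemble of explicit Euler solves with randomly perturbed step
    sizes, in the spirit of random-time-step probabilistic integrators.
    The empirical spread of the ensemble provides a probabilistic
    measure of the numerical error of the base scheme."""
    rng = np.random.default_rng(seed)
    samples = []
    for _ in range(n_samples):
        t, y = 0.0, y0
        while t < t_end:
            # jitter the nominal step h, clamped so we land on t_end
            hk = min(h * (1.0 + 0.1 * rng.uniform(-1, 1)), t_end - t)
            y = y + hk * f(t, y)
            t += hk
        samples.append(y)
    return np.array(samples)

# Example: y' = -y, y(0) = 1; the exact solution at t = 1 is exp(-1).
ens = randomized_euler(lambda t, y: -y, 1.0, 1.0, 0.05)
print("mean:", ens.mean(), "spread:", ens.std())
```

As the nominal step h shrinks, the ensemble mean converges to the true solution at the rate of the underlying Euler scheme, while the ensemble standard deviation shrinks alongside the actual numerical error, which is what makes it usable as an error estimate inside Bayesian inverse problems.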

Organizers: Hans Kersting


  • Bin Yu
  • Tübingen, IS Lecture Hall (N0.002)

In this talk, I'd like to discuss the intertwining importance and connections of the three principles of data science in the title. They will be demonstrated in the context of two collaborative projects in neuroscience and genomics, respectively. The first project, in neuroscience, uses transfer learning to integrate convolutional neural networks (CNNs) fitted on ImageNet with regression methods to provide predictive and stable characterizations of neurons from the challenging primary visual cortex V4. The second project proposes iterative random forests (iRF) as a stabilized RF to seek predictable and interpretable high-order interactions among biomolecules.

Organizers: Michel Besserve


  • Prof. Constantin Rothkopf
  • Tübingen, 3rd Floor Intelligent Systems: Aquarium

Active vision has long put forward the idea that visual sensation and our actions are inseparable, especially when considering naturalistic extended behavior. Further support for this idea comes from theoretical work in optimal control, which demonstrates that sensing, planning, and acting in sequential tasks can only be separated under very restricted circumstances. The talk will present experimental evidence together with computational explanations of human visuomotor behavior in tasks ranging from classic psychophysical detection tasks to ball catching and visuomotor navigation. Along the way it will touch on topics such as the heuristics hypothesis and the learning of visual representations. The connecting theme will be that, from the switching of visuomotor behavior in response to changing task constraints down to cortical visual representations in V1, action and perception are inseparably intertwined in an ambiguous and uncertain world.

Organizers: Betty Mohler