
Institute Talks

Robotic Manipulation: a Focus on Object Handovers

Talk
  • 09 June 2020 • 10:00—11:00
  • Valerio Ortenzi
  • remote talk on Zoom

Humans perform object manipulation in order to execute a specific task; seldom is such an action started with no goal in mind. In contrast, traditional robotic grasping (the first stage of object manipulation) seems to focus purely on getting hold of the object, neglecting the goal of the manipulation. In this light, most metrics used in robotic grasping do not account for the final task in their judgement of quality and success. Since the overall goal of a manipulation task shapes the actions of humans and their grasps, the task itself should shape the metric of success. To this end, I will present a new metric centred on the task. The task is also very important in another action of object manipulation: the object handover. In the context of object handovers, humans display a high degree of flexibility and adaptation. These characteristics are key for robots to be able to interact with humans with the same fluency and efficiency. I will present my work on human-human and robot-human handovers and explain why an understanding of the task is important for robotic grasping.

Organizers: Katherine J. Kuchenbecker


AirCap – Aerial Outdoor Motion Capture

Talk
  • 18 May 2020 • 14:00—15:00
  • Aamir Ahmad
  • remote talk on Zoom

In this talk I will present an overview and the latest results of the project Aerial Outdoor Motion Capture (AirCap), running at the Perceiving Systems department. AirCap's goal is to achieve markerless and unconstrained human motion capture (MoCap) in unknown and unstructured outdoor environments. To this end, we have developed a flying MoCap system using a team of autonomous aerial robots with on-board, monocular RGB cameras. Our system is endowed with a range of novel functionalities, which were developed by our group over the last 3 years. These include i) cooperative detection and tracking that enables the use of DNN-based detectors on board flying robots, ii) active cooperative perception in aerial robot teams to minimize joint tracking uncertainty, and iii) markerless human pose and shape estimation using images acquired from multiple views and approximately calibrated cameras. We have conducted several real-world experiments along with ground-truth comparisons to validate our system. Overall, for outdoor scenarios we have demonstrated the first fully autonomous flying MoCap system involving multiple aerial robots.
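The intuition behind point ii), minimizing joint tracking uncertainty across the robot team, can be illustrated with a toy sketch (not the AirCap code; all names here are hypothetical): fusing independent Gaussian position estimates from several robots by inverse-variance weighting always yields a smaller variance than any single robot's estimate.

```python
import numpy as np

def fuse_estimates(means, variances):
    """Fuse independent Gaussian estimates of the same quantity by
    inverse-variance weighting. The fused variance is always smaller
    than each individual variance, which is why adding more robot
    views tightens the joint track."""
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = 1.0 / variances                     # more precise -> more weight
    fused_var = 1.0 / weights.sum()
    fused_mean = fused_var * (weights * means).sum()
    return fused_mean, fused_var

# Two robots observe the person's x-position with different noise levels.
mean, var = fuse_estimates([2.0, 2.4], [0.5, 1.0])
```

In this toy case the fused variance (1/3) is below both input variances, so each additional observer strictly reduces the joint uncertainty.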

Organizers: Katherine J. Kuchenbecker


Deep inverse rendering in the wild

Talk
  • 15 May 2020 • 11:00—12:00
  • Will Smith
  • Remote talk on Zoom

In this talk I will consider the problem of scene-level inverse rendering to recover shape, reflectance and lighting from a single, uncontrolled, outdoor image. This task is highly ill-posed, but we show that multiview self-supervision, a natural lighting prior and implicit lighting estimation allow an image-to-image CNN to solve the task, seemingly learning some general principles of shape-from-shading along the way. Adding a neural renderer and a sky generator GAN, our approach allows us to synthesise photorealistic relit images under widely varying illumination. I will finish by briefly describing recent work in which some of these ideas have been combined with deep face model fitting, replacing parameter regression with correspondence prediction and enabling fully unsupervised training.
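As a rough illustration of the kind of self-supervision signal inverse rendering can exploit (a deliberately simplified stand-in, not the talk's neural renderer), a Lambertian re-rendering of predicted albedo and normals under a single directional light can be compared directly against the input photo, so no ground-truth shape or reflectance is needed. Names and shapes here are hypothetical.

```python
import numpy as np

def lambertian_render(albedo, normals, light_dir):
    """Re-render an image from per-pixel albedo (H, W, 3), unit normals
    (H, W, 3) and one directional light: I = albedo * max(n . l, 0).
    The difference between this re-rendering and the input photo can
    serve as a self-supervised reconstruction loss."""
    light = np.asarray(light_dir, dtype=float)
    light = light / np.linalg.norm(light)
    shading = np.clip(np.einsum('hwc,c->hw', normals, light), 0.0, None)
    return albedo * shading[..., None]

# Toy 1x2 image: one pixel facing the light, one facing away.
normals = np.array([[[0.0, 0.0, 1.0], [0.0, 0.0, -1.0]]])
albedo = np.full((1, 2, 3), 0.8)
img = lambertian_render(albedo, normals, [0.0, 0.0, 1.0])
```

The pixel facing the light keeps its albedo; the back-facing pixel renders black, mimicking attached shadow.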

Organizers: Timo Bolkart


  • Aayush Bansal
  • Zoom

Licklider and Taylor (1968) envisioned computational machinery that could enable better communication between humans than face-to-face interaction. In the last fifty years, we have used computing to develop various means of communication, such as mail, messaging, phone calls, video conversation, and virtual reality. These, however, are proxies for face-to-face communication that aim at encoding words, expressions, emotions, and body language at the source and decoding them reliably at the destination. The true revolution of personal computing has not yet begun because we have not been able to tap the real potential of computing for social communication. A computational machinery that can understand and create a four-dimensional audio-visual world could enable humans to describe their imagination and share it with others. In this talk, I will introduce the Computational Studio: an environment that allows non-specialists to construct and creatively edit a 4D audio-visual world from sparse audio and video samples. The Computational Studio aims to enable everyone to relive old memories through a form of virtual time travel, to automatically create new experiences, and to share them with others using everyday computational devices. There are three essential components of the Computational Studio: (1) how can we capture the 4D audio-visual world?; (2) how can we synthesize the audio-visual world from examples?; and (3) how can we interactively create and edit the audio-visual world? The first part of this talk introduces work on capturing and browsing the in-the-wild 4D audio-visual world in a self-supervised manner and efforts on building a multi-agent capture system. This work applies to social communication, to digitizing intangible cultural heritage, to capturing tribal dances and wildlife in their natural environments, and to understanding the social behavior of human beings.
In the second part, I will talk about example-based audio-visual synthesis in an unsupervised manner. Example-based audio-visual synthesis allows us to express ourselves easily. Finally, I will talk about interactive visual synthesis, which allows us to manually create and edit visual experiences. Here I will also stress the importance of considering both the human user and the computational devices when designing content-creation applications. The Computational Studio is a first step towards unlocking the full degree of creative imagination, which is currently confined to the human mind by the limits of an individual's expressivity and skill. It has the potential to change the way we audio-visually communicate with others.

Organizers: Arjun Chandrasekaran, Chun-Hao Paul Huang


Water anomalies: from ice age to carbon

Physics Colloquium
  • 12 May 2020 • 16:15—18:15
  • Marcia Barbosa
  • WebEx



The d-wave paradigm of unconventional superconductors

Physics Colloquium
  • 05 May 2020 • 16:15—18:15
  • Ronny Thomale
  • WebEx

As famously introduced in the context of copper oxide superconductors, Cooper pairing of electrons through a d-wave order parameter constitutes a central departure from the conventional microscopic picture of phonon-mediated s-wave superconductivity. For decades, copper oxide superconductors remained the predominant arena for d-wave pairing. In recent years, however, d-wave superconductivity has witnessed significant diversification in terms of materials realizations, such as Na-doped cobaltates, pnictides at strong hole doping, and, most recently, infinite-layer nickelates as well as strontium ruthenate. In this colloquium, I intend to provide an overview of recent developments and to work out a future perspective on new d-wave superconductors.


Engineering single-atom devices in ultracold gases

Physics Colloquium
  • 28 April 2020 • 16:15—18:15
  • Artur Widera
  • WebEx (https://mpi-is.webex.com/mpi-is/onstage/g.php?MTID=e0a11930fa916a065c44064aea17abe75)

Technological advances in laser and vacuum technology have allowed realizing a dream of the early days of quantum mechanics: controlling single, laser-cooled atoms at a quantum level. Interfacing individual atoms with ultracold gases offers new experimental approaches to unsolved problems of nonequilibrium quantum physics. Moreover, such systems allow experimentally addressing the question of whether and how quantum properties can boost the performance of atomic-scale devices. In this talk, I will discuss how single atoms can be controlled and probed in an ultracold gas. Understanding the impurity-gas interaction at the atomic level allows employing inelastic spin-exchange collisions, which are usually considered harmful, for quantum applications. First, I will show how the inelastic spin-exchange can map information about the gas temperature or the surrounding magnetic field to the quantum-spin distribution of single impurity atoms. Interestingly, the nonequilibrium spin dynamics before reaching the steady state increase the sensitivity of the probe while reducing the perturbation of the gas compared to the steady state. Second, I will discuss how the quantized energy transfer during inelastic collisions allows operating a single-atom quantum engine. We overcome the limitations imposed by using thermal states and run a quantum-enhanced Otto cycle operating at orders of magnitude larger powers compared to the thermal case, alternating between positive- and negative-temperature regimes at maximum efficiency. I will discuss the properties of the engine as well as limitations originating from the quantum aspects resulting in fluctuations of power.


Ultrafast Surface Dynamics and Local Spectroscopy at the Nanoscale

Physics Colloquium
  • 21 April 2020 • 16:15—18:15
  • Martin Wolf
  • WebEx

In a Born-Oppenheimer description, atomic motions evolve across a potential energy surface determined by the occupation of electronic states as a function of atom positions. Ultrafast photo-induced phase transitions provide a test case for how the forces and resulting nuclear motion along the reaction co-ordinate originate from a non-equilibrium population of excited electronic states. Here I discuss recent advances in time-resolved photoemission spectroscopy allowing for direct probing of the underlying fundamental steps and the transiently evolving band structure in the ultrafast phase transition in indium nanowires on Si(111) [1]. Furthermore, I will discuss some recent attempts to access the space-time limit in surface dynamics using local optical excitation of controlled plasmonic nano-junctions and tip-enhanced Raman scattering (TERS) [2,3].

Organizers: Joachim Gräfe


  • Chunyu Wang
  • remote talk on Zoom

Accurate 3D human pose estimation has been a longstanding goal in computer vision. Until now, however, it has found only limited success in easy scenarios, such as studios with little occlusion. In this talk, I will present two of our works aimed at addressing the occlusion problem in realistic scenarios. In the first work, we present an approach to recover the absolute 3D pose of a single person from multi-view images by incorporating multi-view geometric priors in our model. It consists of two separate steps: (1) estimating the 2D poses in the multi-view images and (2) recovering the 3D pose from the multi-view 2D poses. First, we introduce a cross-view fusion scheme into the CNN to jointly estimate 2D poses for multiple views, so that the 2D pose estimate for each view already benefits from the other views. Second, we present a recursive Pictorial Structure Model to recover the 3D pose from the multi-view 2D poses; it gradually improves the accuracy of the 3D pose at affordable computational cost. In the second work, we present a 3D pose estimator that reliably estimates and tracks people in crowded scenes. In contrast to previous efforts, which require establishing cross-view correspondences from noisy and incomplete 2D pose estimates, we present an end-to-end solution that operates directly in 3D space, thereby avoiding incorrect hard decisions in the 2D space. To achieve this, the features from all camera views are warped and aggregated in a common 3D space and fed to a Cuboid Proposal Network (CPN) to coarsely localize all people. We then propose a Pose Regression Network (PRN) to estimate a detailed 3D pose for each proposal. The approach is robust to occlusion, which occurs frequently in practice. Without bells and whistles, it significantly outperforms the state of the art on the benchmark datasets.
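The geometric core of lifting multi-view 2D detections to 3D is triangulation. The following is a minimal linear (DLT) triangulation sketch for a single joint, not the recursive Pictorial Structure Model or the end-to-end network from the talk; all names are hypothetical.

```python
import numpy as np

def triangulate_joint(projections, points_2d):
    """Linear (DLT) triangulation: recover one 3D joint from its 2D
    detections in several calibrated views. Each 3x4 projection matrix P
    and detection (u, v) contribute two rows to a homogeneous system
    A X = 0, solved via SVD (last right-singular vector)."""
    rows = []
    for P, (u, v) in zip(projections, points_2d):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # dehomogenize

def project(P, X):
    """Project a 3D point with a 3x4 camera matrix to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Toy setup: two identity-intrinsics cameras, the second shifted along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
X_rec = triangulate_joint([P1, P2], [project(P1, X_true), project(P2, X_true)])
```

With exact (noise-free) detections the least-squares solution recovers the 3D joint exactly; in practice the 2D detections are noisy, which is precisely why the talk's methods add learned priors and cross-view fusion on top of this geometry.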

Organizers: Chun-Hao Paul Huang


How to tie an optical field into a knot

Physics Colloquium
  • 14 April 2020 • 16:15—18:15
  • Mark Dennis
  • WebEx

Tying a knot in a piece of string can be a hard practical problem.