2016

Skinned multi-person linear model

Black, M.J., Loper, M., Mahmood, N., Pons-Moll, G., Romero, J.

December 2016, Application PCT/EP2016/064610 (misc)

Abstract
The invention comprises a learned model of human body shape and pose-dependent shape variation that is more accurate than previous models and is compatible with existing graphics pipelines. Our Skinned Multi-Person Linear model (SMPL) is a skinned vertex-based model that accurately represents a wide variety of body shapes in natural human poses. The parameters of the model are learned from data including the rest pose template, blend weights, pose-dependent blend shapes, identity-dependent blend shapes, and a regressor from vertices to joint locations. Unlike previous models, the pose-dependent blend shapes are a linear function of the elements of the pose rotation matrices. This simple formulation enables training the entire model from a relatively large number of aligned 3D meshes of different people in different poses. The invention quantitatively evaluates variants of SMPL using linear or dual-quaternion blend skinning and shows that both are more accurate than a BlendSCAPE model trained on the same data. In a further embodiment, the invention realistically models dynamic soft-tissue deformations. Because it is based on blend skinning, SMPL is compatible with existing rendering engines and we make it available for research purposes.
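
The formulation described above — a rest-pose template deformed by identity blend shapes and by pose blend shapes that are linear in the rotation-matrix elements — can be sketched in a few lines. This is a toy illustration with made-up dimensions and random parameters, not the released SMPL model:

```python
import numpy as np

def rodrigues(axis_angle):
    """Axis-angle vector -> 3x3 rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(axis_angle)
    if theta < 1e-8:
        return np.eye(3)
    k = axis_angle / theta
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

# Toy dimensions (the real model has thousands of vertices and 23 body joints).
V, K = 10, 2
rng = np.random.default_rng(0)
template = rng.normal(size=(V, 3))                 # rest-pose template
shape_dirs = rng.normal(size=(V, 3, 5)) * 0.01     # identity blend shapes
pose_dirs = rng.normal(size=(V, 3, 9 * K)) * 0.01  # pose blend shapes

def posed_shape(betas, pose):
    """Template plus identity and pose-dependent blend-shape corrections.

    The pose correction is *linear* in the elements of the per-joint
    rotation matrices (minus identity), as stated in the abstract.
    """
    shaped = template + shape_dirs @ betas
    R = np.stack([rodrigues(pose[3 * j:3 * j + 3]) for j in range(K)])
    pose_feat = (R - np.eye(3)).reshape(-1)         # 9*K linear features
    return shaped + pose_dirs @ pose_feat

# Zero shape and zero pose leave the template untouched.
rest = posed_shape(np.zeros(5), np.zeros(3 * K))
```

In the full model this corrected mesh is then posed by blend skinning with the learned weights and joint regressor.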

Google Patents [BibTex]

Wireless actuation with functional acoustic surfaces

Qiu, T., Palagi, S., Mark, A. G., Melde, K., Adams, F., Fischer, P.

Appl. Phys. Lett., 109(19):191602, November 2016, APL Editor's pick. APL News. (article)

Abstract
Miniaturization calls for micro-actuators that can be powered wirelessly and addressed individually. Here, we develop functional surfaces consisting of arrays of acoustically resonant microcavities, and we demonstrate their application as two-dimensional wireless actuators. When remotely powered by an acoustic field, the surfaces provide highly directional propulsive forces in fluids through acoustic streaming. A maximal force of ~0.45 mN is measured on a 4 × 4 mm² functional surface. The response of the surfaces with bubbles of different sizes is characterized experimentally. This shows a marked peak around the micro-bubbles' resonance frequency, as estimated by both an analytical model and numerical simulations. The strong frequency dependence can be exploited to address different surfaces with different acoustic frequencies, thus achieving wireless actuation with multiple degrees of freedom. The use of the functional surfaces as wireless ready-to-attach actuators is demonstrated by implementing a wireless and bidirectional miniaturized rotary motor, which is 2.6 × 2.6 × 5 mm³ in size and generates a stall torque of ~0.5 mN·mm. The adoption of micro-structured surfaces as wireless actuators opens new possibilities in the development of miniaturized devices and tools for fluidic environments that are accessible by low-intensity ultrasound fields.
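
The resonance peak mentioned above can be roughly located with the classical Minnaert formula for a gas bubble in a liquid — a textbook estimate, not the paper's own analytical model, which accounts for the cavity geometry and surface tension:

```python
import math

def minnaert_frequency(radius_m, p0=101325.0, gamma=1.4, rho=1000.0):
    """Classical Minnaert resonance of a spherical gas bubble in water.

    f0 = (1 / (2*pi*R)) * sqrt(3*gamma*p0 / rho)
    p0: ambient pressure (Pa), gamma: gas heat-capacity ratio,
    rho: liquid density (kg/m^3). All defaults are generic assumed values.
    """
    return math.sqrt(3.0 * gamma * p0 / rho) / (2.0 * math.pi * radius_m)

# A bubble of ~50 um radius resonates in the tens-of-kHz range
# by this estimate; smaller bubbles resonate higher.
f0 = minnaert_frequency(50e-6)
```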

link (url) DOI Project Page [BibTex]

Nanomotors

Alarcon-Correa, M., Walker (Schamel), D., Qiu, T., Fischer, P.

Eur. Phys. J.-Special Topics, 225(11-12):2241-2254, November 2016 (article)

Abstract
This minireview discusses whether catalytically active macromolecules and abiotic nanocolloids, that are smaller than motile bacteria, can self-propel. Kinematic reversibility at low Reynolds number demands that self-propelling colloids must break symmetry. Methods that permit the synthesis and fabrication of Janus nanocolloids are therefore briefly surveyed, as well as means that permit the analysis of the nanocolloids' motion. Finally, recent work is reviewed which shows that nanoagents are small enough to penetrate the complex inhomogeneous polymeric network of biological fluids and gels, which exhibit diverse rheological behaviors.

DOI [BibTex]

Creating body shapes from verbal descriptions by linking similarity spaces

Hill, M. Q., Streuber, S., Hahn, C. A., Black, M. J., O’Toole, A. J.

Psychological Science, 27(11):1486-1497, November 2016 (article)

Abstract
Brief verbal descriptions of bodies (e.g. curvy, long-legged) can elicit vivid mental images. The ease with which we create these mental images belies the complexity of three-dimensional body shapes. We explored the relationship between body shapes and body descriptions and show that a small number of words can be used to generate categorically accurate representations of three-dimensional bodies. The dimensions of body shape variation that emerged in a language-based similarity space were related to major dimensions of variation computed directly from three-dimensional laser scans of 2094 bodies. This allowed us to generate three-dimensional models of people in the shape space using only their coordinates on analogous dimensions in the language-based description space. Human descriptions of photographed bodies and their corresponding models matched closely. The natural mapping between the spaces illustrates the role of language as a concise code for body shape, capturing perceptually salient global and local body features.

pdf [BibTex]

Structured light enables biomimetic swimming and versatile locomotion of photoresponsive soft microrobots

Palagi, S., Mark, A. G., Reigh, S. Y., Melde, K., Qiu, T., Zeng, H., Parmeggiani, C., Martella, D., Sanchez-Castillo, A., Kapernaum, N., Giesselmann, F., Wiersma, D. S., Lauga, E., Fischer, P.

Nature Materials, 15(6):647–653, November 2016, Max Planck press release, Nature News & Views. (article)

Abstract
Microorganisms move in challenging environments by periodic changes in body shape. In contrast, current artificial microrobots cannot actively deform, exhibiting at best passive bending under external fields. Here, by taking advantage of the wireless, scalable and spatiotemporally selective capabilities that light allows, we show that soft microrobots consisting of photoactive liquid-crystal elastomers can be driven by structured monochromatic light to perform sophisticated biomimetic motions. We realize continuum yet selectively addressable artificial microswimmers that generate travelling-wave motions to self-propel without external forces or torques, as well as microrobots capable of versatile locomotion behaviours on demand. Both theoretical predictions and experimental results confirm that multiple gaits, mimicking either symplectic or antiplectic metachrony of ciliate protozoa, can be achieved with single microswimmers. The principle of using structured light can be extended to other applications that require microscale actuation with sophisticated spatiotemporal coordination for advanced microrobotic technologies.

Video - Soft photo Micro-Swimmer DOI [BibTex]

Keep it SMPL: Automatic Estimation of 3D Human Pose and Shape from a Single Image

Bogo, F., Kanazawa, A., Lassner, C., Gehler, P., Romero, J., Black, M. J.

In Computer Vision – ECCV 2016, pages: 561-578, Lecture Notes in Computer Science, Springer International Publishing, 14th European Conference on Computer Vision, October 2016 (inproceedings)

Abstract
We describe the first method to automatically estimate the 3D pose of the human body as well as its 3D shape from a single unconstrained image. We estimate a full 3D mesh and show that 2D joints alone carry a surprising amount of information about body shape. The problem is challenging because of the complexity of the human body, articulation, occlusion, clothing, lighting, and the inherent ambiguity in inferring 3D from 2D. To solve this, we first use a recently published CNN-based method, DeepCut, to predict (bottom-up) the 2D body joint locations. We then fit (top-down) a recently published statistical body shape model, called SMPL, to the 2D joints. We do so by minimizing an objective function that penalizes the error between the projected 3D model joints and detected 2D joints. Because SMPL captures correlations in human shape across the population, we are able to robustly fit it to very little data. We further leverage the 3D model to prevent solutions that cause interpenetration. We evaluate our method, SMPLify, on the Leeds Sports, HumanEva, and Human3.6M datasets, showing superior pose accuracy with respect to the state of the art.
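
The core fitting step — a confidence-weighted, robust reprojection error between projected model joints and detected 2D joints — can be sketched as follows. The "model", camera, and penalty parameters here are illustrative stand-ins, not the actual SMPLify implementation:

```python
import numpy as np

def project(joints3d, focal=500.0, center=(0.0, 0.0)):
    """Pinhole projection of 3D joints (camera at origin, z forward)."""
    x = focal * joints3d[:, 0] / joints3d[:, 2] + center[0]
    y = focal * joints3d[:, 1] / joints3d[:, 2] + center[1]
    return np.stack([x, y], axis=1)

def gm_rho(r, sigma=100.0):
    """Geman-McClure robust penalty (sigma is an assumed scale)."""
    return (r ** 2) * sigma ** 2 / (sigma ** 2 + r ** 2)

def joint_objective(params, model_joints_fn, joints2d, conf):
    """Sum of confidence-weighted robust reprojection residuals."""
    proj = project(model_joints_fn(params))
    res = np.linalg.norm(proj - joints2d, axis=1)
    return np.sum(conf * gm_rho(res))

# Tiny example: a 'model' whose joints translate rigidly with params.
base = np.array([[0.0, 0.0, 2.0], [0.1, 0.3, 2.0]])
model_joints = lambda t: base + np.array([t[0], t[1], 0.0])
target = project(base)                  # simulated 2D detections
conf = np.ones(len(target))
e0 = joint_objective(np.zeros(2), model_joints, target, conf)
e1 = joint_objective(np.array([0.05, 0.0]), model_joints, target, conf)
```

In the real system this term is combined with pose, shape, and interpenetration priors and minimized over the full SMPL parameters.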

pdf Video Sup Mat video Code Project Project Page [BibTex]

Superpixel Convolutional Networks using Bilateral Inceptions

Gadde, R., Jampani, V., Kiefel, M., Kappler, D., Gehler, P.

In European Conference on Computer Vision (ECCV), Lecture Notes in Computer Science, Springer, 14th European Conference on Computer Vision, October 2016 (inproceedings)

Abstract
In this paper we propose a CNN architecture for semantic image segmentation. We introduce a new “bilateral inception” module that can be inserted in existing CNN architectures and performs bilateral filtering, at multiple feature-scales, between superpixels in an image. The feature spaces for bilateral filtering and other parameters of the module are learned end-to-end using standard backpropagation techniques. The bilateral inception module addresses two issues that arise with general CNN segmentation architectures. First, this module propagates information between (super) pixels while respecting image edges, thus using the structured information of the problem for improved results. Second, the layer recovers a full resolution segmentation result from the lower resolution solution of a CNN. In the experiments, we modify several existing CNN architectures by inserting our inception modules between the last CNN (1 × 1 convolution) layers. Empirical results on three different datasets show reliable improvements not only in comparison to the baseline networks, but also in comparison to several dense-pixel prediction techniques such as CRFs, while being competitive in time.
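
The filtering step between superpixels amounts to Gaussian weighting in a feature space whose scale is learned end-to-end. A minimal numpy version — toy features, and a fixed rather than learned scale — looks like:

```python
import numpy as np

def bilateral_inception(feats, values, theta=1.0):
    """Gaussian bilateral filtering *between superpixels*.

    feats  : (S, D) feature vector per superpixel (e.g. mean color+position)
    values : (S, C) activations to be propagated between superpixels
    theta  : scale of the feature space (learned in the paper; fixed here)
    """
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * theta ** 2))
    w /= w.sum(axis=1, keepdims=True)   # normalize per superpixel
    return w @ values

# Two similar superpixels and one distant outlier: information flows
# between the similar pair, barely to the outlier.
feats = np.array([[0.0], [0.1], [5.0]])
vals = np.array([[1.0], [0.0], [0.0]])
out = bilateral_inception(feats, vals, theta=0.5)
```

Because the weights respect feature-space (and hence edge) similarity, activations propagate along object boundaries rather than across them.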

pdf supplementary poster Project Page Project Page [BibTex]

Barrista - Caffe Well-Served

Lassner, C., Kappler, D., Kiefel, M., Gehler, P.

In ACM Multimedia Open Source Software Competition, ACM OSSC16, October 2016 (inproceedings)

Abstract
The caffe framework is one of the leading deep learning toolboxes in the machine learning and computer vision community. While it offers efficiency and configurability, it falls short of a full interface to Python. With increasingly involved procedures for training deep networks and reaching depths of hundreds of layers, creating configuration files and keeping them consistent becomes an error-prone process. We introduce the barrista framework, offering full, pythonic control over caffe. It separates responsibilities and offers code to solve frequently occurring tasks for pre-processing, training and model inspection. It is compatible with all caffe versions since mid 2015 and can import and export .prototxt files. Examples are included, e.g., a deep residual network implemented in only 172 lines (for arbitrary depths), compared to 2320 lines in the official implementation for the equivalent model.

pdf link (url) DOI Project Page [BibTex]

Capture of 2D Microparticle Arrays via a UV-Triggered Thiol-yne “Click” Reaction

Walker (Schamel), D., Singh, D. P., Fischer, P.

Advanced Materials, 28(44):9846-9850, September 2016 (article)

Abstract
Immobilization of colloidal assemblies onto solid supports via a fast UV-triggered click-reaction is achieved. Transient assemblies of microparticles and colloidal materials can be captured and transferred to solid supports. The technique does not require complex reaction conditions, and is compatible with a variety of particle assembly methods.

DOI [BibTex]

Magnesium plasmonics for UV applications and chiral sensing

Jeong, H. H., Mark, A. G., Fischer, P.

Chem. Comm., 52(82):12179-12182, September 2016 (article)

Abstract
We demonstrate that chiral magnesium nanoparticles show remarkable plasmonic extinction- and chiroptical-effects in the ultraviolet region. The Mg nanohelices possess an enhanced local surface plasmon resonance (LSPR) sensitivity due to the strong dispersion of most substances in the UV region.

DOI [BibTex]

Holograms for acoustics

Melde, K., Mark, A. G., Qiu, T., Fischer, P.

Nature, 537, pages: 518-522, September 2016, Max Planck press release, Nature News & Views, Nature Video. (article)

Abstract
Holographic techniques are fundamental to applications such as volumetric displays(1), high-density data storage and optical tweezers that require spatial control of intricate optical(2) or acoustic fields(3,4) within a three-dimensional volume. The basis of holography is spatial storage of the phase and/or amplitude profile of the desired wavefront(5,6) in a manner that allows that wavefront to be reconstructed by interference when the hologram is illuminated with a suitable coherent source. Modern computer-generated holography(7) skips the process of recording a hologram from a physical scene, and instead calculates the required phase profile before rendering it for reconstruction. In ultrasound applications, the phase profile is typically generated by discrete and independently driven ultrasound sources(3,4,8-12); however, these can only be used in small numbers, which limits the complexity or degrees of freedom that can be attained in the wavefront. Here we introduce monolithic acoustic holograms, which can reconstruct diffraction-limited acoustic pressure fields and thus arbitrary ultrasound beams. We use rapid fabrication to craft the holograms and achieve reconstruction degrees of freedom two orders of magnitude higher than commercial phased array sources. The technique is inexpensive, appropriate for both transmission and reflection elements, and scales well to higher information content, larger aperture size and higher power. The complex three-dimensional pressure and phase distributions produced by these acoustic holograms allow us to demonstrate new approaches to controlled ultrasonic manipulation of solids in water, and of liquids and solids in air. We expect that acoustic holograms will enable new capabilities in beam-steering and the contactless transfer of power, improve medical imaging, and drive new applications of ultrasound.
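
For a transmission element, the computed phase map can be encoded as a local thickness of hologram material whose sound speed differs from the surrounding water, and under that simplifying assumption the relief height follows directly. The sound speeds and frequency below are generic assumed values, not the paper's specific material parameters:

```python
import numpy as np

def relief_height(phase, freq, c_medium=1480.0, c_holo=2400.0):
    """Thickness that imprints `phase` (rad) on a plane wave at `freq` (Hz).

    Replacing a water column of height t by hologram material changes the
    transit phase by 2*pi*f*t*(1/c_holo - 1/c_medium); solve for t.
    c_medium ~ water, c_holo ~ a printed polymer (assumed values).
    """
    dk = 2.0 * np.pi * freq * (1.0 / c_holo - 1.0 / c_medium)
    return phase / dk

# Map a target phase profile (e.g. from an iterative computer-generated
# holography step) to a printable height map at 2 MHz.
target_phase = np.linspace(0.0, -2.0 * np.pi, 5)   # phase advance
heights = relief_height(target_phase, 2.0e6)
```

With a faster-than-water material the material imparts a phase advance, so a full 2π of phase corresponds to a millimeter-scale relief at these parameters.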

Video - Holograms for Sound DOI Project Page [BibTex]

A loop-gap resonator for chirality-sensitive nuclear magneto-electric resonance (NMER)

Garbacz, P., Fischer, P., Kraemer, S.

J. Chem. Phys., 145(10):104201, September 2016 (article)

Abstract
Direct detection of molecular chirality is practically impossible by methods of standard nuclear magnetic resonance (NMR) that is based on interactions involving magnetic-dipole and magnetic-field operators. However, theoretical studies provide a possible direct probe of chirality by exploiting an enantiomer-selective additional coupling involving magnetic-dipole, magnetic-field, and electric-field operators. This offers a way for direct experimental detection of chirality by nuclear magneto-electric resonance (NMER). This method uses both resonant magnetic and electric radiofrequency (RF) fields. The weakness of the chiral interaction though requires a large electric RF field and a small transverse RF magnetic field over the sample volume, which is a non-trivial constraint. In this study, we present a detailed study of the NMER concept and a possible experimental realization based on a loop-gap resonator. For this original device, the basic principle and numerical studies as well as fabrication and measurements of the frequency dependence of the scattering parameter are reported. By simulating the NMER spin dynamics for our device and taking the ¹⁹F NMER signal of enantiomer-pure 1,1,1-trifluoropropan-2-ol, we predict a chirality-induced NMER signal that accounts for 1%-5% of the standard achiral NMR signal.

DOI [BibTex]

Soft continuous microrobots with multiple intrinsic degrees of freedom

Palagi, S., Mark, A. G., Melde, K., Zeng, H., Parmeggiani, C., Martella, D., Wiersma, D. S., Fischer, P.

In 2016 International Conference on Manipulation, Automation and Robotics at Small Scales (MARSS), pages: 1-5, July 2016 (inproceedings)

Abstract
One of the main challenges in the development of microrobots, i.e. robots at the sub-millimeter scale, is the difficulty of adopting traditional solutions for power, control and, especially, actuation. As a result, most current microrobots are directly manipulated by external fields, and possess only a few passive degrees of freedom (DOFs). We have reported a strategy that enables embodiment, remote powering and control of a large number of DOFs in mobile soft microrobots. These consist of photo-responsive materials, such that the actuation of their soft continuous body can be selectively and dynamically controlled by structured light fields. Here we use finite-element modelling to evaluate the effective number of DOFs that are addressable in our microrobots. We also demonstrate that by this flexible approach different actuation patterns can be obtained, and thus different locomotion performances can be achieved within the very same microrobot. The reported results confirm the versatility of the proposed approach, which allows for easy application-specific optimization and online reconfiguration of the microrobot's behavior. Such versatility will enable advanced applications of robotics and automation at the micro scale.

DOI [BibTex]

Non-parametric Models for Structured Data and Applications to Human Bodies and Natural Scenes

Lehrmann, A.

ETH Zurich, July 2016 (phdthesis)

Abstract
The purpose of this thesis is the study of non-parametric models for structured data and their fields of application in computer vision. We aim at the development of context-sensitive architectures which are both expressive and efficient. Our focus is on directed graphical models, in particular Bayesian networks, where we combine the flexibility of non-parametric local distributions with the efficiency of a global topology with bounded treewidth. A bound on the treewidth is obtained by either constraining the maximum indegree of the underlying graph structure or by introducing determinism. The non-parametric distributions in the nodes of the graph are given by decision trees or kernel density estimators. The information flow implied by specific network topologies, especially the resultant (conditional) independencies, allows for a natural integration and control of contextual information. We distinguish between three different types of context: static, dynamic, and semantic. In four different approaches we propose models which exhibit varying combinations of these contextual properties and allow modeling of structured data in space, time, and hierarchies derived thereof. The generative character of the presented models enables a direct synthesis of plausible hypotheses. Extensive experiments validate the developed models in two application scenarios which are of particular interest in computer vision: human bodies and natural scenes. In the practical sections of this work we discuss both areas from different angles and show applications of our models to human pose, motion, and segmentation as well as object categorization and localization. Here, we benefit from the availability of modern datasets of unprecedented size and diversity. Comparisons to traditional approaches and state-of-the-art research on the basis of well-established evaluation criteria allows the objective assessment of our contributions.

pdf [BibTex]

Active Nanorheology with Plasmonics

Jeong, H. H., Mark, A. G., Lee, T., Alarcon-Correa, M., Eslami, S., Qiu, T., Gibbs, J. G., Fischer, P.

Nano Letters, 16(8):4887-4894, July 2016 (article)

Abstract
Nanoplasmonic systems are valued for their strong optical response and their small size. Most plasmonic sensors and systems to date have been rigid and passive. However, rendering these structures dynamic opens new possibilities for applications. Here we demonstrate that dynamic plasmonic nanoparticles can be used as mechanical sensors to selectively probe the rheological properties of a fluid in situ at the nanoscale and in microscopic volumes. We fabricate chiral magneto-plasmonic nanocolloids that can be actuated by an external magnetic field, which in turn allows for the direct and fast modulation of their distinct optical response. The method is robust and allows nanorheological measurements with a mechanical sensitivity of ~0.1 cP, even in strongly absorbing fluids with an optical density of up to OD ≈ 3 (~0.1% light transmittance) and in the presence of scatterers (e.g., 50% v/v red blood cells).

DOI [BibTex]

Wireless actuator based on ultrasonic bubble streaming

Qiu, T., Palagi, S., Mark, A. G., Melde, K., Fischer, P.

In 2016 International Conference on Manipulation, Automation and Robotics at Small Scales (MARSS), pages: 1-5, July 2016 (inproceedings)

Abstract
Miniaturized actuators are a key element for manipulation and automation at small scales. Here, we propose a new miniaturized actuator, which consists of an array of micro gas bubbles immersed in a fluid. Under ultrasonic excitation, the oscillation of the micro gas bubbles results in acoustic streaming and provides a propulsive force that drives the actuator. The actuator was fabricated by lithography and fluidic streaming was observed under ultrasound excitation. Theoretical modelling and numerical simulations were carried out to show that lowering the surface tension results in a larger amplitude of the bubble oscillation, and thus leads to a higher propulsive force. Experimental results also demonstrate that the propulsive force increases 3.5 times when the surface tension is lowered by adding a surfactant. An actuator with a 4×4 mm² surface area provides a driving force of about 0.46 mN, suggesting that it can be used as a wireless actuator for small-scale robots and medical instruments.

link (url) DOI [BibTex]

Body Talk: Crowdshaping Realistic 3D Avatars with Words

Streuber, S., Quiros-Ramirez, M. A., Hill, M. Q., Hahn, C. A., Zuffi, S., O’Toole, A., Black, M. J.

ACM Trans. Graph. (Proc. SIGGRAPH), 35(4):54:1-54:14, July 2016 (article)

Abstract
Realistic, metrically accurate, 3D human avatars are useful for games, shopping, virtual reality, and health applications. Such avatars are not in wide use because solutions for creating them from high-end scanners, low-cost range cameras, and tailoring measurements all have limitations. Here we propose a simple solution and show that it is surprisingly accurate. We use crowdsourcing to generate attribute ratings of 3D body shapes corresponding to standard linguistic descriptions of 3D shape. We then learn a linear function relating these ratings to 3D human shape parameters. Given an image of a new body, we again turn to the crowd for ratings of the body shape. The collection of linguistic ratings of a photograph provides remarkably strong constraints on the metric 3D shape. We call the process crowdshaping and show that our Body Talk system produces shapes that are perceptually indistinguishable from bodies created from high-resolution scans and that the metric accuracy is sufficient for many tasks. This makes body “scanning” practical without a scanner, opening up new applications including database search, visualization, and extracting avatars from books.
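
The learned linear function from crowd attribute ratings to shape parameters can be illustrated on synthetic data. Everything below — the dimensions, noise level, and the use of plain least squares — is a stand-in, not the paper's actual estimator or data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training data: per body, 30 averaged word ratings and
# 10 shape-space coefficients (a PCA of registered 3D scans in the paper).
n_bodies, n_words, n_coeffs = 200, 30, 10
ratings = rng.normal(size=(n_bodies, n_words))
true_map = rng.normal(size=(n_words, n_coeffs))
shapes = ratings @ true_map + 0.01 * rng.normal(size=(n_bodies, n_coeffs))

# Learn the linear function ratings -> shape parameters.
W, *_ = np.linalg.lstsq(ratings, shapes, rcond=None)

# Predict the 3D shape of a new body from its crowd ratings alone.
new_ratings = rng.normal(size=(1, n_words))
pred = new_ratings @ W
```

The predicted coefficients would then be decoded through the body shape model to produce the avatar mesh.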

pdf web tool video talk (ppt) [BibTex]

DeepCut: Joint Subset Partition and Labeling for Multi Person Pose Estimation

Pishchulin, L., Insafutdinov, E., Tang, S., Andres, B., Andriluka, M., Gehler, P., Schiele, B.

In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages: 4929-4937, IEEE, IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), June 2016 (inproceedings)

Abstract
This paper considers the task of articulated human pose estimation of multiple people in real-world images. We propose an approach that jointly solves the tasks of detection and pose estimation: it infers the number of persons in a scene, identifies occluded body parts, and disambiguates body parts between people in close proximity to each other. This joint formulation is in contrast to previous strategies that address the problem by first detecting people and subsequently estimating their body pose. We propose a partitioning and labeling formulation of a set of body-part hypotheses generated with CNN-based part detectors. Our formulation, an instance of an integer linear program, implicitly performs non-maximum suppression on the set of part candidates and groups them to form configurations of body parts respecting geometric and appearance constraints. Experiments on four different datasets demonstrate state-of-the-art results for both single-person and multi-person pose estimation.

code pdf supplementary DOI Project Page [BibTex]

Video segmentation via object flow

Tsai, Y., Yang, M., Black, M. J.

In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), June 2016 (inproceedings)

Abstract
Video object segmentation is challenging due to fast moving objects, deforming shapes, and cluttered backgrounds. Optical flow can be used to propagate an object segmentation over time but, unfortunately, flow is often inaccurate, particularly around object boundaries. Such boundaries are precisely where we want our segmentation to be accurate. To obtain accurate segmentation across time, we propose an efficient algorithm that considers video segmentation and optical flow estimation simultaneously. For video segmentation, we formulate a principled, multiscale, spatio-temporal objective function that uses optical flow to propagate information between frames. For optical flow estimation, particularly at object boundaries, we compute the flow independently in the segmented regions and recompose the results. We call the process object flow and demonstrate the effectiveness of jointly optimizing optical flow and video segmentation using an iterative scheme. Experiments on the SegTrack v2 and Youtube-Objects datasets show that the proposed algorithm performs favorably against the other state-of-the-art methods.

pdf [BibTex]

Patches, Planes and Probabilities: A Non-local Prior for Volumetric 3D Reconstruction

Ulusoy, A. O., Black, M. J., Geiger, A.

In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), June 2016 (inproceedings)

Abstract
In this paper, we propose a non-local structured prior for volumetric multi-view 3D reconstruction. Towards this goal, we present a novel Markov random field model based on ray potentials in which assumptions about large 3D surface patches such as planarity or Manhattan world constraints can be efficiently encoded as probabilistic priors. We further derive an inference algorithm that reasons jointly about voxels, pixels and image segments, and estimates marginal distributions of appearance, occupancy, depth, normals and planarity. Key to tractable inference is a novel hybrid representation that spans both voxel and pixel space and that integrates non-local information from 2D image segmentations in a principled way. We compare our non-local prior to commonly employed local smoothness assumptions and a variety of state-of-the-art volumetric reconstruction baselines on challenging outdoor scenes with textureless and reflective surfaces. Our experiments indicate that regularizing over larger distances has the potential to resolve ambiguities where local regularizers fail.

YouTube pdf poster suppmat Project Page [BibTex]

Capturing Hands in Action using Discriminative Salient Points and Physics Simulation

Tzionas, D., Ballan, L., Srikantha, A., Aponte, P., Pollefeys, M., Gall, J.

International Journal of Computer Vision (IJCV), 118(2):172-193, June 2016 (article)

Abstract
Hand motion capture is a popular research field, recently gaining more attention due to the ubiquity of RGB-D sensors. However, even most recent approaches focus on the case of a single isolated hand. In this work, we focus on hands that interact with other hands or objects and present a framework that successfully captures motion in such interaction scenarios for both rigid and articulated objects. Our framework combines a generative model with discriminatively trained salient points to achieve a low tracking error and with collision detection and physics simulation to achieve physically plausible estimates even in case of occlusions and missing visual data. Since all components are unified in a single objective function which is almost everywhere differentiable, it can be optimized with standard optimization techniques. Our approach works for monocular RGB-D sequences as well as setups with multiple synchronized RGB cameras. For a qualitative and quantitative evaluation, we captured 29 sequences with a large variety of interactions and up to 150 degrees of freedom.

Website pdf link (url) DOI Project Page [BibTex]

Optical Flow with Semantic Segmentation and Localized Layers

Sevilla-Lara, L., Sun, D., Jampani, V., Black, M. J.

In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pages: 3889-3898, IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), June 2016 (inproceedings)

Abstract
Existing optical flow methods make generic, spatially homogeneous, assumptions about the spatial structure of the flow. In reality, optical flow varies across an image depending on object class. Simply put, different objects move differently. Here we exploit recent advances in static semantic scene segmentation to segment the image into objects of different types. We define different models of image motion in these regions depending on the type of object. For example, we model the motion on roads with homographies, vegetation with spatially smooth flow, and independently moving objects like cars and planes with affine motion plus deviations. We then pose the flow estimation problem using a novel formulation of localized layers, which addresses limitations of traditional layered models for dealing with complex scene motion. Our semantic flow method achieves the lowest error of any published monocular method in the KITTI-2015 flow benchmark and produces qualitatively better flow and segmentation than recent top methods on a wide range of natural videos.
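
Fitting a per-class parametric motion model to the flow inside a semantic segment — for instance the affine model used for independently moving objects — reduces to a small least-squares problem. A toy version with a synthetic, noise-free region (not the paper's code):

```python
import numpy as np

def fit_affine_flow(xy, uv):
    """Least-squares affine motion model uv = A @ [x, y, 1] for one region.

    xy : (N, 2) pixel coordinates inside one semantic segment
    uv : (N, 2) observed flow vectors at those pixels
    Returns the 2x3 affine parameter matrix A.
    """
    X = np.hstack([xy, np.ones((len(xy), 1))])   # (N, 3) homogeneous coords
    A, *_ = np.linalg.lstsq(X, uv, rcond=None)   # (3, 2)
    return A.T

# Synthetic 'car' region moving affinely: translation plus slight shear.
rng = np.random.default_rng(2)
xy = rng.uniform(0, 100, size=(50, 2))
A_true = np.array([[0.01, 0.0, 3.0], [0.0, -0.01, 1.0]])
uv = np.hstack([xy, np.ones((50, 1))]) @ A_true.T
A_est = fit_affine_flow(xy, uv)
```

In the full method such parametric fits regularize the flow within each segment, with per-pixel deviations layered on top.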

video Kitti Precomputed Data (1.6GB) pdf YouTube Sequences Code Project Page Project Page [BibTex]

Learning Sparse High Dimensional Filters: Image Filtering, Dense CRFs and Bilateral Neural Networks

Jampani, V., Kiefel, M., Gehler, P. V.

In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pages: 4452-4461, IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), June 2016 (inproceedings)

Abstract
Bilateral filters are in widespread use due to their edge-preserving properties. The common use case is to manually choose a parametric filter type, usually a Gaussian filter. In this paper, we generalize the parametrization and in particular derive a gradient descent algorithm so the filter parameters can be learned from data. This derivation allows us to learn high-dimensional linear filters that operate in sparsely populated feature spaces. We build on the permutohedral lattice construction for efficient filtering. The ability to learn more general forms of high-dimensional filters can be used in several diverse applications. First, we demonstrate the use in applications where single filter applications are desired for runtime reasons. Further, we show how this algorithm can be used to learn the pairwise potentials in densely connected conditional random fields and apply these to different image segmentation tasks. Finally, we introduce layers of bilateral filters in CNNs and propose bilateral neural networks for high-dimensional sparse data. This view provides new ways to encode model structure into network architectures. A diverse set of experiments empirically validates the usage of general forms of filters.
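As a toy illustration of the kind of hand-parameterized filter this work generalizes, the following is a minimal 1D bilateral filter in NumPy. It sketches only the classical fixed-Gaussian case, not the paper's learned permutohedral-lattice filters; the function name and parameter values are illustrative.

```python
import numpy as np

def bilateral_filter_1d(signal, sigma_s=2.0, sigma_r=0.2, radius=5):
    """Classical 1D bilateral filter: a spatial Gaussian weighted by a
    range (intensity-difference) Gaussian, so smoothing preserves edges."""
    out = np.empty_like(signal, dtype=float)
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        idx = np.arange(lo, hi)
        spatial = np.exp(-0.5 * ((idx - i) / sigma_s) ** 2)
        rng = np.exp(-0.5 * ((signal[idx] - signal[i]) / sigma_r) ** 2)
        w = spatial * rng
        out[i] = np.sum(w * signal[idx]) / np.sum(w)
    return out

# A noisy step edge: the filter attenuates the noise but keeps the step.
x = np.concatenate([np.zeros(20), np.ones(20)])
x += 0.05 * np.random.RandomState(0).randn(40)
y = bilateral_filter_1d(x)
```

Replacing the two fixed Gaussians with weights learned over a high-dimensional feature space is, roughly, the generalization the paper pursues.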

ps

project page code CVF open-access pdf supplementary poster Project Page Project Page [BibTex]



Occlusion boundary detection via deep exploration of context

Fu, H., Wang, C., Tao, D., Black, M. J.

In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), June 2016 (inproceedings)

Abstract
Occlusion boundaries contain rich perceptual information about the underlying scene structure. They also provide important cues in many visual perception tasks such as scene understanding, object recognition, and segmentation. In this paper, we improve occlusion boundary detection via enhanced exploration of contextual information (e.g., local structural boundary patterns, observations from surrounding regions, and temporal context), and in doing so develop a novel approach based on convolutional neural networks (CNNs) and conditional random fields (CRFs). Experimental results demonstrate that our detector significantly outperforms the state-of-the-art (e.g., improving the F-measure from 0.62 to 0.71 on the commonly used CMU benchmark). Last but not least, we empirically assess the roles of several important components of the proposed detector, so as to validate the rationale behind this approach.

ps

pdf [BibTex]



Semantic Instance Annotation of Street Scenes by 3D to 2D Label Transfer

Xie, J., Kiefel, M., Sun, M., Geiger, A.

In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), June 2016 (inproceedings)

Abstract
Semantic annotations are vital for training models for object recognition, semantic segmentation or scene understanding. Unfortunately, pixelwise annotation of images at very large scale is labor-intensive and only little labeled data is available, particularly at instance level and for street scenes. In this paper, we propose to tackle this problem by lifting the semantic instance labeling task from 2D into 3D. Given reconstructions from stereo or laser data, we annotate static 3D scene elements with rough bounding primitives and develop a probabilistic model which transfers this information into the image domain. We leverage our method to obtain 2D labels for a novel suburban video dataset which we have collected, resulting in 400k semantic and instance image annotations. A comparison of our method to state-of-the-art label transfer baselines reveals that 3D information enables more efficient annotation while at the same time resulting in improved accuracy and time-coherent labels.

avg ps

pdf suppmat Project Page Project Page [BibTex]



Auxetic Metamaterial Simplifies Soft Robot Design

Mark, A. G., Palagi, S., Qiu, T., Fischer, P.

In 2016 IEEE Int. Conf. on Robotics and Automation (ICRA), pages: 4951-4956, May 2016 (inproceedings)

Abstract
Soft materials are being adopted in robotics in order to facilitate biomedical applications and in order to achieve simpler and more capable robots. One route to simplification is to design the robot's body using 'smart materials' that carry the burden of control and actuation. Metamaterials enable just such rational design of the material properties. Here we present a soft robot that exploits mechanical metamaterials for the intrinsic synchronization of two passive clutches which contact its travel surface. Doing so allows it to move through an enclosed passage with an inchworm motion propelled by a single actuator. Our soft robot consists of two 3D-printed metamaterials that implement auxetic and normal elastic properties. The design, fabrication and characterization of the metamaterials are described. In addition, a working soft robot is presented. Since the synchronization mechanism is a feature of the robot's material body, we believe that the proposed design will enable compliant and robust implementations that scale well with miniaturization.

pf

link (url) DOI [BibTex]



Towards Photo-Induced Swimming: Actuation of Liquid Crystalline Elastomer in Water

Cerretti, G., Martella, D., Zeng, H., Parmeggiani, C., Palagi, S., Mark, A. G., Melde, K., Qiu, T., Fischer, P., Wiersma, D.

In Proc. SPIE 9738, Laser 3D Manufacturing III, 97380T, April 2016 (inproceedings)

Abstract
Liquid Crystalline Elastomers (LCEs) are very promising smart materials that can be made sensitive to different external stimuli, such as heat, pH, humidity and light, by changing their chemical composition. In this paper we report the implementation of a nematically aligned LCE actuator able to undergo large light-induced deformations. We prove that this property is still present even when the actuator is submerged in fresh water. Thanks to the presence of azo-dye moieties, capable of undergoing a reversible trans-cis photo-isomerization, and by applying light of two different wavelengths, we managed to control the bending of such an actuator in the liquid environment. The reported results represent the first step towards swimming microdevices powered by light.

pf

link (url) DOI [BibTex]



Dispersion and shape engineered plasmonic nanosensors

Jeong, H. H., Mark, A. G., Alarcon-Correa, M., Kim, I., Oswald, P., Lee, T. C., Fischer, P.

Nature Communications, 7, pages: 11331, March 2016 (article)

Abstract
Biosensors based on the localized surface plasmon resonance (LSPR) of individual metallic nanoparticles promise to deliver modular, low-cost sensing with high-detection thresholds. However, they continue to suffer from relatively low sensitivity and figures of merit (FOMs). Herein we introduce the idea of sensitivity enhancement of LSPR sensors through engineering of the material dispersion function. Employing dispersion and shape engineering of chiral nanoparticles leads to remarkable refractive index sensitivities (1,091 nm RIU⁻¹ at λ = 921 nm) and FOMs (>2,800 RIU⁻¹). A key feature is that the polarization-dependent extinction of the nanoparticles is now characterized by rich spectral features, including bipolar peaks and nulls, suitable for tracking refractive index changes. This sensing modality offers strong optical contrast even in the presence of highly absorbing media, an important consideration for use in complex biological media with limited transmission. The technique is sensitive to surface-specific binding events which we demonstrate through biotin-avidin surface coupling.

pf

link (url) DOI [BibTex]


Appealing female avatars from 3D body scans: Perceptual effects of stylization

Fleming, R., Mohler, B., Romero, J., Black, M. J., Breidt, M.

In 11th Int. Conf. on Computer Graphics Theory and Applications (GRAPP), February 2016 (inproceedings)

Abstract
Advances in 3D scanning technology allow us to create realistic virtual avatars from full body 3D scan data. However, negative reactions to some realistic computer generated humans suggest that this approach might not always provide the most appealing results. Using styles derived from existing popular character designs, we present a novel automatic stylization technique for body shape and colour information based on a statistical 3D model of human bodies. We investigate whether such stylized body shapes result in increased perceived appeal with two different experiments: One focuses on body shape alone, the other investigates the additional role of surface colour and lighting. Our results consistently show that the most appealing avatar is a partially stylized one. Importantly, avatars with high stylization or no stylization at all were rated to have the least appeal. The inclusion of colour information and improvements to render quality had no significant effect on the overall perceived appeal of the avatars, and we observe that the body shape primarily drives the change in appeal ratings. For body scans with colour information, we found that a partially stylized avatar was most effective, increasing average appeal ratings by approximately 34%.

ps

pdf Project Page [BibTex]



Magnetic Propulsion of Microswimmers with DNA-Based Flagellar Bundles

Maier, A. M., Weig, C., Oswald, P., Frey, E., Fischer, P., Liedl, T.

Nano Letters, 16(2):906-910, January 2016 (article)

Abstract
We show that DNA-based self-assembly can serve as a general and flexible tool to construct artificial flagella of several micrometers in length and only tens of nanometers in diameter. By attaching the DNA flagella to biocompatible magnetic microparticles, we provide a proof of concept demonstration of hybrid structures that, when rotated in an external magnetic field, propel by means of a flagellar bundle, similar to self-propelling peritrichous bacteria. Our theoretical analysis predicts that flagellar bundles that possess a length-dependent bending stiffness should exhibit a superior swimming speed compared to swimmers with a single appendage. The DNA self-assembly method permits the realization of these improved flagellar bundles in good agreement with our quantitative model. DNA flagella with well-controlled shape could fundamentally increase the functionality of fully biocompatible nanorobots and extend the scope and complexity of active materials.

pf

DOI [BibTex]



Human Pose Estimation from Video and IMUs

Marcard, T. V., Pons-Moll, G., Rosenhahn, B.

IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 38(8):1533-1547, January 2016 (article)

ps

data pdf dataset_documentation [BibTex]



Deep Discrete Flow

Güney, F., Geiger, A.

Asian Conference on Computer Vision (ACCV), 2016 (conference) Accepted

avg ps

pdf suppmat Project Page [BibTex]



Perceiving Systems (2011-2015)
Scientific Advisory Board Report, 2016 (misc)

ps

pdf [BibTex]



Shape estimation of subcutaneous adipose tissue using an articulated statistical shape model

Yeo, S. Y., Romero, J., Loper, M., Machann, J., Black, M.

Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, 0(0):1-8, 2016 (article)

ps

publisher website preprint pdf link (url) DOI Project Page [BibTex]



Method for encapsulating a nanostructure, coated nanostructure and use of a coated nanostructure

Jeong, H. H., Lee, T. C., Fischer, P.

Google Patents, 2016, WO Patent App. PCT/EP2016/056,377 (patent)

Abstract
The present invention relates to a method for encapsulating a nanostructure, the method comprising the steps of: providing a substrate; forming a plug composed of plug material at said substrate; forming a nanostructure on or at said plug; forming a shell composed of at least one shell material on external surfaces of the nanostructure, with the at least one shell material covering said nanostructure and at least some of the plug material, whereby the shell and the plug encapsulate the nanostructure. The invention further relates to a coated nanostructure and to the use of a coated nanostructure.

pf

link (url) [BibTex]


Reconstructing Articulated Rigged Models from RGB-D Videos

Tzionas, D., Gall, J.

In European Conference on Computer Vision Workshops 2016 (ECCVW’16) - Workshop on Recovering 6D Object Pose (R6D’16), pages: 620-633, Springer International Publishing, 2016 (inproceedings)

Abstract
Although commercial and open-source software exist to reconstruct a static object from a sequence recorded with an RGB-D sensor, there is a lack of tools that build rigged models of articulated objects that deform realistically and can be used for tracking or animation. In this work, we fill this gap and propose a method that creates a fully rigged model of an articulated object from depth data of a single sensor. To this end, we combine deformable mesh tracking, motion segmentation based on spectral clustering and skeletonization based on mean curvature flow. The fully rigged model then consists of a watertight mesh, embedded skeleton, and skinning weights.

ps

pdf suppl Project's Website YouTube link (url) DOI [BibTex]



The GRASP Taxonomy of Human Grasp Types

Feix, T., Romero, J., Schmiedmayer, H., Dollar, A., Kragic, D.

IEEE Transactions on Human-Machine Systems, 46(1):66-77, 2016 (article)

ps

publisher website pdf DOI Project Page [BibTex]



Map-Based Probabilistic Visual Self-Localization

Brubaker, M. A., Geiger, A., Urtasun, R.

IEEE Trans. on Pattern Analysis and Machine Intelligence (PAMI), 2016 (article)

Abstract
Accurate and efficient self-localization is a critical problem for autonomous systems. This paper describes an affordable solution to vehicle self-localization which uses odometry computed from two video cameras and road maps as the sole inputs. The core of the method is a probabilistic model for which an efficient approximate inference algorithm is derived. The inference algorithm is able to utilize distributed computation in order to meet the real-time requirements of autonomous systems in some instances. Because of the probabilistic nature of the model the method is capable of coping with various sources of uncertainty including noise in the visual odometry and inherent ambiguities in the map (e.g., in a Manhattan world). By exploiting freely available, community developed maps and visual odometry measurements, the proposed method is able to localize a vehicle to 4m on average after 52 seconds of driving on maps which contain more than 2,150km of drivable roads.

avg ps

pdf Project Page [BibTex]



Advanced Structured Prediction

Nowozin, S., Gehler, P. V., Jancsary, J., Lampert, C. H.

Advanced Structured Prediction, pages: 432, Neural Information Processing Series, MIT Press, November 2014 (book)

Abstract
The goal of structured prediction is to build machine learning models that predict relational information that itself has structure, such as being composed of multiple interrelated parts. These models, which reflect prior knowledge, task-specific relations, and constraints, are used in fields including computer vision, speech recognition, natural language processing, and computational biology. They can carry out such tasks as predicting a natural language sentence, or segmenting an image into meaningful components. These models are expressive and powerful, but exact computation is often intractable. A broad research effort in recent years has aimed at designing structured prediction models and approximate inference and learning procedures that are computationally efficient. This volume offers an overview of this recent research in order to make the work accessible to a broader research community. The chapters, by leading researchers in the field, cover a range of topics, including research trends, the linear programming relaxation approach, innovations in probabilistic modeling, recent theoretical progress, and resource-aware learning.

ps

publisher link (url) [BibTex]



MoSh: Motion and Shape Capture from Sparse Markers

Loper, M. M., Mahmood, N., Black, M. J.

ACM Transactions on Graphics, (Proc. SIGGRAPH Asia), 33(6):220:1-220:13, ACM, New York, NY, USA, November 2014 (article)

Abstract
Marker-based motion capture (mocap) is widely criticized as producing lifeless animations. We argue that important information about body surface motion is present in standard marker sets but is lost in extracting a skeleton. We demonstrate a new approach called MoSh (Motion and Shape capture), that automatically extracts this detail from mocap data. MoSh estimates body shape and pose together using sparse marker data by exploiting a parametric model of the human body. In contrast to previous work, MoSh solves for the marker locations relative to the body and estimates accurate body shape directly from the markers without the use of 3D scans; this effectively turns a mocap system into an approximate body scanner. MoSh is able to capture soft tissue motions directly from markers by allowing body shape to vary over time. We evaluate the effect of different marker sets on pose and shape accuracy and propose a new sparse marker set for capturing soft-tissue motion. We illustrate MoSh by recovering body shape, pose, and soft-tissue motion from archival mocap data and using this to produce animations with subtlety and realism. We also show soft-tissue motion retargeting to new characters and show how to magnify the 3D deformations of soft tissue to create animations with appealing exaggerations.

ps

pdf video data pdf from publisher link (url) DOI Project Page Project Page Project Page [BibTex]



Hough-based Object Detection with Grouped Features

Srikantha, A., Gall, J.

International Conference on Image Processing, pages: 1653-1657, Paris, France, IEEE International Conference on Image Processing, October 2014 (conference)

Abstract
Hough-based voting approaches have been successfully applied to object detection. While these methods can be efficiently implemented by random forests, they estimate the probability for an object hypothesis for each feature independently. In this work, we address this problem by grouping features in a local neighborhood to obtain a better estimate of the probability. To this end, we propose oblique classification-regression forests that combine features of different trees. We further investigate the benefit of combining independent and grouped features and evaluate the approach on RGB and RGB-D datasets.
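The independent per-feature voting that this work improves on can be illustrated with a deliberately simple 1D generalized-Hough sketch. This is not the paper's oblique classification-regression forests; the function name and the positions and offsets are made up for illustration. Each feature casts votes for the object center independently, and the accumulator peak is the detection.

```python
import numpy as np

def hough_vote_center(feature_positions, learned_offsets, size=100):
    """Toy 1D Hough voting: every detected feature casts a vote for the
    object center at position + offset, each vote counted independently.
    The accumulator peak gives the detected center."""
    acc = np.zeros(size)
    for p in feature_positions:
        for off in learned_offsets:
            c = p + off
            if 0 <= c < size:
                acc[c] += 1
    return int(np.argmax(acc))

# Three features, each a known offset from a true center at 40:
# their votes agree at 40, while spurious votes scatter elsewhere.
center = hough_vote_center([35, 43, 48], [5, -3, -8])
```

Grouping features in a local neighborhood, as the paper proposes, replaces these independent per-feature votes with a joint estimate.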

ps

pdf poster DOI Project Page [BibTex]



Omnidirectional 3D Reconstruction in Augmented Manhattan Worlds

Schoenbein, M., Geiger, A.

International Conference on Intelligent Robots and Systems, pages: 716 - 723, IEEE, Chicago, IL, USA, IEEE/RSJ International Conference on Intelligent Robots and Systems, October 2014 (conference)

Abstract
This paper proposes a method for high-quality omnidirectional 3D reconstruction of augmented Manhattan worlds from catadioptric stereo video sequences. In contrast to existing works we do not rely on constructing virtual perspective views, but instead propose to optimize depth jointly in a unified omnidirectional space. Furthermore, we show that plane-based prior models can be applied even though planes in 3D do not project to planes in the omnidirectional domain. Towards this goal, we propose an omnidirectional slanted-plane Markov random field model which relies on plane hypotheses extracted using a novel voting scheme for 3D planes in omnidirectional space. To quantitatively evaluate our method we introduce a dataset which we have captured using our autonomous driving platform AnnieWAY which we equipped with two horizontally aligned catadioptric cameras and a Velodyne HDL-64E laser scanner for precise ground truth depth measurements. As evidenced by our experiments, the proposed method clearly benefits from the unified view and significantly outperforms existing stereo matching techniques both quantitatively and qualitatively. Furthermore, our method is able to reduce noise and the obtained depth maps can be represented very compactly by a small number of image segments and plane parameters.

avg ps

pdf DOI [BibTex]



Can I recognize my body’s weight? The influence of shape and texture on the perception of self

Piryankova, I., Stefanucci, J., Romero, J., de la Rosa, S., Black, M., Mohler, B.

ACM Transactions on Applied Perception for the Symposium on Applied Perception, 11(3):13:1-13:18, September 2014 (article)

Abstract
The goal of this research was to investigate women’s sensitivity to changes in their perceived weight by altering the body mass index (BMI) of the participants’ personalized avatars displayed on a large-screen immersive display. We created the personalized avatars with a full-body 3D scanner that records both the participants’ body geometry and texture. We altered the weight of the personalized avatars to produce changes in BMI while keeping height, arm length and inseam fixed and exploited the correlation between body geometry and anthropometric measurements encapsulated in a statistical body shape model created from thousands of body scans. In a 2x2 psychophysical experiment, we investigated the relative importance of visual cues, namely shape (own shape vs. an average female body shape with equivalent height and BMI to the participant) and texture (own photo-realistic texture or checkerboard pattern texture) on the ability to accurately perceive own current body weight (by asking them ‘Is the avatar the same weight as you?’). Our results indicate that shape (where height and BMI are fixed) had little effect on the perception of body weight. Interestingly, the participants perceived their body weight veridically when they saw their own photo-realistic texture and significantly underestimated their body weight when the avatar had a checkerboard patterned texture. The range that the participants accepted as their own current weight was approximately a 0.83 to −6.05 BMI% change tolerance range around their perceived weight. Both the shape and the texture had an effect on the reported similarity of the body parts and the whole avatar to the participant’s body. This work has implications for new measures for patients with body image disorders, as well as researchers interested in creating personalized avatars for games, training applications or virtual reality.

ps

pdf DOI Project Page Project Page [BibTex]



Human Pose Estimation with Fields of Parts

Kiefel, M., Gehler, P.

In Computer Vision – ECCV 2014, LNCS 8693, pages: 331-346, Lecture Notes in Computer Science, (Editors: Fleet, David and Pajdla, Tomas and Schiele, Bernt and Tuytelaars, Tinne), Springer, 13th European Conference on Computer Vision, September 2014 (inproceedings)

Abstract
This paper proposes a new formulation of the human pose estimation problem. We present the Fields of Parts model, a binary Conditional Random Field model designed to detect human body parts of articulated people in single images. The Fields of Parts model is inspired by the idea of Pictorial Structures: it models local appearance and joint spatial configuration of the human body. However the underlying graph structure is entirely different. The idea is simple: we model the presence and absence of a body part at every possible position, orientation, and scale in an image with a binary random variable. This results in a vast number of random variables; however, we show that approximate inference in this model is efficient. Moreover we can encode the very same appearance and spatial structure as in Pictorial Structures models. This approach allows us to combine ideas from segmentation and pose estimation into a single model. The Fields of Parts model can use evidence from the background, include local color information, and it is connected more densely than a kinematic chain structure. On the challenging Leeds Sports Poses dataset we improve over the Pictorial Structures counterpart by 5.5% in terms of Average Precision of Keypoints (APK).

ei ps

website pdf DOI Project Page [BibTex]



Capturing Hand Motion with an RGB-D Sensor, Fusing a Generative Model with Salient Points

Tzionas, D., Srikantha, A., Aponte, P., Gall, J.

In German Conference on Pattern Recognition (GCPR), pages: 1-13, Lecture Notes in Computer Science, Springer, GCPR, September 2014 (inproceedings)

Abstract
Hand motion capture has been an active research topic in recent years, following the success of full-body pose tracking. Despite similarities, hand tracking proves to be more challenging, characterized by a higher dimensionality, severe occlusions and self-similarity between fingers. For this reason, most approaches rely on strong assumptions, like hands in isolation or expensive multi-camera systems, that limit the practical use. In this work, we propose a framework for hand tracking that can capture the motion of two interacting hands using only a single, inexpensive RGB-D camera. Our approach combines a generative model with collision detection and discriminatively learned salient points. We quantitatively evaluate our approach on 14 new sequences with challenging interactions.

ps

pdf Supplementary pdf Supplementary Material Project Page DOI Project Page [BibTex]



OpenDR: An Approximate Differentiable Renderer

Loper, M. M., Black, M. J.

In Computer Vision – ECCV 2014, 8695, pages: 154-169, Lecture Notes in Computer Science, (Editors: D. Fleet and T. Pajdla and B. Schiele and T. Tuytelaars ), Springer International Publishing, 13th European Conference on Computer Vision, September 2014 (inproceedings)

Abstract
Inverse graphics attempts to take sensor data and infer 3D geometry, illumination, materials, and motions such that a graphics renderer could realistically reproduce the observed scene. Renderers, however, are designed to solve the forward process of image synthesis. To go in the other direction, we propose an approximate differentiable renderer (DR) that explicitly models the relationship between changes in model parameters and image observations. We describe a publicly available OpenDR framework that makes it easy to express a forward graphics model and then automatically obtain derivatives with respect to the model parameters and to optimize over them. Built on a new autodifferentiation package and OpenGL, OpenDR provides a local optimization method that can be incorporated into probabilistic programming frameworks. We demonstrate the power and simplicity of programming with OpenDR by using it to solve the problem of estimating human body shape from Kinect depth and RGB data.
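The underlying idea of differentiating through a forward graphics model can be sketched with a toy example. OpenDR itself obtains derivatives by autodifferentiation; here central finite differences stand in as the simplest approximation, and the one-parameter 'renderer' is a hypothetical stand-in for a real forward model.

```python
import numpy as np

def render(cx, size=32):
    """Toy forward model: a 1D 'image' with a soft edge at position cx.
    A stand-in for a real renderer mapping parameters to pixels."""
    xs = np.arange(size)
    return 1.0 / (1.0 + np.exp(-(xs - cx)))

def loss_and_grad(cx, observed, eps=1e-4):
    """Squared-pixel loss against an observed image, with the derivative
    w.r.t. the model parameter approximated by central differences."""
    def loss(c):
        return np.sum((render(c) - observed) ** 2)
    return loss(cx), (loss(cx + eps) - loss(cx - eps)) / (2 * eps)

# Inverse graphics as local optimization: recover the edge position
# from the observed image by gradient descent on the pixel loss.
observed = render(20.0)
cx = 10.0
for _ in range(200):
    _, g = loss_and_grad(cx, observed)
    cx -= 0.5 * g
```

With analytic or automatic derivatives in place of finite differences, the same loop scales to many parameters, which is the regime a framework like OpenDR targets.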

ps

pdf Code Chumpy Supplementary video of talk DOI Project Page [BibTex]



Discovering Object Classes from Activities

Srikantha, A., Gall, J.

In European Conference on Computer Vision, 8694, pages: 415-430, Lecture Notes in Computer Science, (Editors: D. Fleet and T. Pajdla and B. Schiele and T. Tuytelaars ), Springer International Publishing, 13th European Conference on Computer Vision, September 2014 (inproceedings)

Abstract
In order to avoid an expensive manual labeling process or to learn object classes autonomously without human intervention, object discovery techniques have been proposed that extract visually similar objects from weakly labelled videos. However, the problem of discovering small or medium sized objects is largely unexplored. We observe that videos with activities involving human-object interactions can serve as weakly labelled data for such cases. Since neither object appearance nor motion is distinct enough to discover objects in these videos, we propose a framework that samples from a space of algorithms and their parameters to extract sequences of object proposals. Furthermore, we model similarity of objects based on appearance and functionality, which is derived from human and object motion. We show that functionality is an important cue for discovering objects from activities and demonstrate the generality of the model on three challenging RGB-D and RGB datasets.

ps

pdf anno poster DOI Project Page [BibTex]



Probabilistic Progress Bars

Kiefel, M., Schuler, C., Hennig, P.

In German Conference on Pattern Recognition (GCPR), 8753, pages: 331-341, Lecture Notes in Computer Science, (Editors: Jiang, X., Hornegger, J., and Koch, R.), Springer, GCPR, September 2014 (inproceedings)

Abstract
Predicting the time at which the integral over a stochastic process reaches a target level is a value of interest in many applications. Often, such computations have to be made at low cost, in real time. As an intuitive example that captures many features of this problem class, we choose progress bars, a ubiquitous element of computer user interfaces. These predictors are usually based on simple point estimators, with no error modelling. This leads to fluctuating behaviour confusing to the user. It also does not provide a distribution prediction (risk values), which are crucial for many other application areas. We construct and empirically evaluate a fast, constant cost algorithm using a Gauss-Markov process model which provides more information to the user.
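As a much simpler stand-in for the paper's Gauss-Markov process model, the sketch below assumes a constant mean rate and tracks its variance online with Welford's algorithm, so the predictor can at least report remaining time with an uncertainty band instead of a bare point estimate. The class and method names are illustrative, not from the paper.

```python
import math

class ProgressEstimator:
    """Toy remaining-time predictor: tracks the mean and variance of the
    per-unit processing time online, then reports the expected remaining
    time together with a 1-sigma uncertainty."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0

    def update(self, dt):
        # dt: seconds taken by the latest unit of work (Welford update).
        self.n += 1
        d = dt - self.mean
        self.mean += d / self.n
        self.m2 += d * (dt - self.mean)

    def remaining(self, units_left):
        """Return (expected seconds, 1-sigma error) for units_left units,
        assuming independent per-unit times so variances add."""
        var = self.m2 / (self.n - 1) if self.n > 1 else 0.0
        return units_left * self.mean, math.sqrt(units_left * var)

est = ProgressEstimator()
for dt in [1.2, 0.9, 1.1, 1.0]:
    est.update(dt)
mean_s, err_s = est.remaining(50)   # 50 work units left
```

A point-estimate progress bar reports only `mean_s`; the error term is what a distributional model adds, and a Gauss-Markov process refines it further by modelling correlated rate changes over time.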

ei ps pn

website+code pdf DOI [BibTex]
