2019


On the Transfer of Inductive Bias from Simulation to the Real World: a New Disentanglement Dataset

Gondal, M. W., Wüthrich, M., Miladinović, D., Locatello, F., Breidt, M., Volchkov, V., Akpo, J., Bachem, O., Schölkopf, B., Bauer, S.

Advances in Neural Information Processing Systems 32, pages: 15714-15725, (Editors: H. Wallach, H. Larochelle, A. Beygelzimer, F. d’Alché-Buc, E. Fox, R. Garnett), Curran Associates, Inc., 33rd Annual Conference on Neural Information Processing Systems, December 2019 (conference)

am ei sf

link (url) [BibTex]



Attacking Optical Flow

Ranjan, A., Janai, J., Geiger, A., Black, M. J.

In Proceedings International Conference on Computer Vision (ICCV), pages: 2404-2413, IEEE, 2019 IEEE/CVF International Conference on Computer Vision (ICCV), November 2019, ISSN: 2380-7504 (inproceedings)

Abstract
Deep neural nets achieve state-of-the-art performance on the problem of optical flow estimation. Since optical flow is used in several safety-critical applications like self-driving cars, it is important to gain insights into the robustness of those techniques. Recently, it has been shown that adversarial attacks easily fool deep neural networks to misclassify objects. The robustness of optical flow networks to adversarial attacks, however, has not been studied so far. In this paper, we extend adversarial patch attacks to optical flow networks and show that such attacks can compromise their performance. We show that corrupting a small patch of less than 1% of the image size can significantly affect optical flow estimates. Our attacks lead to noisy flow estimates that extend significantly beyond the region of the attack, in many cases even completely erasing the motion of objects in the scene. While networks using an encoder-decoder architecture are very sensitive to these attacks, we found that networks using a spatial pyramid architecture are less affected. We analyse the success and failure of attacking both architectures by visualizing their feature maps and comparing them to classical optical flow techniques which are robust to these attacks. We also demonstrate that such attacks are practical by placing a printed pattern into real scenes.
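The core of such an attack is easy to state: optimize the patch pixels by gradient ascent so that the network's flow estimate deviates as far as possible from its unattacked output. The following PyTorch sketch illustrates the idea under simplifying assumptions (a hypothetical pretrained flow_net callable, a fixed patch location, an unconstrained white-box setting); the paper's exact formulation differs.

```python
import torch

def patch_attack(flow_net, img1, img2, patch_size=50, steps=100, lr=1e-2):
    """Optimize a small adversarial patch that corrupts optical flow.

    flow_net : callable mapping two (B, 3, H, W) images to (B, 2, H, W) flow
               (a hypothetical stand-in for any pretrained flow network)
    """
    B, C, H, W = img1.shape
    patch = torch.rand(1, C, patch_size, patch_size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    y, x = H // 2, W // 2              # fixed patch location for simplicity

    with torch.no_grad():
        clean_flow = flow_net(img1, img2)   # unattacked estimate as reference

    for _ in range(steps):
        p = patch.clamp(0, 1)
        adv1, adv2 = img1.clone(), img2.clone()
        # paste the same patch into both frames (a static patch in the scene)
        adv1[:, :, y:y + patch_size, x:x + patch_size] = p
        adv2[:, :, y:y + patch_size, x:x + patch_size] = p
        # gradient ascent on the deviation from the clean flow estimate
        loss = -(flow_net(adv1, adv2) - clean_flow).pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach().clamp(0, 1)
```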

avg ps

Video Project Page Paper Supplementary Material link (url) DOI [BibTex]



Markerless Outdoor Human Motion Capture Using Multiple Autonomous Micro Aerial Vehicles

Saini, N., Price, E., Tallamraju, R., Enficiaud, R., Ludwig, R., Martinović, I., Ahmad, A., Black, M.

Proceedings 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pages: 823-832, IEEE, International Conference on Computer Vision (ICCV), October 2019 (conference)

Abstract
Capturing human motion in natural scenarios means moving motion capture out of the lab and into the wild. Typical approaches rely on fixed, calibrated cameras and reflective markers on the body, significantly limiting the motions that can be captured. To make motion capture truly unconstrained, we describe the first fully autonomous outdoor capture system based on flying vehicles. We use multiple micro aerial vehicles (MAVs), each equipped with a monocular RGB camera, an IMU, and a GPS receiver module. These detect the person, optimize their position, and localize themselves approximately. We then develop a markerless motion capture method that is suitable for this challenging scenario with a distant subject, viewed from above, with approximately calibrated and moving cameras. We combine multiple state-of-the-art 2D joint detectors with a 3D human body model and a powerful prior on human pose. We jointly optimize for 3D body pose and camera pose to robustly fit the 2D measurements. To our knowledge, this is the first successful demonstration of outdoor, full-body, markerless motion capture from autonomous flying vehicles.
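At the heart of the method is a joint, bundle-adjustment-style objective over body and cameras. The sketch below shows only the geometric skeleton of that idea, minimizing 2D reprojection error over 3D joints and per-camera translations; it omits the body model, the pose prior, camera rotations, and detector confidences that the full system uses.

```python
import torch

def fit_pose_and_cameras(joints2d, K, steps=200, lr=1e-2):
    """Jointly refine 3D joints and per-camera translations from 2D detections.

    joints2d : (num_cams, J, 2) detected 2D joints, one view per MAV
    K        : (3, 3) shared pinhole intrinsics
    """
    num_cams, J, _ = joints2d.shape
    # 3D joints initialized in front of the cameras; rotations omitted
    X = (torch.randn(J, 3) * 0.1 + torch.tensor([0.0, 0.0, 5.0])).requires_grad_()
    t = torch.zeros(num_cams, 3, requires_grad=True)    # camera translations
    opt = torch.optim.Adam([X, t], lr=lr)

    for _ in range(steps):
        loss = 0.0
        for c in range(num_cams):
            Xc = X + t[c]                               # world -> camera frame
            uv = (K @ Xc.T).T                           # pinhole projection
            uv = uv[:, :2] / uv[:, 2:3].clamp(min=1e-6)
            loss = loss + (uv - joints2d[c]).pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return X.detach(), t.detach()
```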

ps

Code Data Video Paper Manuscript DOI Project Page [BibTex]



Resolving 3D Human Pose Ambiguities with 3D Scene Constraints

Hassan, M., Choutas, V., Tzionas, D., Black, M. J.

In Proceedings International Conference on Computer Vision, pages: 2282-2292, IEEE, International Conference on Computer Vision, October 2019 (inproceedings)

Abstract
To understand and analyze human behavior, we need to capture humans moving in, and interacting with, the world. Most existing methods perform 3D human pose estimation without explicitly considering the scene. We observe however that the world constrains the body and vice-versa. To motivate this, we show that current 3D human pose estimation methods produce results that are not consistent with the 3D scene. Our key contribution is to exploit static 3D scene structure to better estimate human pose from monocular images. The method enforces Proximal Relationships with Object eXclusion and is called PROX. To test this, we collect a new dataset composed of 12 different 3D scenes and RGB sequences of 20 subjects moving in and interacting with the scenes. We represent human pose using the 3D human body model SMPL-X and extend SMPLify-X to estimate body pose using scene constraints. We make use of the 3D scene information by formulating two main constraints. The interpenetration constraint penalizes intersection between the body model and the surrounding 3D scene. The contact constraint encourages specific parts of the body to be in contact with scene surfaces if they are close enough in distance and orientation. For quantitative evaluation we capture a separate dataset with 180 RGB frames in which the ground-truth body pose is estimated using a motion-capture system. We show quantitatively that introducing scene constraints significantly reduces 3D joint error and vertex error. Our code and data are available for research at https://prox.is.tue.mpg.de.
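Both scene terms reduce to simple functions of a precomputed signed distance field (SDF) of the scene. A rough PyTorch sketch of the idea (not the released code): interpenetration penalizes vertices with negative SDF values, and contact pulls designated vertices toward nearby surfaces.

```python
import torch
import torch.nn.functional as F

def scene_losses(verts, sdf_grid, grid_min, grid_max, contact_idx, thresh=0.02):
    """PROX-style scene terms (a rough sketch, not the authors' code).

    verts        : (V, 3) body vertices in scene coordinates
    sdf_grid     : (1, 1, D, H, W) precomputed signed distance field of the scene
    grid_min/max : (3,) spatial extents of the SDF volume
    contact_idx  : indices of body vertices expected to touch surfaces
    Assumes the SDF's x/y/z axis ordering matches grid_sample's convention.
    """
    # map vertices into the [-1, 1] cube expected by grid_sample
    norm = 2 * (verts - grid_min) / (grid_max - grid_min) - 1
    sdf = F.grid_sample(sdf_grid, norm.view(1, 1, 1, -1, 3),
                        align_corners=True).view(-1)

    # interpenetration: penalize vertices inside scene geometry (sdf < 0)
    pen_loss = sdf.clamp(max=0).pow(2).sum()

    # contact: pull designated vertices onto surfaces they are already near
    near = sdf[contact_idx].abs()
    contact_loss = torch.where(near < thresh, near, torch.zeros_like(near)).sum()
    return pen_loss, contact_loss
```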

ps

pdf poster link (url) DOI [BibTex]



Learning to Reconstruct 3D Human Pose and Shape via Model-fitting in the Loop

Kolotouros, N., Pavlakos, G., Black, M. J., Daniilidis, K.

Proceedings International Conference on Computer Vision (ICCV), pages: 2252-2261, IEEE, 2019 IEEE/CVF International Conference on Computer Vision (ICCV), October 2019, ISSN: 2380-7504 (conference)

Abstract
Model-based human pose estimation is currently approached through two different paradigms. Optimization-based methods fit a parametric body model to 2D observations in an iterative manner, leading to accurate image-model alignments, but are often slow and sensitive to the initialization. In contrast, regression-based methods, that use a deep network to directly estimate the model parameters from pixels, tend to provide reasonable, but not pixel accurate, results while requiring huge amounts of supervision. In this work, instead of investigating which approach is better, our key insight is that the two paradigms can form a strong collaboration. A reasonable, directly regressed estimate from the network can initialize the iterative optimization making the fitting faster and more accurate. Similarly, a pixel accurate fit from iterative optimization can act as strong supervision for the network. This is the core of our proposed approach SPIN (SMPL oPtimization IN the loop). The deep network initializes an iterative optimization routine that fits the body model to 2D joints within the training loop, and the fitted estimate is subsequently used to supervise the network. Our approach is self-improving by nature, since better network estimates can lead the optimization to better solutions, while more accurate optimization fits provide better supervision for the network. We demonstrate the effectiveness of our approach in different settings, where 3D ground truth is scarce, or not available, and we consistently outperform the state-of-the-art model-based pose estimation approaches by significant margins.
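The training loop can be summarized in a few lines. The schematic below assumes a hypothetical smplify(init, joints2d) fitting routine and uses a plain parameter-space loss; the actual implementation supervises model parameters, 3D joints, and further terms.

```python
import torch

def spin_step(regressor, smplify, images, joints2d, optimizer):
    """One schematic SPIN-style training step.

    regressor : network mapping images to body-model parameters
    smplify   : in-the-loop fitting routine (hypothetical signature) that
                refines parameters against 2D joints and returns a better fit
    """
    pred_params = regressor(images)

    # the regressed estimate initializes the iterative optimization;
    # the fit itself is treated as fixed supervision (no gradients through it)
    with torch.no_grad():
        fitted_params = smplify(init=pred_params.detach(), joints2d=joints2d)

    # the pixel-accurate fit supervises the network, closing the loop
    loss = (pred_params - fitted_params).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```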

ps

pdf code project DOI [BibTex]



Three-D Safari: Learning to Estimate Zebra Pose, Shape, and Texture from Images "In the Wild"

Zuffi, S., Kanazawa, A., Berger-Wolf, T., Black, M. J.

In International Conference on Computer Vision, pages: 5358-5367, IEEE, International Conference on Computer Vision, October 2019 (inproceedings)

Abstract
We present the first method to perform automatic 3D pose, shape and texture capture of animals from images acquired in-the-wild. In particular, we focus on the problem of capturing 3D information about Grevy's zebras from a collection of images. The Grevy's zebra is one of the most endangered species in Africa, with only a few thousand individuals left. Capturing the shape and pose of these animals can provide biologists and conservationists with information about animal health and behavior. In contrast to research on human pose, shape and texture estimation, training data for endangered species is limited, the animals are in complex natural scenes with occlusion, they are naturally camouflaged, travel in herds, and look similar to each other. To overcome these challenges, we integrate the recent SMAL animal model into a network-based regression pipeline, which we train end-to-end on synthetically generated images with pose, shape, and background variation. Going beyond state-of-the-art methods for human shape and pose estimation, our method learns a shape space for zebras during training. Learning such a shape space from images using only a photometric loss is novel, and the approach can be used to learn shape in other settings with limited 3D supervision. Moreover, we couple 3D pose and shape prediction with the task of texture synthesis, obtaining a full texture map of the animal from a single image. We show that the predicted texture map allows a novel per-instance unsupervised optimization over the network features. This method, SMALST (SMAL with learned Shape and Texture) goes beyond previous work, which assumed manual keypoints and/or segmentation, to regress directly from pixels to 3D animal shape, pose and texture. Code and data are available at https://github.com/silviazuffi/smalst

ps

code pdf supmat iccv19 presentation DOI Project Page [BibTex]



Energy Conscious Over-actuated Multi-Agent Payload Transport Robot: Simulations and Preliminary Physical Validation

Tallamraju, R., Verma, P., Sripada, V., Agrawal, S., Karlapalem, K.

28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), pages: 1-7, IEEE, 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), October 2019 (conference)

ps

DOI [BibTex]



Efficient Learning on Point Clouds With Basis Point Sets

Prokudin, S., Lassner, C., Romero, J.

International Conference on Computer Vision, pages: 4332-4341, October 2019 (conference)

Abstract
With an increased availability of 3D scanning technology, point clouds are moving into the focus of computer vision as a rich representation of everyday scenes. However, they are hard to handle for machine learning algorithms due to their unordered structure. One common approach is to apply voxelization, which dramatically increases the amount of data stored and at the same time loses details through discretization. Recently, deep learning models with hand-tailored architectures were proposed to handle point clouds directly and achieve input permutation invariance. However, these architectures use an increased number of parameters and are computationally inefficient. In this work we propose basis point sets as a highly efficient and fully general way to process point clouds with machine learning algorithms. Basis point sets are a residual representation that can be computed efficiently and can be used with standard neural network architectures. Using the proposed representation as the input to a relatively simple network allows us to match the performance of PointNet on a shape classification task while using three orders of magnitude fewer floating point operations. In a second experiment, we show how the proposed representation can be used for obtaining high resolution meshes from noisy 3D scans. Here, our network achieves performance comparable to the state-of-the-art computationally intense multi-step frameworks, in one network pass that can be done in less than 1 ms.
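The encoding itself is a few lines of NumPy: fix a set of basis points once, then describe every cloud by the distance from each basis point to its nearest cloud point. A minimal sketch of the idea, not the authors' released code:

```python
import numpy as np

def bps_encode(cloud, n_basis=512, seed=0):
    """Encode a point cloud as distances to a fixed set of basis points.

    Any cloud, whatever its size or ordering, maps to the same fixed-length
    vector, which can then feed a standard fully-connected network.
    """
    rng = np.random.default_rng(seed)
    # fixed basis: points sampled uniformly in the unit ball (reused for
    # every cloud, hence the fixed seed)
    basis = rng.normal(size=(n_basis, 3))
    basis /= np.linalg.norm(basis, axis=1, keepdims=True)
    basis *= rng.uniform(size=(n_basis, 1)) ** (1 / 3)
    # residual representation: distance from each basis point to the cloud
    dists = np.linalg.norm(basis[:, None, :] - cloud[None, :, :], axis=-1)
    return dists.min(axis=1)

# clouds of different sizes yield features of identical length
feat_a = bps_encode(np.random.rand(2000, 3) - 0.5)
feat_b = bps_encode(np.random.rand(137, 3) - 0.5)
assert feat_a.shape == feat_b.shape == (512,)
```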

ps

code pdf [BibTex]



End-to-end Learning for Graph Decomposition

Song, J., Andres, B., Black, M., Hilliges, O., Tang, S.

In International Conference on Computer Vision, pages: 10093-10102, October 2019 (inproceedings)

Abstract
Deep neural networks provide powerful tools for pattern recognition, while classical graph algorithms are widely used to solve combinatorial problems. In computer vision, many tasks combine elements of both pattern recognition and graph reasoning. In this paper, we study how to connect deep networks with graph decomposition into an end-to-end trainable framework. More specifically, the minimum cost multicut problem is first converted to an unconstrained binary cubic formulation where cycle consistency constraints are incorporated into the objective function. The new optimization problem can be viewed as a Conditional Random Field (CRF) in which the random variables are associated with the binary edge labels. Cycle constraints are introduced into the CRF as high-order potentials. A standard Convolutional Neural Network (CNN) provides the front-end features for the fully differentiable CRF. The parameters of both parts are optimized in an end-to-end manner. The efficacy of the proposed learning algorithm is demonstrated via experiments on clustering MNIST images and on the challenging task of real-world multi-people pose estimation.
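For triangles, the cycle constraints take an especially simple form: no triangle may have exactly one of its three edges cut. Relaxed to probabilities, that yields a differentiable penalty, sketched here in PyTorch as a simplification of the paper's higher-order potentials:

```python
import torch

def triangle_cycle_penalty(y, triangles):
    """Soft cycle-consistency penalty on predicted edge-cut probabilities.

    y         : (E,) cut probabilities in [0, 1] (1 = edge is cut)
    triangles : (T, 3) long tensor listing, per triangle, the indices of its
                three edges in y
    A valid decomposition cannot cut exactly one edge of a cycle; relaxing
    the triangle inequalities y_ij <= y_ik + y_kj gives this penalty.
    """
    a, b, c = y[triangles[:, 0]], y[triangles[:, 1]], y[triangles[:, 2]]
    pen = torch.relu(a - b - c) + torch.relu(b - a - c) + torch.relu(c - a - b)
    return pen.sum()
```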

ps

PDF [BibTex]



AMASS: Archive of Motion Capture as Surface Shapes

Mahmood, N., Ghorbani, N., Troje, N. F., Pons-Moll, G., Black, M. J.

Proceedings International Conference on Computer Vision, pages: 5442-5451, IEEE, International Conference on Computer Vision (ICCV), October 2019 (conference)

Abstract
Large datasets are the cornerstone of recent advances in computer vision using deep learning. In contrast, existing human motion capture (mocap) datasets are small and the motions limited, hampering progress on learning models of human motion. While there are many different datasets available, they each use a different parameterization of the body, making it difficult to integrate them into a single meta dataset. To address this, we introduce AMASS, a large and varied database of human motion that unifies 15 different optical marker-based mocap datasets by representing them within a common framework and parameterization. We achieve this using a new method, MoSh++, that converts mocap data into realistic 3D human meshes represented by a rigged body model. Here we use SMPL [26], which is widely used and provides a standard skeletal representation as well as a fully rigged surface mesh. The method works for arbitrary marker-sets, while recovering soft-tissue dynamics and realistic hand motion. We evaluate MoSh++ and tune its hyper-parameters using a new dataset of 4D body scans that are jointly recorded with marker-based mocap. The consistent representation of AMASS makes it readily useful for animation, visualization, and generating training data for deep learning. Our dataset is significantly richer than previous human motion collections, having more than 40 hours of motion data, spanning over 300 subjects, more than 11000 motions, and is available for research at https://amass.is.tue.mpg.de/.

ps

code pdf suppl arxiv project website video poster AMASS_Poster DOI [BibTex]



The Influence of Visual Perspective on Body Size Estimation in Immersive Virtual Reality

Thaler, A., Pujades, S., Stefanucci, J. K., Creem-Regehr, S. H., Tesch, J., Black, M. J., Mohler, B. J.

In ACM Symposium on Applied Perception, pages: 1-12, ACM, SAP '19: ACM Symposium on Applied Perception 2019, September 2019 (inproceedings)

Abstract
The creation of realistic self-avatars that users identify with is important for many virtual reality applications. However, current approaches for creating biometrically plausible avatars that represent a particular individual require expertise and are time-consuming. We investigated the visual perception of an avatar’s body dimensions by asking males and females to estimate their own body weight and shape on a virtual body using a virtual reality avatar creation tool. In a method of adjustment task, the virtual body was presented in an HTC Vive head-mounted display either co-located with (first-person perspective) or facing (third-person perspective) the participants. Participants adjusted the body weight and dimensions of various body parts to match their own body shape and size. Both males and females underestimated their weight by 10-20% in the virtual body, but the estimates of the other body dimensions were relatively accurate and within a range of ±6%. There was a stronger influence of visual perspective on the estimates for males, but this effect was dependent on the amount of control over the shape of the virtual body, indicating that the results might be caused by where in the body the weight changes expressed themselves. These results suggest that this avatar creation tool could be used to allow participants to make a relatively accurate self-avatar in terms of adjusting body part dimensions, but not weight, and that the influence of visual perspective and amount of control needed over the body shape are likely gender-specific.

ps

pdf DOI [BibTex]



Learning to Train with Synthetic Humans

Hoffmann, D. T., Tzionas, D., Black, M. J., Tang, S.

In German Conference on Pattern Recognition (GCPR), pages: 609-623, Springer International Publishing, September 2019 (inproceedings)

Abstract
Neural networks need big annotated datasets for training. However, manual annotation can be too expensive or even unfeasible for certain tasks, like multi-person 2D pose estimation with severe occlusions. A remedy for this is synthetic data with perfect ground truth. Here we explore two variations of synthetic data for this challenging problem; a dataset with purely synthetic humans, as well as a real dataset augmented with synthetic humans. We then study which approach better generalizes to real data, as well as the influence of virtual humans in the training loss. We observe that not all synthetic samples are equally informative for training, while the informative samples are different for each training stage. To exploit this observation, we employ an adversarial student-teacher framework; the teacher improves the student by providing the hardest samples for its current state as a challenge. Experiments show that this student-teacher framework outperforms all our baselines.
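A crude but illustrative version of the teacher is loss-based hard example mining: score a pool of synthetic candidates with the current student and train on the hardest ones. The sketch below is a simplified stand-in for the paper's learned adversarial teacher, and assumes the loss function supports per-sample reduction.

```python
import torch

def hardest_batch(student, images, targets, loss_fn, k=16):
    """Pick the synthetic samples that are currently hardest for the student.

    Ranks a pool of candidate samples by the student's per-sample loss and
    returns the top-k as the next training batch. Assumes loss_fn accepts
    reduction='none' (as torch.nn.functional losses do).
    """
    student.eval()
    with torch.no_grad():
        losses = loss_fn(student(images), targets, reduction='none')
        losses = losses.view(losses.shape[0], -1).mean(dim=1)  # one score per sample
    student.train()
    hard = torch.topk(losses, k).indices
    return images[hard], targets[hard]
```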

ps

pdf suppl poster link (url) DOI Project Page [BibTex]



How do people learn how to plan?

Jain, Y. R., Gupta, S., Rakesh, V., Dayan, P., Callaway, F., Lieder, F.

Conference on Cognitive Computational Neuroscience, September 2019 (conference)

Abstract
How does the brain learn how to plan? We reverse-engineer people's underlying learning mechanisms by combining rational process models of cognitive plasticity with recently developed empirical methods that allow us to trace the temporal evolution of people's planning strategies. We find that our Learned Value of Computation model (LVOC) accurately captures people's average learning curve. However, there were also substantial individual differences in metacognitive learning that are best understood in terms of multiple different learning mechanisms, including strategy selection learning. Furthermore, we observed that LVOC could not fully capture people's ability to adaptively decide when to stop planning. We successfully extended the LVOC model to address these discrepancies. Our models broadly capture people's ability to improve their decision mechanisms and represent a significant step towards reverse-engineering how the brain learns increasingly effective cognitive strategies through its interaction with the environment.

re

How do people learn to plan? [BibTex]


Testing Computational Models of Goal Pursuit

Mohnert, F., Tosic, M., Lieder, F.

CCN2019, September 2019 (conference)

Abstract
Goals are essential to human cognition and behavior. But how do we pursue them? To address this question, we model how capacity limits on planning and attention shape the computational mechanisms of human goal pursuit. We test the predictions of a simple model based on previous theories in a behavioral experiment. The results show that to fully capture how people pursue their goals it is critical to account for people’s limited attention in addition to their limited planning. Our findings elucidate the cognitive constraints that shape human goal pursuit and point to an improved model of human goal pursuit that can reliably predict which goals a person will achieve and which goals they will struggle to pursue effectively.

re

link (url) DOI Project Page [BibTex]


Motion Planning for Multi-Mobile-Manipulator Payload Transport Systems

Tallamraju, R., Salunkhe, D., Rajappa, S., Ahmad, A., Karlapalem, K., Shah, S. V.

In 15th IEEE International Conference on Automation Science and Engineering, pages: 1469-1474, IEEE, 2019 IEEE 15th International Conference on Automation Science and Engineering (CASE), August 2019, ISSN: 2161-8089 (inproceedings)

ps

DOI [BibTex]



Soft Continuous Surface for Micromanipulation driven by Light-controlled Hydrogels

Choi, E., Jeong, H., Qiu, T., Fischer, P., Palagi, S.

4th IEEE International Conference on Manipulation, Automation and Robotics at Small Scales (MARSS), July 2019 (conference)

Abstract
Remotely controlled, automated actuation and manipulation at the microscale is essential for a number of micro-manufacturing, biology, and lab-on-a-chip applications. To transport and manipulate micro-objects, arrays of remotely controlled micro-actuators are required, which, in turn, typically require complex and expensive solid-state chips. Here, we show that a continuous surface can function as a highly parallel, many-degree of freedom, wirelessly-controlled microactuator with seamless deformation. The soft continuous surface is based on a hydrogel that undergoes a volume change in response to applied light. The fabrication of the hydrogels and the characterization of their optical and thermomechanical behaviors are reported. The temperature-dependent localized deformation of the hydrogel is also investigated by numerical simulations. Static and dynamic deformations are obtained in the soft material by projecting light fields at high spatial resolution onto the surface. By controlling such deformations in open loop and especially closed loop, automated photoactuation is achieved. The surface deformations are then exploited to examine how inert microbeads can be manipulated autonomously on the surface. We believe that the proposed approach suggests ways to implement universal 2D micromanipulation schemes that can be useful for automation in microfabrication and lab-on-a-chip applications.

pf

[BibTex]



Measuring How People Learn How to Plan

Jain, Y. R., Callaway, F., Lieder, F.

Proceedings 41st Annual Meeting of the Cognitive Science Society, pages: 1956-1962, CogSci2019, 41st Annual Meeting of the Cognitive Science Society, July 2019 (conference)

Abstract
The human mind has an unparalleled ability to acquire complex cognitive skills, discover new strategies, and refine its ways of thinking and decision-making; these phenomena are collectively known as cognitive plasticity. One important manifestation of cognitive plasticity is learning to make better–more far-sighted–decisions via planning. A serious obstacle to studying how people learn how to plan is that cognitive plasticity is even more difficult to observe than cognitive strategies are. To address this problem, we develop a computational microscope for measuring cognitive plasticity and validate it on simulated and empirical data. Our approach employs a process tracing paradigm recording signatures of human planning and how they change over time. We then invert a generative model of the recorded changes to infer the underlying cognitive plasticity. Our computational microscope measures cognitive plasticity significantly more accurately than simpler approaches, and it correctly detected the effect of an external manipulation known to promote cognitive plasticity. We illustrate how computational microscopes can be used to gain new insights into the time course of metacognitive learning and to test theories of cognitive development and hypotheses about the nature of cognitive plasticity. Future work will leverage our computational microscope to reverse-engineer the learning mechanisms enabling people to acquire complex cognitive skills such as planning and problem solving.

re

link (url) Project Page [BibTex]



Soft Phantom for the Training of Renal Calculi Diagnostics and Lithotripsy

Li, D., Suarez-Ibarrola, R., Choi, E., Jeong, M., Gratzke, C., Miernik, A., Fischer, P., Qiu, T.

41st Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), July 2019 (conference)

Abstract
Organ models are important for medical training and surgical planning. With the fast development of additive fabrication technologies, including 3D printing, the fabrication of 3D organ phantoms with precise anatomical features becomes possible. Here, we develop the first high-resolution kidney phantom based on soft material assembly, by combining 3D printing and polymer molding techniques. The phantom exhibits both the detailed anatomy of a human kidney and the elasticity of soft tissues. The phantom assembly can be separated into two parts on the coronal plane, thus large renal calculi are readily placed at any desired location of the calyx. With our sealing method, the assembled phantom withstands a hydraulic pressure that is four times the normal intrarenal pressure, thus it allows the simulation of medical procedures under realistic pressure conditions. The medical diagnostics of the renal calculi is performed by multiple imaging modalities, including X-ray, ultrasound imaging and endoscopy. The endoscopic lithotripsy is also successfully performed on the phantom. The use of a multifunctional soft phantom assembly thus shows great promise for the simulation of minimally invasive medical procedures under realistic conditions.

pf

[BibTex]



A Magnetic Actuation System for the Active Microrheology in Soft Biomaterials

Jeong, M., Choi, E., Li, D., Palagi, S., Fischer, P., Qiu, T.

4th IEEE International Conference on Manipulation, Automation and Robotics at Small Scales (MARSS), July 2019 (conference)

Abstract
Microrheology is a key technique to characterize soft materials at small scales. The microprobe is wirelessly actuated and therefore typically only low forces or torques can be applied, which limits the range of the applied strain. Here, we report a new magnetic actuation system for microrheology consisting of an array of rotating permanent magnets, which achieves a rotating magnetic field with a spatially homogeneous high field strength of ~100 mT in a working volume of ~20×20×20 mm3. Compared to a traditional electromagnetic coil system, the permanent magnet assembly is portable and does not require cooling, and it exerts a large magnetic torque on the microprobe that is an order of magnitude higher than previous setups. Experimental results demonstrate that the measurement range of the soft gels’ elasticity covers at least five orders of magnitude. With the large actuation torque, it is also possible to study the fracture mechanics of soft biomaterials at small scales.

pf

[BibTex]



Extending Rationality

Pothos, E. M., Busemeyer, J. R., Pleskac, T., Yearsley, J. M., Tenenbaum, J. B., Goodman, N. D., Tessler, M. H., Griffiths, T. L., Lieder, F., Hertwig, R., Pachur, T., Leuker, C., Shiffrin, R. M.

Proceedings of the 41st Annual Conference of the Cognitive Science Society, pages: 39-40, CogSci 2019, July 2019 (conference)

re

Proceedings of the 41st Annual Conference of the Cognitive Science Society [BibTex]



How should we incentivize learning? An optimal feedback mechanism for educational games and online courses

Xu, L., Wirzberger, M., Lieder, F.

41st Annual Meeting of the Cognitive Science Society, July 2019 (conference)

Abstract
Online courses offer much-needed opportunities for lifelong self-directed learning, but people rarely follow through on their noble intentions to complete them. To increase student retention, educational software often uses game elements to motivate students to engage in and persist in learning activities. However, gamification only works when it is done properly, and there is currently no principled method that educational software could use to achieve this. We develop a principled feedback mechanism for encouraging good study choices and persistence in self-directed learning environments. Rather than giving performance feedback, our method rewards the learner's efforts with optimal brain points that convey the value of practice. To derive these optimal brain points, we applied the theory of optimal gamification to a mathematical model of skill acquisition. In contrast to hand-designed incentive structures, optimal brain points are constructed in such a way that the incentive system cannot be gamed. Evaluating our method in a behavioral experiment, we find that optimal brain points significantly increased the proportion of participants who, instead of exploiting an inefficient skill they already knew, attempted to learn a difficult but more efficient skill, persisted through failure, and succeeded in mastering the new skill. Our method provides a principled approach to designing incentive structures and feedback mechanisms for educational games and online courses. We are optimistic that optimal brain points will prove useful for increasing student retention and helping people overcome the motivational obstacles that stand in the way of self-directed lifelong learning.
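Optimal gamification builds on potential-based reward shaping: the pseudo-reward for a transition is the discounted change in the value of the learner's state, which telescopes to zero along any trajectory and therefore cannot be gamed. A toy sketch with an assumed tabular value function V:

```python
def brain_points(V, s, s_next, gamma=0.99):
    """Potential-based pseudo-reward for one practice transition.

    Shaping rewards of the form gamma * V(s') - V(s) telescope to zero along
    any trajectory, so the incentive system cannot be gamed; V is assumed
    here to be a dict mapping skill states to their optimal values.
    """
    return gamma * V[s_next] - V[s]

# toy skill-acquisition chain: value grows with mastery level
V = {0: 0.0, 1: 2.0, 2: 5.0, 3: 10.0}
print(brain_points(V, s=1, s_next=2))   # progress earns points (2.95)
print(brain_points(V, s=2, s_next=2))   # no progress, ~zero points (-0.05)
```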

re

link (url) Project Page [BibTex]


What’s in the Adaptive Toolbox and How Do People Choose From It? Rational Models of Strategy Selection in Risky Choice

Mohnert, F., Pachur, T., Lieder, F.

41st Annual Meeting of the Cognitive Science Society, July 2019 (conference)

Abstract
Although process data indicates that people often rely on various (often heuristic) strategies to choose between risky options, our models of heuristics cannot predict people's choices very accurately. To address this challenge, it has been proposed that people adaptively choose from a toolbox of simple strategies. But which strategies are contained in this toolbox? And how do people decide when to use which decision strategy? Here, we develop a model according to which each person selects decision strategies rationally from their personal toolbox; our model allows one to infer which strategies are contained in the cognitive toolbox of an individual decision-maker and specifies when she will use which strategy. Using cross-validation on an empirical data set, we find that this rational model of strategy selection from a personal adaptive toolbox predicts people's choices better than any single strategy (even when it is allowed to vary across participants) and better than previously proposed toolbox models. Our model comparisons show that both inferring the toolbox and rational strategy selection are critical for accurately predicting people's risky choices. Furthermore, our model-based data analysis reveals considerable individual differences in the set of strategies people are equipped with and how they choose among them; these individual differences could partly explain why some people make better choices than others. These findings represent an important step towards a complete formalization of the notion that people select their cognitive strategies from a personal adaptive toolbox.

re

link (url) [BibTex]


Measuring How People Learn How to Plan

Jain, Y. R., Callaway, F., Lieder, F.

pages: 357-361, RLDM 2019, July 2019 (conference)

Abstract
The human mind has an unparalleled ability to acquire complex cognitive skills, discover new strategies, and refine its ways of thinking and decision-making; these phenomena are collectively known as cognitive plasticity. One important manifestation of cognitive plasticity is learning to make better – more far-sighted – decisions via planning. A serious obstacle to studying how people learn how to plan is that cognitive plasticity is even more difficult to observe than cognitive strategies are. To address this problem, we develop a computational microscope for measuring cognitive plasticity and validate it on simulated and empirical data. Our approach employs a process tracing paradigm recording signatures of human planning and how they change over time. We then invert a generative model of the recorded changes to infer the underlying cognitive plasticity. Our computational microscope measures cognitive plasticity significantly more accurately than simpler approaches, and it correctly detected the effect of an external manipulation known to promote cognitive plasticity. We illustrate how computational microscopes can be used to gain new insights into the time course of metacognitive learning and to test theories of cognitive development and hypotheses about the nature of cognitive plasticity. Future work will leverage our computational microscope to reverse-engineer the learning mechanisms enabling people to acquire complex cognitive skills such as planning and problem solving.

re

link (url) [BibTex]



A Cognitive Tutor for Helping People Overcome Present Bias

Lieder, F., Callaway, F., Jain, Y. R., Krueger, P. M., Das, P., Gul, S., Griffiths, T. L.

RLDM 2019, July 2019, Falk Lieder and Frederick Callaway contributed equally to this publication. (conference)

Abstract
People's reliance on suboptimal heuristics gives rise to a plethora of cognitive biases in decision-making including the present bias, which denotes people's tendency to be overly swayed by an action's immediate costs/benefits rather than its more important long-term consequences. One approach to helping people overcome such biases is to teach them better decision strategies. But which strategies should we teach them? And how can we teach them effectively? Here, we leverage an automatic method for discovering rational heuristics and insights into how people acquire cognitive skills to develop an intelligent tutor that teaches people how to make better decisions. As a proof of concept, we derive the optimal planning strategy for a simple model of situations where people fall prey to the present bias. Our cognitive tutor teaches people this optimal planning strategy by giving them metacognitive feedback on how they plan in a 3-step sequential decision-making task. Our tutor's feedback is designed to maximally accelerate people's metacognitive reinforcement learning towards the optimal planning strategy. A series of four experiments confirmed that training with the cognitive tutor significantly reduced present bias and improved people's decision-making competency: Experiment 1 demonstrated that the cognitive tutor's feedback can help participants discover far-sighted planning strategies. Experiment 2 found that this training effect transfers to more complex environments. Experiment 3 found that these transfer effects are retained for at least 24 hours after the training. Finally, Experiment 4 found that practicing with the cognitive tutor can have additional benefits over being told the strategy in words. The results suggest that promoting metacognitive reinforcement learning with optimal feedback is a promising approach to improving the human mind.

re

DOI [BibTex]



Competitive Collaboration: Joint Unsupervised Learning of Depth, Camera Motion, Optical Flow and Motion Segmentation

Ranjan, A., Jampani, V., Balles, L., Kim, K., Sun, D., Wulff, J., Black, M. J.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pages: 12240-12249, IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2019, June 2019 (inproceedings)

Abstract
We address the unsupervised learning of several interconnected problems in low-level vision: single view depth prediction, camera motion estimation, optical flow, and segmentation of a video into the static scene and moving regions. Our key insight is that these four fundamental vision problems are coupled through geometric constraints. Consequently, learning to solve them together simplifies the problem because the solutions can reinforce each other. We go beyond previous work by exploiting geometry more explicitly and segmenting the scene into static and moving regions. To that end, we introduce Competitive Collaboration, a framework that facilitates the coordinated training of multiple specialized neural networks to solve complex problems. Competitive Collaboration works much like expectation-maximization, but with neural networks that act as both competitors to explain pixels that correspond to static or moving regions, and as collaborators through a moderator that assigns pixels to be either static or independently moving. Our novel method integrates all these problems in a common framework and simultaneously reasons about the segmentation of the scene into moving objects and the static background, the camera motion, depth of the static scene structure, and the optical flow of moving objects. Our model is trained without any supervision and achieves state-of-the-art performance among joint unsupervised methods on all sub-problems.

ps

Paper link (url) Project Page Project Page [BibTex]



Local Temporal Bilinear Pooling for Fine-grained Action Parsing

Zhang, Y., Tang, S., Muandet, K., Jarvers, C., Neumann, H.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2019, June 2019 (inproceedings)

Abstract
Fine-grained temporal action parsing is important in many applications, such as daily activity understanding, human motion analysis, surgical robotics and others requiring subtle and precise operations over a long-term period. In this paper we propose a novel bilinear pooling operation, which is used in intermediate layers of a temporal convolutional encoder-decoder net. In contrast to other work, our proposed bilinear pooling is learnable and hence can capture more complex local statistics than the conventional counterpart. In addition, we introduce exact lower-dimension representations of our bilinear forms, so that the dimensionality is reduced with neither information loss nor extra computation. We perform intensive experiments to quantitatively analyze our model and show superior performance to other state-of-the-art work on various datasets.
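A common way to make bilinear pooling both learnable and cheap, roughly in the spirit of this approach, is a low-rank factorization of the outer product; the sketch below applies it over a local temporal window. The paper's exact formulation differs.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LowRankBilinearPool(nn.Module):
    """Learnable bilinear pooling over a local temporal window (a sketch).

    Instead of the O(C^2) outer product x x^T, project the features twice
    into a rank-R space and multiply elementwise; this keeps second-order
    statistics cheap while remaining fully learnable.
    """
    def __init__(self, channels, rank=64, window=5):   # window should be odd
        super().__init__()
        self.u = nn.Conv1d(channels, rank, kernel_size=1)
        self.v = nn.Conv1d(channels, rank, kernel_size=1)
        self.pool = nn.AvgPool1d(window, stride=1, padding=window // 2)

    def forward(self, x):                  # x: (B, C, T) frame features
        z = self.u(x) * self.v(x)          # low-rank bilinear interaction
        z = self.pool(z)                   # aggregate over the local window
        # signed sqrt + l2 norm, standard post-processing for bilinear features
        z = torch.sign(z) * torch.sqrt(z.abs() + 1e-12)
        return F.normalize(z, dim=1)
```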

ei ps

Code video demo pdf link (url) [BibTex]



Learning to Regress 3D Face Shape and Expression from an Image without 3D Supervision

Sanyal, S., Bolkart, T., Feng, H., Black, M. J.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pages: 7763-7772, IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2019, June 2019 (inproceedings)

Abstract
The estimation of 3D face shape from a single image must be robust to variations in lighting, head pose, expression, facial hair, makeup, and occlusions. Robustness requires a large training set of in-the-wild images, which by construction, lack ground truth 3D shape. To train a network without any 2D-to-3D supervision, we present RingNet, which learns to compute 3D face shape from a single image. Our key observation is that an individual’s face shape is constant across images, regardless of expression, pose, lighting, etc. RingNet leverages multiple images of a person and automatically detected 2D face features. It uses a novel loss that encourages the face shape to be similar when the identity is the same and different for different people. We achieve invariance to expression by representing the face using the FLAME model. Once trained, our method takes a single image and outputs the parameters of FLAME, which can be readily animated. Additionally we create a new database of faces “not quite in-the-wild” (NoW) with 3D head scans and high-resolution images of the subjects in a wide variety of conditions. We evaluate publicly available methods and find that RingNet is more accurate than methods that use 3D supervision. The dataset, model, and results are available for research purposes.
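The ring's effect can be sketched as a triplet-style hinge on the regressed shape codes: same-identity codes should be closer to each other than to a different person's code. A simplified stand-in for the paper's ring loss:

```python
import torch
import torch.nn.functional as F

def ring_shape_loss(shapes_same, shape_other, margin=0.5):
    """Shape-consistency loss in the spirit of RingNet (a sketch).

    shapes_same : (R, D) shape codes regressed from R images of one person
    shape_other : (D,)  shape code from an image of a different person
    Same-identity codes are pulled together while staying at least `margin`
    closer to each other than to the other identity's code.
    """
    anchor = shapes_same[0]
    loss = 0.0
    for positive in shapes_same[1:]:
        d_pos = (anchor - positive).pow(2).sum()
        d_neg = (anchor - shape_other).pow(2).sum()
        loss = loss + F.relu(d_pos - d_neg + margin)   # triplet-style hinge
    return loss / (shapes_same.shape[0] - 1)
```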

ps

code pdf preprint link (url) Project Page [BibTex]



Learning Joint Reconstruction of Hands and Manipulated Objects

Hasson, Y., Varol, G., Tzionas, D., Kalevatykh, I., Black, M. J., Laptev, I., Schmid, C.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pages: 11807-11816, IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2019, June 2019 (inproceedings)

Abstract
Estimating hand-object manipulations is essential for interpreting and imitating human actions. Previous work has made significant progress towards reconstruction of hand poses and object shapes in isolation. Yet, reconstructing hands and objects during manipulation is a more challenging task due to significant occlusions of both the hand and object. While presenting challenges, manipulations may also simplify the problem since the physics of contact restricts the space of valid hand-object configurations. For example, during manipulation, the hand and object should be in contact but not interpenetrate. In this work, we regularize the joint reconstruction of hands and objects with manipulation constraints. We present an end-to-end learnable model that exploits a novel contact loss that favors physically plausible hand-object constellations. Our approach improves grasp quality metrics over baselines, using RGB images as input. To train and evaluate the model, we also propose a new large-scale synthetic dataset, ObMan, with hand-object manipulations. We demonstrate the transferability of ObMan-trained models to real data.

ps

pdf suppl poster link (url) DOI Project Page Project Page [BibTex]



Introducing the Decision Advisor: A simple online tool that helps people overcome cognitive biases and experience less regret in real-life decisions

Iwama, G., Greenberg, S., Moore, D., Lieder, F.

40th Annual Meeting of the Society for Judgment and Decision Making, June 2019 (conference)

Abstract
Cognitive biases shape many decisions people come to regret. To help people overcome these biases, ClearerThinking.org developed a free online tool, called the Decision Advisor (https://programs.clearerthinking.org/decisionmaker.html). The Decision Advisor assists people in big real-life decisions by prompting them to generate more alternatives, guiding them to evaluate their alternatives according to principles of decision analysis, and educating them about pertinent biases while they are making their decision. In a within-subjects experiment, 99 participants reported significantly fewer biases and less regret for a decision supported by the Decision Advisor than for a previous unassisted decision.

re

DOI [BibTex]



Expressive Body Capture: 3D Hands, Face, and Body from a Single Image

Pavlakos, G., Choutas, V., Ghorbani, N., Bolkart, T., Osman, A. A. A., Tzionas, D., Black, M. J.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pages: 10975-10985, IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2019, June 2019 (inproceedings)

Abstract
To facilitate the analysis of human actions, interactions and emotions, we compute a 3D model of human body pose, hand pose, and facial expression from a single monocular image. To achieve this, we use thousands of 3D scans to train a new, unified, 3D model of the human body, SMPL-X, that extends SMPL with fully articulated hands and an expressive face. Learning to regress the parameters of SMPL-X directly from images is challenging without paired images and 3D ground truth. Consequently, we follow the approach of SMPLify, which estimates 2D features and then optimizes model parameters to fit the features. We improve on SMPLify in several significant ways: (1) we detect 2D features corresponding to the face, hands, and feet and fit the full SMPL-X model to these; (2) we train a new neural network pose prior using a large MoCap dataset; (3) we define a new interpenetration penalty that is both fast and accurate; (4) we automatically detect gender and the appropriate body models (male, female, or neutral); (5) our PyTorch implementation achieves a speedup of more than 8x over Chumpy. We use the new method, SMPLify-X, to fit SMPL-X to both controlled images and images in the wild. We evaluate 3D accuracy on a new curated dataset comprising 100 images with pseudo ground-truth. This is a step towards automatic expressive human capture from monocular RGB data. The models, code, and data are available for research purposes at https://smpl-x.is.tue.mpg.de.

ps

video code pdf suppl poster link (url) DOI Project Page [BibTex]



The Goal Characteristics (GC) questionnaire: A comprehensive measure for goals’ content, attainability, interestingness, and usefulness

Iwama, G., Wirzberger, M., Lieder, F.

40th Annual Meeting of the Society for Judgment and Decision Making, June 2019 (conference)

Abstract
Many studies have investigated how goal characteristics affect goal achievement. However, most of them considered only a small number of characteristics, and the psychometric properties of their measures remain unclear. To overcome these limitations, we developed and validated a comprehensive questionnaire of goal characteristics with four subscales measuring the goal's content, attainability, interestingness, and usefulness, respectively. 590 participants completed the questionnaire online. A confirmatory factor analysis supported the four subscales and their structure. The GC questionnaire (https://osf.io/qfhup) can be easily applied to investigate goal setting, pursuit and adjustment in a wide range of contexts.

re

DOI [BibTex]


Capture, Learning, and Synthesis of 3D Speaking Styles

Cudeiro, D., Bolkart, T., Laidlaw, C., Ranjan, A., Black, M. J.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pages: 10101-10111, IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2019, June 2019 (inproceedings)

Abstract
Audio-driven 3D facial animation has been widely explored, but achieving realistic, human-like performance is still unsolved. This is due to the lack of available 3D datasets, models, and standard evaluation metrics. To address this, we introduce a unique 4D face dataset with about 29 minutes of 4D scans captured at 60 fps and synchronized audio from 12 speakers. We then train a neural network on our dataset that factors identity from facial motion. The learned model, VOCA (Voice Operated Character Animation) takes any speech signal as input—even speech in languages other than English—and realistically animates a wide range of adult faces. Conditioning on subject labels during training allows the model to learn a variety of realistic speaking styles. VOCA also provides animator controls to alter speaking style, identity-dependent facial shape, and pose (i.e. head, jaw, and eyeball rotations) during animation. To our knowledge, VOCA is the only realistic 3D facial animation model that is readily applicable to unseen subjects without retargeting. This makes VOCA suitable for tasks like in-game video, virtual reality avatars, or any scenario in which the speaker, speech, or language is not known in advance. We make the dataset and model available for research purposes at http://voca.is.tue.mpg.de.

ps

code Project Page video paper [BibTex]



A Magnetically-Actuated Untethered Jellyfish-Inspired Soft Milliswimmer

(Best Paper Award)

Ren, Z., Wang, T., Hu, W.

RSS 2019: Robotics: Science and Systems Conference, June 2019 (conference)

pi

[BibTex]



Accurate Vision-based Manipulation through Contact Reasoning

Kloss, A., Bauza, M., Wu, J., Tenenbaum, J. B., Rodriguez, A., Bohg, J.

In International Conference on Robotics and Automation, May 2019 (inproceedings) Accepted

Abstract
Planning contact interactions is one of the core challenges of many robotic tasks. Optimizing contact locations while taking dynamics into account is computationally costly and in only partially observed environments, executing contact-based tasks often suffers from low accuracy. We present an approach that addresses these two challenges for the problem of vision-based manipulation. First, we propose to disentangle contact from motion optimization. Thereby, we improve planning efficiency by focusing computation on promising contact locations. Second, we use a hybrid approach for perception and state estimation that combines neural networks with a physically meaningful state representation. In simulation and real-world experiments on the task of planar pushing, we show that our method is more efficient and achieves a higher manipulation accuracy than previous vision-based approaches.

am

Video link (url) [BibTex]



Learning Latent Space Dynamics for Tactile Servoing

Sutanto, G., Ratliff, N., Sundaralingam, B., Chebotar, Y., Su, Z., Handa, A., Fox, D.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) 2019, IEEE, International Conference on Robotics and Automation, May 2019 (inproceedings) Accepted

am

pdf video [BibTex]



Leveraging Contact Forces for Learning to Grasp

Merzic, H., Bogdanovic, M., Kappler, D., Righetti, L., Bohg, J.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) 2019, IEEE, International Conference on Robotics and Automation, May 2019 (inproceedings)

Abstract
Grasping objects under uncertainty remains an open problem in robotics research. This uncertainty is often due to noisy or partial observations of the object pose or shape. To enable a robot to react appropriately to unforeseen effects, it is crucial that it continuously takes sensor feedback into account. While visual feedback is important for inferring a grasp pose and reaching for an object, contact feedback offers valuable information during manipulation and grasp acquisition. In this paper, we use model-free deep reinforcement learning to synthesize control policies that exploit contact sensing to generate robust grasping under uncertainty. We demonstrate our approach on a multi-fingered hand that exhibits more complex finger coordination than the commonly used two-fingered grippers. We conduct extensive experiments in order to assess the performance of the learned policies, with and without contact sensing. While it is possible to learn grasping policies without contact sensing, our results suggest that contact feedback allows for a significant improvement of grasping robustness under object pose uncertainty and for objects with a complex shape.

am mg

video arXiv [BibTex]



Distributed, Collaborative Virtual Reality Application for Product Development with Simple Avatar Calibration Method

Dixken, M., Diers, D., Wingert, B., Hatzipanayioti, A., Mohler, B. J., Riedel, O., Bues, M.

IEEE Conference on Virtual Reality and 3D User Interfaces, (VR), pages: 1299-1300, IEEE, March 2019 (conference)

ps

DOI [BibTex]



Elastic modulus affects adhesive strength of gecko-inspired synthetics in variable temperature and humidity

Mitchell, C. T., Drotlef, D., Dayan, C. B., Sitti, M., Stark, A. Y.

In Integrative and Comparative Biology, pages: E372-E372, Oxford University Press, March 2019 (inproceedings)

pi

[BibTex]



Wide Range-Sensitive, Bending-Insensitive Pressure Detection and Application to Wearable Healthcare Device

Kim, S., Amjadi, M., Lee, T., Jeong, Y., Kwon, D., Kim, M. S., Kim, K., Kim, T., Oh, Y. S., Park, I.

In 2019 20th International Conference on Solid-State Sensors, Actuators and Microsystems & Eurosensors XXXIII (TRANSDUCERS & EUROSENSORS XXXIII), 2019 (inproceedings)

pi

[BibTex]



Gecko-inspired composite microfibers for reversible adhesion on smooth and rough surfaces

Drotlef, D., Dayan, C., Sitti, M.

In Integrative and Comparative Biology, pages: E58-E58, Oxford University Press, 2019 (inproceedings)

pi

[BibTex]



Remediating Cognitive Decline with Cognitive Tutors

Das, P., Callaway, F., Griffiths, T. L., Lieder, F.

RLDM 2019, 2019 (conference)

Abstract
As people age, their cognitive abilities tend to deteriorate, including their ability to make complex plans. To remediate this cognitive decline, many commercial brain training programs target basic cognitive capacities, such as working memory. We have recently developed an alternative approach: intelligent tutors that teach people cognitive strategies for making the best possible use of their limited cognitive resources. Here, we apply this approach to improve older adults' planning skills. In a process-tracing experiment we found that the decline in planning performance may be partly because older adults use less effective planning strategies. We also found that, with practice, both older and younger adults learned more effective planning strategies from experience. But despite these gains there was still room for improvement, especially for older people. In a second experiment, we let older and younger adults train their planning skills with an intelligent cognitive tutor that teaches optimal planning strategies via metacognitive feedback. We found that practicing planning with this intelligent tutor allowed older adults to catch up to their younger counterparts. These findings suggest that intelligent tutors that teach clever cognitive strategies can help aging decision-makers stay sharp.

re

DOI [BibTex]


Resisting Adversarial Attacks using Gaussian Mixture Variational Autoencoders

Ghosh, P., Losalka, A., Black, M. J.

In Proc. AAAI, 2019 (inproceedings)

Abstract
Susceptibility of deep neural networks to adversarial attacks poses a major theoretical and practical challenge. All efforts to harden classifiers against such attacks have so far seen limited success. Two distinct categories of samples against which deep neural networks are vulnerable, "adversarial samples" and "fooling samples", have been tackled separately due to the difficulty posed when considered together. In this work, we show how one can defend against both under a unified framework. Our model has the form of a variational autoencoder with a Gaussian mixture prior on the latent variable, such that each mixture component corresponds to a single class. We show how selective classification can be performed using this model, thereby causing the adversarial objective to entail a conflict. The proposed method leads to the rejection of adversarial samples instead of misclassification, while maintaining high precision and recall on test data. It also inherently provides a way of learning a selective classifier in a semi-supervised scenario, which can similarly resist adversarial attacks. We further show how one can reclassify the detected adversarial samples by iterative optimization.
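As a rough illustration of the selective-classification step, consider a sketch under our own naming (not the paper's interface): the encoder maps an input to a latent code, the code is compared against the per-class component means, and inputs far from every component are rejected rather than misclassified.

```python
# Hedged sketch of selective classification with a class-conditional Gaussian
# mixture prior: classify by the nearest component mean in latent space and
# reject inputs that no component explains well. `encoder`, `mu`, and
# `threshold` are placeholder names, not the paper's code.
import torch

def selective_classify(x, encoder, mu, threshold):
    """mu: (K, d) tensor holding one latent mean per class."""
    z = encoder(x)                        # (d,) latent code (e.g. posterior mean)
    d2 = ((mu - z) ** 2).sum(dim=1)       # squared distance to each class mean
    k = torch.argmin(d2)
    if d2[k] > threshold:                 # far from every component:
        return None                       # reject instead of misclassifying
    return int(k)
```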

ps

link (url) Project Page [BibTex]

2018


Customized Multi-Person Tracker

Ma, L., Tang, S., Black, M. J., Van Gool, L.

In Computer Vision – ACCV 2018, Springer International Publishing, Asian Conference on Computer Vision, December 2018 (inproceedings)

ps

PDF Project Page [BibTex]


Gait learning for soft microrobots controlled by light fields

Rohr, A. V., Trimpe, S., Marco, A., Fischer, P., Palagi, S.

In International Conference on Intelligent Robots and Systems (IROS) 2018, pages: 6199-6206, October 2018 (inproceedings)

Abstract
Soft microrobots based on photoresponsive materials and controlled by light fields can generate a variety of different gaits. This inherent flexibility can be exploited to maximize their locomotion performance in a given environment and used to adapt them to changing environments. However, because of the lack of accurate locomotion models, and given the intrinsic variability among microrobots, analytical control design is not possible. Common data-driven approaches, on the other hand, require running prohibitive numbers of experiments and lead to very sample-specific results. Here we propose a probabilistic learning approach for light-controlled soft microrobots based on Bayesian Optimization (BO) and Gaussian Processes (GPs). The proposed approach results in a learning scheme that is highly data-efficient, enabling gait optimization with a limited experimental budget, and robust against differences among microrobot samples. These features are obtained by designing the learning scheme through the comparison of different GP priors and BO settings on a semi-synthetic data set. The developed learning scheme is validated in microrobot experiments, resulting in a 115% improvement in a microrobot’s locomotion performance with an experimental budget of only 20 tests. These encouraging results lead the way toward self-adaptive microrobotic systems based on light-controlled soft microrobots and probabilistic learning control.
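For intuition, a data-efficient BO loop of this kind fits in a few lines. The sketch below uses scikit-learn's GP regressor with an expected-improvement acquisition over random candidates; `evaluate_gait` (one locomotion trial returning a performance score), the candidate search, and the budget of 20 trials are our assumptions, loosely mirroring the setting rather than reproducing the authors' code.

```python
# Illustrative Bayesian-optimization loop for gait parameters (a sketch, not
# the paper's implementation). `evaluate_gait` stands in for one microrobot
# locomotion trial that returns a scalar performance score.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(mu, sigma, best):
    z = (mu - best) / np.maximum(sigma, 1e-9)
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

def optimize_gait(evaluate_gait, bounds, budget=20, seed=0):
    """bounds: (dim, 2) array of [low, high] per gait parameter."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(3, dim))  # initial trials
    y = np.array([evaluate_gait(x) for x in X])
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(budget - len(X)):
        gp.fit(X, y)
        cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(1000, dim))
        mu, sigma = gp.predict(cand, return_std=True)
        x_next = cand[np.argmax(expected_improvement(mu, sigma, y.max()))]
        X = np.vstack([X, x_next])
        y = np.append(y, evaluate_gait(x_next))
    return X[np.argmax(y)], y.max()
```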

ics pf

arXiv IEEE Xplore DOI Project Page [BibTex]


On the Integration of Optical Flow and Action Recognition

Sevilla-Lara, L., Liao, Y., Güney, F., Jampani, V., Geiger, A., Black, M. J.

In German Conference on Pattern Recognition (GCPR), LNCS 11269, pages: 281-297, Springer, Cham, October 2018 (inproceedings)

Abstract
Most of the top-performing action recognition methods use optical flow as a "black box" input. Here we take a deeper look at the combination of flow and action recognition, and investigate why optical flow is helpful, what makes a flow method good for action recognition, and how we can make it better. In particular, we investigate the impact of different flow algorithms and input transformations to better understand how these affect a state-of-the-art action recognition method. Furthermore, we fine-tune two neural-network flow methods end-to-end on the most widely used action recognition dataset (UCF101). Based on these experiments, we make the following five observations: 1) optical flow is useful for action recognition because it is invariant to appearance, 2) optical flow methods are optimized to minimize end-point error (EPE), but the EPE of current methods is not well correlated with action recognition performance, 3) for the flow methods tested, accuracy at boundaries and at small displacements is most correlated with action recognition performance, 4) training optical flow to minimize classification error instead of minimizing EPE improves recognition performance, and 5) optical flow learned for the task of action recognition differs from traditional optical flow, especially inside the human body and at the boundary of the body. These observations may encourage optical flow researchers to look beyond EPE as a goal and guide action recognition researchers to seek better motion cues, leading to a tighter integration of the optical flow and action recognition communities.
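The fourth observation amounts to backpropagating a classification loss through the flow network. A hedged sketch of one such training step, with `flow_net` and `action_net` as hypothetical placeholder modules rather than the paper's models:

```python
# Sketch of fine-tuning a flow network through an action classifier, so the
# flow is optimized for recognition rather than EPE. `flow_net` and
# `action_net` are placeholder modules (our names, not the paper's).
import torch

def finetune_step(flow_net, action_net, frames, labels, optimizer):
    # frames: (B, 2, 3, H, W) pairs of consecutive RGB frames
    flow = flow_net(frames[:, 0], frames[:, 1])     # (B, 2, H, W) predicted flow
    logits = action_net(flow)                       # classify from flow alone
    loss = torch.nn.functional.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()                                 # gradients reach the flow net
    optimizer.step()
    return loss.item()
```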

avg ps

arXiv DOI [BibTex]


Temporal Interpolation as an Unsupervised Pretraining Task for Optical Flow Estimation

Wulff, J., Black, M. J.

In German Conference on Pattern Recognition (GCPR), LNCS 11269, pages: 567-582, Springer, Cham, October 2018 (inproceedings)

Abstract
The difficulty of annotating training data is a major obstacle to using CNNs for low-level tasks in video. Synthetic data often does not generalize to real videos, while unsupervised methods require heuristic losses. Proxy tasks can overcome these issues: a network is first trained for a task for which annotation is easier or which can be trained unsupervised. The trained network is then fine-tuned for the original task using small amounts of ground truth data. Here, we investigate frame interpolation as a proxy task for optical flow. Using real movies, we train a CNN unsupervised for temporal interpolation. Such a network implicitly estimates motion, but cannot handle untextured regions. By fine-tuning on small amounts of ground truth flow, the network can learn to fill in homogeneous regions and compute full optical flow fields. Using this unsupervised pre-training, our network outperforms similar architectures that were trained supervised using synthetic optical flow.
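The two-stage recipe can be pictured as one trunk with two heads: an interpolation head for unsupervised pretraining and a flow head for fine-tuning. The network below is a deliberately tiny stand-in for illustration, not the paper's architecture:

```python
# Sketch of the proxy-task recipe (assumed toy architecture): pretrain a CNN
# to predict the middle frame of a triplet, then reuse its trunk as
# initialization for supervised optical-flow fine-tuning.
import torch
import torch.nn as nn

class InterpNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(              # shared feature extractor
            nn.Conv2d(6, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        self.interp_head = nn.Conv2d(64, 3, 3, padding=1)  # RGB middle frame
        self.flow_head = nn.Conv2d(64, 2, 3, padding=1)    # (u, v) flow field

    def forward(self, f0, f2, task="interp"):
        h = self.trunk(torch.cat([f0, f2], dim=1))
        return self.interp_head(h) if task == "interp" else self.flow_head(h)

# Stage 1 (unsupervised): minimize ||net(f0, f2, "interp") - f1||_1 on raw video.
# Stage 2 (supervised): fine-tune with an EPE loss on a small ground-truth flow set.
```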

ps

pdf arXiv DOI Project Page [BibTex]


Human Motion Parsing by Hierarchical Dynamic Clustering

Zhang, Y., Tang, S., Sun, H., Neumann, H.

In Proceedings of the British Machine Vision Conference (BMVC), pages: 269, BMVA Press, 29th British Machine Vision Conference, September 2018 (inproceedings)

Abstract
Parsing continuous human motion into meaningful segments plays an essential role in various applications. In this work, we propose a hierarchical dynamic clustering framework to derive action clusters from a sequence of local features in an unsupervised bottom-up manner. We systematically investigate the modules in this framework and particularly propose diverse temporal pooling schemes, in order to realize accurate temporal action localization. We demonstrate our method on two motion parsing tasks: temporal action segmentation and abnormal behavior detection. The experimental results indicate that the proposed framework is significantly more effective than the other related state-of-the-art methods on several datasets.

ps

pdf Project Page [BibTex]


Generating 3D Faces using Convolutional Mesh Autoencoders

Ranjan, A., Bolkart, T., Sanyal, S., Black, M. J.

In European Conference on Computer Vision (ECCV), Lecture Notes in Computer Science, vol 11207, pages: 725-741, Springer, Cham, September 2018 (inproceedings)

Abstract
Learned 3D representations of human faces are useful for computer vision problems such as 3D face tracking and reconstruction from images, as well as graphics applications such as character generation and animation. Traditional models learn a latent representation of a face using linear subspaces or higher-order tensor generalizations. Due to this linearity, they cannot capture extreme deformations and non-linear expressions. To address this, we introduce a versatile model that learns a non-linear representation of a face using spectral convolutions on a mesh surface. We introduce mesh sampling operations that enable a hierarchical mesh representation that captures non-linear variations in shape and expression at multiple scales within the model. In a variational setting, our model samples diverse realistic 3D faces from a multivariate Gaussian distribution. Our training data consists of 20,466 meshes of extreme expressions captured over 12 different subjects. Despite limited training data, our trained model outperforms state-of-the-art face models with 50% lower reconstruction error, while using 75% fewer parameters. We also show that replacing the expression space of an existing state-of-the-art face model with our autoencoder achieves a lower reconstruction error. Our data, model and code are available at http://coma.is.tue.mpg.de/.
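The basic building block the abstract refers to, a spectral convolution on the mesh graph, can be sketched as a Chebyshev filter. The layer below is a generic illustration: the mesh up/down-sampling operators are omitted and all shapes are simplified assumptions.

```python
# Generic Chebyshev spectral convolution on mesh vertices (a sketch of the
# building block, not the paper's code). L is the rescaled mesh graph
# Laplacian with eigenvalues mapped into [-1, 1].
import torch
import torch.nn as nn

class ChebConv(nn.Module):
    def __init__(self, in_ch, out_ch, K=6):
        super().__init__()
        self.K = K
        self.weight = nn.Parameter(torch.randn(K, in_ch, out_ch) * 0.1)

    def forward(self, x, L):
        # x: (V, in_ch) vertex features, L: (V, V) rescaled Laplacian
        Tx = [x, L @ x]                          # Chebyshev terms T_0 x, T_1 x
        for _ in range(2, self.K):
            Tx.append(2 * L @ Tx[-1] - Tx[-2])   # T_k = 2 L T_{k-1} - T_{k-2}
        return sum(t @ w for t, w in zip(Tx, self.weight))
```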

ps

Code (tensorflow) Code (pytorch) paper supplementary DOI Project Page [BibTex]


Part-Aligned Bilinear Representations for Person Re-identification

Suh, Y., Wang, J., Tang, S., Mei, T., Lee, K. M.

In European Conference on Computer Vision (ECCV), 11218, pages: 418-437, Springer, Cham, September 2018 (inproceedings)

Abstract
Comparing the appearance of corresponding body parts is essential for person re-identification. However, body parts are frequently misaligned between detected boxes, due to detection errors and pose/viewpoint changes. In this paper, we propose a network that learns a part-aligned representation for person re-identification. Our model consists of a two-stream network, which generates appearance and body part feature maps respectively, and a bilinear-pooling layer that fuses the two feature maps into an image descriptor. We show that this results in a compact descriptor, where the inner product between two image descriptors is equivalent to an aggregation of the local appearance similarities of the corresponding body parts, and thereby significantly reduces the part misalignment problem. Our approach is advantageous over other pose-guided representations because it learns part descriptors optimal for person re-identification. Training the network does not require any part annotation on the person re-identification dataset. Instead, we simply initialize the part sub-stream using a pre-trained sub-network of an existing pose estimation network and train the whole network to minimize the re-identification loss. We validate the effectiveness of our approach by demonstrating its superiority over the state-of-the-art methods on the standard benchmark datasets, including Market-1501, CUHK03, CUHK01, and DukeMTMC, and the standard video dataset MARS.
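The key property, that the inner product of two pooled descriptors aggregates part-aligned local appearance similarities, is easy to verify in a toy sketch (names are ours, not the paper's code):

```python
# Sketch of part-aligned bilinear pooling: fuse an appearance map A and a
# part map P into one descriptor. All names and shapes are our simplified
# assumptions.
import torch

def bilinear_descriptor(A, P):
    # A: (H*W, c_a) appearance features, P: (H*W, c_p) part features
    f = (P.T @ A).flatten()          # (c_p * c_a,) bilinear-pooled descriptor
    return f / (f.norm() + 1e-8)

# Up to normalization, <f(A1, P1), f(A2, P2)> equals the sum over location
# pairs (i, j) of <P1_i, P2_j> * <A1_i, A2_j>: appearance similarity weighted
# by part alignment, which is the aggregation the abstract describes.
```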

ps

pdf supplementary DOI Project Page [BibTex]


Learning Human Optical Flow

Ranjan, A., Romero, J., Black, M. J.

In 29th British Machine Vision Conference, September 2018 (inproceedings)

Abstract
The optical flow of humans is well known to be useful for the analysis of human action. Given this, we devise an optical flow algorithm specifically for human motion and show that it is superior to generic flow methods. Designing a method by hand is impractical, so we develop a new training database of image sequences with ground truth optical flow. For this we use a 3D model of the human body and motion capture data to synthesize realistic flow fields. We then train a convolutional neural network to estimate human flow fields from pairs of images. Since many applications in human motion analysis depend on speed, and we anticipate mobile applications, we base our method on SpyNet with several modifications. We demonstrate that our trained network is more accurate than a wide range of top methods on held-out test data and that it generalizes well to real image sequences. When combined with a person detector/tracker, the approach provides a full solution to the problem of 2D human flow estimation. Both the code and the dataset are available for research.

ps

video code pdf link (url) Project Page [BibTex]