

2018


Counterfactual Mean Embedding: A Kernel Method for Nonparametric Causal Inference

Muandet, K., Kanagawa, M., Saengkyongam, S., Marukatat, S.

Workshop on Machine Learning for Causal Inference, Counterfactual Prediction, and Autonomous Action (CausalML) at ICML, July 2018 (conference)

[BibTex]


Infinite Factorial Finite State Machine for Blind Multiuser Channel Estimation

Ruiz, F. J. R., Valera, I., Svensson, L., Perez-Cruz, F.

IEEE Transactions on Cognitive Communications and Networking, 4(2):177-191, June 2018 (article)

DOI Project Page [BibTex]


Frame-Recurrent Video Super-Resolution

Sajjadi, M. S. M., Vemulapalli, R., Brown, M.

IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018 (conference)

ArXiv link (url) [BibTex]


Learning Face Deblurring Fast and Wide

Jin, M., Hirsch, M., Favaro, P.

The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pages: 745-753, June 2018 (conference)

link (url) [BibTex]


Wasserstein Auto-Encoders

Tolstikhin, I., Bousquet, O., Gelly, S., Schölkopf, B.

6th International Conference on Learning Representations (ICLR), May 2018 (conference)

link (url) Project Page [BibTex]


Fidelity-Weighted Learning

Dehghani, M., Mehrjou, A., Gouws, S., Kamps, J., Schölkopf, B.

6th International Conference on Learning Representations (ICLR), May 2018 (conference)

link (url) Project Page [BibTex]


Inducing Probabilistic Context-Free Grammars for the Sequencing of Movement Primitives

Lioutikov, R., Maeda, G., Veiga, F., Kersting, K., Peters, J.

IEEE International Conference on Robotics and Automation (ICRA), pages: 1-8, IEEE, May 2018 (conference)

DOI Project Page [BibTex]


Sobolev GAN

Mroueh, Y., Li*, C., Sercu*, T., Raj*, A., Cheng, Y.

6th International Conference on Learning Representations (ICLR), May 2018, *equal contribution (conference)

link (url) Project Page [BibTex]


Assisting Movement Training and Execution With Visual and Haptic Feedback

Ewerton, M., Rother, D., Weimar, J., Kollegger, G., Wiemeyer, J., Peters, J., Maeda, G.

Frontiers in Neurorobotics, 12, May 2018 (article)

DOI Project Page [BibTex]


Temporal Difference Models: Model-Free Deep RL for Model-Based Control

Pong*, V., Gu*, S., Dalal, M., Levine, S.

6th International Conference on Learning Representations (ICLR), May 2018, *equal contribution (conference)

link (url) Project Page [BibTex]


Robust Dense Mapping for Large-Scale Dynamic Environments

Barsan, I. A., Liu, P., Pollefeys, M., Geiger, A.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), IEEE, May 2018 (inproceedings)

Abstract
We present a stereo-based dense mapping algorithm for large-scale dynamic urban environments. In contrast to other existing methods, we simultaneously reconstruct the static background, the moving objects, and the potentially moving but currently stationary objects separately, which is desirable for high-level mobile robotic tasks such as path planning in crowded environments. We use both instance-aware semantic segmentation and sparse scene flow to classify objects as either background, moving, or potentially moving, thereby ensuring that the system is able to model objects with the potential to transition from static to dynamic, such as parked cars. Given camera poses estimated from visual odometry, both the background and the (potentially) moving objects are reconstructed separately by fusing the depth maps computed from the stereo input. In addition to visual odometry, sparse scene flow is also used to estimate the 3D motions of the detected moving objects, in order to reconstruct them accurately. A map pruning technique is further developed to improve reconstruction accuracy and reduce memory consumption, leading to increased scalability. We evaluate our system thoroughly on the well-known KITTI dataset. Our system is capable of running on a PC at approximately 2.5 Hz, with the primary bottleneck being the instance-aware semantic segmentation, which is a limitation we hope to address in future work.
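
The classification rule sketched in the abstract is what lets the system treat a parked car differently from a driving one. Below is a minimal Python illustration of that rule under stated assumptions: the class list, the threshold, and the median-based decision are hypothetical placeholders, not the authors' implementation.

```python
import numpy as np

# Assumption: semantic classes that *can* move; anything else is static background.
DYNAMIC_CLASSES = {"car", "pedestrian", "cyclist"}

def classify_instance(semantic_class: str,
                      flow_residuals: np.ndarray,
                      threshold: float = 0.5) -> str:
    """Label a segmented instance as 'background', 'moving', or 'potentially_moving'.

    flow_residuals: per-feature 3D scene-flow magnitudes (meters) remaining
    after subtracting the motion explained by visual-odometry ego-motion.
    """
    if semantic_class not in DYNAMIC_CLASSES:
        return "background"           # fused into the static map
    if flow_residuals.size > 0 and np.median(flow_residuals) > threshold:
        return "moving"               # reconstructed with its own motion estimate
    return "potentially_moving"       # e.g. a parked car: static now, kept separate

# A parked car: its semantic class is dynamic, but its residual flow is near zero.
print(classify_instance("car", np.array([0.02, 0.05, 0.03])))  # potentially_moving
```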

pdf Video Project Page [BibTex]


Wasserstein Auto-Encoders: Latent Dimensionality and Random Encoders

Rubenstein, P. K., Schölkopf, B., Tolstikhin, I.

Workshop at the 6th International Conference on Learning Representations (ICLR), May 2018 (conference)

link (url) Project Page [BibTex]


Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning

Eysenbach, B., Gu, S., Ibarz, J., Levine, S.

6th International Conference on Learning Representations (ICLR), May 2018 (conference)

Videos link (url) Project Page [BibTex]


Learning 3D Shape Completion under Weak Supervision

Stutz, D., Geiger, A.

arXiv, May 2018 (article)

Abstract
We address the problem of 3D shape completion from sparse and noisy point clouds, a fundamental problem in computer vision and robotics. Recent approaches are either data-driven or learning-based: Data-driven approaches rely on a shape model whose parameters are optimized to fit the observations; Learning-based approaches, in contrast, avoid the expensive optimization step by learning to directly predict complete shapes from incomplete observations in a fully-supervised setting. However, full supervision is often not available in practice. In this work, we propose a weakly-supervised learning-based approach to 3D shape completion which neither requires slow optimization nor direct supervision. While we also learn a shape prior on synthetic data, we amortize, i.e., learn, maximum likelihood fitting using deep neural networks resulting in efficient shape completion without sacrificing accuracy. On synthetic benchmarks based on ShapeNet and ModelNet as well as on real robotics data from KITTI and Kinect, we demonstrate that the proposed amortized maximum likelihood approach is able to compete with fully supervised baselines and outperforms data-driven approaches, while requiring less supervision and being significantly faster.
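
To make the amortized-maximum-likelihood idea concrete, here is a toy numpy sketch (my illustration, not the paper's model): the latent "shape" is a single sphere radius, the data-driven route fits it by iterative optimization for every new point cloud, and the amortized route replaces that inner loop with a direct map from observations to the estimate, which in the paper is a deep network.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_ml(points, steps=100, lr=1e-3):
    """Data-driven route: iterative maximum-likelihood fitting per observation (slow)."""
    r = 1.0
    for _ in range(steps):
        d = np.linalg.norm(points, axis=1)
        r += lr * np.sum(d - r)   # gradient of the Gaussian log-likelihood (up to 1/sigma^2)
    return r

def amortized_estimate(points):
    """Amortized route: a one-pass map from observations to the ML estimate.
    Here it is the closed-form solution; in the paper a network is trained to
    output (approximately) the same thing without any inner optimization loop."""
    return np.linalg.norm(points, axis=1).mean()

# Sparse, noisy observations of a sphere with radius 0.8.
true_r = 0.8
dirs = rng.normal(size=(50, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
pts = dirs * (true_r + 0.05 * rng.normal(size=(50, 1)))

print(fit_ml(pts), amortized_estimate(pts))  # both approach 0.8
```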

PDF Project Page [BibTex]


Tempered Adversarial Networks

Sajjadi, M. S. M., Parascandolo, G., Mehrjou, A., Schölkopf, B.

Workshop at the 6th International Conference on Learning Representations (ICLR), May 2018 (conference)

arXiv [BibTex]


Learning Coupled Forward-Inverse Models with Combined Prediction Errors

Koert, D., Maeda, G., Neumann, G., Peters, J.

IEEE International Conference on Robotics and Automation (ICRA), pages: 2433-2439, IEEE, May 2018 (conference)

DOI Project Page [BibTex]


Learning Disentangled Representations with Wasserstein Auto-Encoders

Rubenstein, P. K., Schölkopf, B., Tolstikhin, I.

Workshop at the 6th International Conference on Learning Representations (ICLR), May 2018 (conference)

link (url) Project Page [BibTex]


Automatic Estimation of Modulation Transfer Functions

Bauer, M., Volchkov, V., Hirsch, M., Schölkopf, B.

IEEE International Conference on Computational Photography (ICCP), May 2018 (conference)

DOI [BibTex]


Causal Discovery Using Proxy Variables

Rojas-Carulla, M., Baroni, M., Lopez-Paz, D.

Workshop at the 6th International Conference on Learning Representations (ICLR), May 2018 (conference)

link (url) [BibTex]


Sample and Feedback Efficient Hierarchical Reinforcement Learning from Human Preferences

Pinsler, R., Akrour, R., Osa, T., Peters, J., Neumann, G.

IEEE International Conference on Robotics and Automation (ICRA), pages: 596-601, IEEE, May 2018 (conference)

DOI Project Page [BibTex]


Group invariance principles for causal generative models

Besserve, M., Shajarisales, N., Schölkopf, B., Janzing, D.

Proceedings of the 21st International Conference on Artificial Intelligence and Statistics (AISTATS), 84, pages: 557-565, Proceedings of Machine Learning Research, (Editors: Amos Storkey and Fernando Perez-Cruz), PMLR, April 2018 (conference)

link (url) [BibTex]


Boosting Variational Inference: an Optimization Perspective

Locatello, F., Khanna, R., Ghosh, J., Rätsch, G.

Proceedings of the 21st International Conference on Artificial Intelligence and Statistics (AISTATS), 84, pages: 464-472, Proceedings of Machine Learning Research, (Editors: Amos Storkey and Fernando Perez-Cruz), PMLR, April 2018 (conference)

link (url) Project Page [BibTex]


Mixture of Attractors: A Novel Movement Primitive Representation for Learning Motor Skills From Demonstrations

Manschitz, S., Gienger, M., Kober, J., Peters, J.

IEEE Robotics and Automation Letters, 3(2):926-933, April 2018 (article)

DOI Project Page [BibTex]


Probabilistic movement primitives under unknown system dynamics

Paraschos, A., Rueckert, E., Peters, J., Neumann, G.

Advanced Robotics, 32(6):297-310, April 2018 (article)

DOI Project Page [BibTex]


Cause-Effect Inference by Comparing Regression Errors

Blöbaum, P., Janzing, D., Washio, T., Shimizu, S., Schölkopf, B.

Proceedings of the 21st International Conference on Artificial Intelligence and Statistics (AISTATS), 84, pages: 900-909, Proceedings of Machine Learning Research, (Editors: Amos Storkey and Fernando Perez-Cruz), PMLR, April 2018 (conference)

link (url) [BibTex]


Will People Like Your Image? Learning the Aesthetic Space

Schwarz, K., Wieschollek, P., Lensch, H. P. A.

2018 IEEE Winter Conference on Applications of Computer Vision (WACV), pages: 2048-2057, March 2018 (conference)

DOI [BibTex]


An Algorithmic Perspective on Imitation Learning

Osa, T., Pajarinen, J., Neumann, G., Bagnell, J., Abbeel, P., Peters, J.

Foundations and Trends in Robotics, 7(1-2):1-179, March 2018 (article)

DOI Project Page [BibTex]


Using Probabilistic Movement Primitives in Robotics

Paraschos, A., Daniel, C., Peters, J., Neumann, G.

Autonomous Robots, 42(3):529-551, March 2018 (article)

DOI Project Page [BibTex]


Representation of sensory uncertainty in macaque visual cortex

Goris, R., Henaff, O., Meding, K.

Computational and Systems Neuroscience (COSYNE) 2018, March 2018 (poster)

[BibTex]


A kernel-based approach to learning contact distributions for robot manipulation tasks

Kroemer, O., Leischnig, S., Luettgen, S., Peters, J.

Autonomous Robots, 42(3):581-600, March 2018 (article)

DOI Project Page [BibTex]


Leveraging the Crowd to Detect and Reduce the Spread of Fake News and Misinformation

Kim, J., Tabibian, B., Oh, A., Schölkopf, B., Gomez Rodriguez, M.

Proceedings of the 11th ACM International Conference on Web Search and Data Mining (WSDM), pages: 324-332, (Editors: Yi Chang, Chengxiang Zhai, Yan Liu, and Yoelle Maarek), ACM, February 2018 (conference)

DOI Project Page [BibTex]


Approximate Value Iteration Based on Numerical Quadrature

Vinogradska, J., Bischoff, B., Peters, J.

IEEE Robotics and Automation Letters, 3(2):1330-1337, January 2018 (article)

DOI Project Page [BibTex]


Biomimetic Tactile Sensors and Signal Processing with Spike Trains: A Review

Yi, Z., Zhang, Y., Peters, J.

Sensors and Actuators A: Physical, 269, pages: 41-52, January 2018 (article)

DOI Project Page [BibTex]


RayNet: Learning Volumetric 3D Reconstruction with Ray Potentials

Paschalidou, D., Ulusoy, A. O., Schmitt, C., Gool, L., Geiger, A.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Computer Society, 2018 (inproceedings)

Abstract
In this paper, we consider the problem of reconstructing a dense 3D model using images captured from different views. Recent methods based on convolutional neural networks (CNN) allow learning the entire task from data. However, they do not incorporate the physics of image formation such as perspective geometry and occlusion. Instead, classical approaches based on Markov Random Fields (MRF) with ray-potentials explicitly model these physical processes, but they cannot cope with large surface appearance variations across different viewpoints. In this paper, we propose RayNet, which combines the strengths of both frameworks. RayNet integrates a CNN that learns view-invariant feature representations with an MRF that explicitly encodes the physics of perspective projection and occlusion. We train RayNet end-to-end using empirical risk minimization. We thoroughly evaluate our approach on challenging real-world datasets and demonstrate its benefits over a piece-wise trained baseline, hand-crafted models as well as other learning-based approaches.
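
A ray potential couples every voxel a camera ray traverses. The sketch below shows the core computation in a generic textbook form (not RayNet's exact potential, and the voxel depths are hypothetical): with independent per-voxel occupancy probabilities ordered from the camera outward, the probability that the ray terminates at voxel i is o_i * prod_{j<i}(1 - o_j), which yields a distribution over depth that can be compared against an observed depth map during training.

```python
import numpy as np

def ray_termination_distribution(occupancy: np.ndarray) -> np.ndarray:
    """P(ray terminates at voxel i) for occupancy probabilities ordered from
    the camera outward: o_i times the probability all earlier voxels are free."""
    free_before = np.concatenate(([1.0], np.cumprod(1.0 - occupancy)[:-1]))
    return occupancy * free_before

occ = np.array([0.1, 0.2, 0.7, 0.9])        # per-voxel occupancy along one ray
p = ray_termination_distribution(occ)
depths = np.array([1.0, 2.0, 3.0, 4.0])     # hypothetical voxel depths (meters)
print(p)                                    # [0.1, 0.18, 0.504, 0.1944]
print((p * depths).sum() / p.sum())         # expected termination depth, ~2.8 m
```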

pdf suppmat Video Project Page code Poster [BibTex]


Die kybernetische Revolution (The Cybernetic Revolution)

Schölkopf, B.

Süddeutsche Zeitung, 15 March 2018 (misc)

link (url) [BibTex]


Functional Programming for Modular Bayesian Inference

Ścibior, A., Kammar, O., Ghahramani, Z.

Proceedings of the ACM on Programming Languages, 2(ICFP), Article No. 83, pages: 1-29, ACM, 2018 (conference)

DOI Project Page [BibTex]


Design and Analysis of the NIPS 2016 Review Process

Shah*, N., Tabibian*, B., Muandet, K., Guyon, I., von Luxburg, U.

Journal of Machine Learning Research, 19(49):1-34, 2018, *equal contribution (article)

arXiv link (url) Project Page [BibTex]


A Flexible Approach for Fair Classification

Zafar, M. B., Valera, I., Gomez Rodriguez, M., Gummadi, K.

Journal of Machine Learning Research, 2018 (article) Accepted

Project Page [BibTex]


Automatic Bayesian Density Analysis

Vergari, A., Molina, A., Peharz, R., Ghahramani, Z., Kersting, K., Valera, I.

2018 (conference) Submitted

arXiv [BibTex]


A virtual reality environment for experiments in assistive robotics and neural interfaces

Bustamante, S.

Graduate School of Neural Information Processing, Eberhard Karls Universität Tübingen, Germany, 2018 (mastersthesis)

PDF [BibTex]


Does universal controllability of physical systems prohibit thermodynamic cycles?

Janzing, D., Wocjan, P.

Open Systems and Information Dynamics, 25(3):1850016, 2018 (article)

PDF DOI [BibTex]


Optimal Trajectory Generation and Learning Control for Robot Table Tennis

Koc, O.

Technical University Darmstadt, Germany, 2018 (phdthesis)

[BibTex]


Learning Causality and Causality-Related Learning: Some Recent Progress

Zhang, K., Schölkopf, B., Spirtes, P., Glymour, C.

National Science Review, 5(1):26-29, 2018 (article)

DOI [BibTex]


Deep Marching Cubes: Learning Explicit Surface Representations

Liao, Y., Donne, S., Geiger, A.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Computer Society, 2018 (inproceedings)

Abstract
Existing learning-based solutions to 3D surface prediction cannot be trained end-to-end as they operate on intermediate representations (e.g., TSDF) from which 3D surface meshes must be extracted in a post-processing step (e.g., via the marching cubes algorithm). In this paper, we investigate the problem of end-to-end 3D surface prediction. We first demonstrate that the marching cubes algorithm is not differentiable and propose an alternative differentiable formulation which we insert as a final layer into a 3D convolutional neural network. We further propose a set of loss functions which allow for training our model with sparse point supervision. Our experiments demonstrate that the model allows for predicting sub-voxel accurate 3D shapes of arbitrary topology. Additionally, it learns to complete shapes and to separate an object's inside from its outside even in the presence of sparse and incomplete ground truth. We investigate the benefits of our approach on the task of inferring shapes from 3D point clouds. Our model is flexible and can be combined with a variety of shape encoder and shape inference techniques.
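
The non-differentiability argument can be seen on a single grid edge; the sketch below is a simplified illustration (the sigmoid relaxation and the beta parameter are assumptions for exposition, not the paper's exact layer). The crossing location given by linear interpolation is differentiable in the endpoint values, but the binary decision that a sign change exists is a hard step; smoothing that decision restores gradients.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def soft_edge_crossing(d0, d1, beta=100.0):
    """Soft surrogate for one marching-cubes edge.

    d0, d1: signed distance values at the edge endpoints.
    Returns (probability that the isosurface crosses the edge,
             interpolated crossing point, valid in [0, 1] when the signs differ).
    """
    p_cross = sigmoid(-beta * d0 * d1)   # ~1 if signs differ, ~0 otherwise; smooth in d0, d1
    t = d0 / (d0 - d1 + 1e-9)            # standard linear-interpolation crossing
    return p_cross, t

print(soft_edge_crossing(0.3, -0.2))  # signs differ: p_cross ~1, crossing at t = 0.6
print(soft_edge_crossing(0.3, 0.4))   # same sign:    p_cross ~0 (t is then ignored)
```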

pdf suppmat Video Project Page Poster [BibTex]


Semantic Visual Localization

Schönberger, J., Pollefeys, M., Geiger, A., Sattler, T.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Computer Society, 2018 (inproceedings)

Abstract
Robust visual localization under a wide range of viewing conditions is a fundamental problem in computer vision. Handling the difficult cases of this problem is not only very challenging but also of high practical relevance, e.g., in the context of life-long localization for augmented reality or autonomous robots. In this paper, we propose a novel approach based on a joint 3D geometric and semantic understanding of the world, enabling it to succeed under conditions where previous approaches failed. Our method leverages a novel generative model for descriptor learning, trained on semantic scene completion as an auxiliary task. The resulting 3D descriptors are robust to missing observations by encoding high-level 3D geometric and semantic information. Experiments on several challenging large-scale localization datasets demonstrate reliable localization under extreme viewpoint, illumination, and geometry changes.
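
Once descriptors are learned, the localization step itself reduces to nearest-neighbor retrieval against a database of descriptor-pose pairs. The following sketch shows only that generic retrieval step (the descriptor dimensionality and the cosine-similarity choice are my assumptions; the paper's descriptors come from a 3D generative model trained with semantic scene completion as an auxiliary task).

```python
import numpy as np

def localize(query_desc, db_descs, db_poses):
    """Return the pose attached to the most similar database descriptor (cosine similarity)."""
    q = query_desc / np.linalg.norm(query_desc)
    d = db_descs / np.linalg.norm(db_descs, axis=1, keepdims=True)
    return db_poses[int(np.argmax(d @ q))]

rng = np.random.default_rng(1)
db = rng.normal(size=(100, 64))               # hypothetical 64-D descriptors
poses = np.arange(100)                        # stand-ins for the 6-DoF poses
query = db[42] + 0.1 * rng.normal(size=64)    # noisy re-observation of place 42
print(localize(query, db, poses))             # -> 42
```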

pdf suppmat Poster Project Page [BibTex]


Augmented Reality Meets Computer Vision: Efficient Data Generation for Urban Driving Scenes

Alhaija, H., Mustikovela, S., Mescheder, L., Geiger, A., Rother, C.

International Journal of Computer Vision (IJCV), 2018 (article)

Abstract
The success of deep learning in computer vision is based on the availability of large annotated datasets. To lower the need for hand labeled images, virtually rendered 3D worlds have recently gained popularity. Unfortunately, creating realistic 3D content is challenging on its own and requires significant human effort. In this work, we propose an alternative paradigm which combines real and synthetic data for learning semantic instance segmentation and object detection models. Exploiting the fact that not all aspects of the scene are equally important for this task, we propose to augment real-world imagery with virtual objects of the target category. Capturing real-world images at large scale is easy and cheap, and directly provides real background appearances without the need for creating complex 3D models of the environment. We present an efficient procedure to augment these images with virtual objects. In contrast to modeling complete 3D environments, our data augmentation approach requires only a few user interactions in combination with 3D models of the target object category. Leveraging our approach, we introduce a novel dataset of augmented urban driving scenes with 360 degree images that are used as environment maps to create realistic lighting and reflections on rendered objects. We analyze the significance of realistic object placement by comparing manual placement by humans to automatic methods based on semantic scene analysis. This allows us to create composite images which exhibit both realistic background appearance as well as a large number of complex object arrangements. Through an extensive set of experiments, we conclude the right set of parameters to produce augmented data which can maximally enhance the performance of instance segmentation models. Further, we demonstrate the utility of the proposed approach on training standard deep models for semantic instance segmentation and object detection of cars in outdoor driving scenarios. We test the models trained on our augmented data on the KITTI 2015 dataset, which we have annotated with pixel-accurate ground truth, and on the Cityscapes dataset. Our experiments demonstrate that the models trained on augmented imagery generalize better than those trained on fully synthetic data or models trained on limited amounts of annotated real data.
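
At its core, each augmented training image described above is produced by an alpha-over composite of a rendered object onto a real photograph. A minimal sketch of that final operation, assuming a precomputed alpha matte (the actual pipeline additionally uses 360-degree environment maps for lighting and semantic analysis for placement):

```python
import numpy as np

def composite(background, render, alpha):
    """Alpha-over: place a rendered object (with its alpha matte) onto a real image.

    background, render: float arrays of shape (H, W, 3) with values in [0, 1]
    alpha: float array of shape (H, W, 1); 1 where the virtual object covers the pixel
    """
    return alpha * render + (1.0 - alpha) * background

H, W = 4, 4                                  # toy resolution
bg = np.full((H, W, 3), 0.5)                 # stand-in for a real street image
obj = np.zeros((H, W, 3)); obj[1:3, 1:3] = [0.9, 0.1, 0.1]   # "rendered car" pixels
a = np.zeros((H, W, 1)); a[1:3, 1:3] = 1.0   # its alpha matte
aug = composite(bg, obj, a)                  # augmented training image
print(aug[2, 2], aug[0, 0])                  # object color vs. untouched background
```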

pdf Project Page [BibTex]