


Frame-Recurrent Video Super-Resolution

Sajjadi, M. S. M., Vemulapalli, R., Brown, M.

IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018 (conference)

ei

ArXiv link (url) [BibTex]


Learning Face Deblurring Fast and Wide

Jin, M., Hirsch, M., Favaro, P.

The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pages: 745-753, June 2018 (conference)

ei

link (url) [BibTex]


Wasserstein Auto-Encoders

Tolstikhin, I., Bousquet, O., Gelly, S., Schölkopf, B.

6th International Conference on Learning Representations (ICLR), May 2018 (conference)

ei

link (url) Project Page [BibTex]


Fidelity-Weighted Learning

Dehghani, M., Mehrjou, A., Gouws, S., Kamps, J., Schölkopf, B.

6th International Conference on Learning Representations (ICLR), May 2018 (conference)

ei

link (url) Project Page [BibTex]


Inducing Probabilistic Context-Free Grammars for the Sequencing of Movement Primitives

Lioutikov, R., Maeda, G., Veiga, F., Kersting, K., Peters, J.

IEEE International Conference on Robotics and Automation (ICRA), pages: 1-8, IEEE, May 2018 (conference)

ei

DOI Project Page [BibTex]


Sobolev GAN

Mroueh, Y., Li*, C., Sercu*, T., Raj*, A., Cheng, Y.

6th International Conference on Learning Representations (ICLR), May 2018, *equal contribution (conference)

ei

link (url) Project Page [BibTex]


Temporal Difference Models: Model-Free Deep RL for Model-Based Control

Pong*, V., Gu*, S., Dalal, M., Levine, S.

6th International Conference on Learning Representations (ICLR), May 2018, *equal contribution (conference)

ei

link (url) Project Page [BibTex]


Robust Dense Mapping for Large-Scale Dynamic Environments

Barsan, I. A., Liu, P., Pollefeys, M., Geiger, A.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), IEEE, May 2018 (inproceedings)

Abstract
We present a stereo-based dense mapping algorithm for large-scale dynamic urban environments. In contrast to other existing methods, we simultaneously reconstruct the static background, the moving objects, and the potentially moving but currently stationary objects separately, which is desirable for high-level mobile robotic tasks such as path planning in crowded environments. We use both instance-aware semantic segmentation and sparse scene flow to classify objects as either background, moving, or potentially moving, thereby ensuring that the system is able to model objects with the potential to transition from static to dynamic, such as parked cars. Given camera poses estimated from visual odometry, both the background and the (potentially) moving objects are reconstructed separately by fusing the depth maps computed from the stereo input. In addition to visual odometry, sparse scene flow is also used to estimate the 3D motions of the detected moving objects, in order to reconstruct them accurately. A map pruning technique is further developed to improve reconstruction accuracy and reduce memory consumption, leading to increased scalability. We evaluate our system thoroughly on the well-known KITTI dataset. Our system is capable of running on a PC at approximately 2.5Hz, with the primary bottleneck being the instance-aware semantic segmentation, which is a limitation we hope to address in future work.

avg

pdf Video Project Page Project Page [BibTex]
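
As a concrete reading of the abstract above, the following Python sketch (illustrative only, not the authors' implementation; the class names and the motion threshold are assumptions) spells out the three-way decision that routes each detected instance to a reconstruction stream:

    # Instance-aware semantic segmentation says whether an object *could* move;
    # sparse scene flow says whether it currently *is* moving.
    MOVABLE_CLASSES = {"car", "bus", "truck", "pedestrian", "cyclist"}  # assumed set

    def classify_instance(semantic_class, scene_flow_residual, motion_threshold=0.1):
        """Route a detected instance to one of the three reconstruction streams."""
        if semantic_class not in MOVABLE_CLASSES:
            return "background"            # fused into the static map
        if scene_flow_residual > motion_threshold:
            return "moving"                # reconstructed with its own estimated 3D motion
        return "potentially moving"        # e.g. a parked car, reconstructed separately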


Wasserstein Auto-Encoders: Latent Dimensionality and Random Encoders

Rubenstein, P. K., Schölkopf, B., Tolstikhin, I.

Workshop at the 6th International Conference on Learning Representations (ICLR), May 2018 (conference)

ei

link (url) Project Page [BibTex]


Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning

Eysenbach, B., Gu, S., Ibarz, J., Levine, S.

6th International Conference on Learning Representations (ICLR), May 2018 (conference)

ei

Videos link (url) Project Page [BibTex]


Tempered Adversarial Networks

Sajjadi, M. S. M., Parascandolo, G., Mehrjou, A., Schölkopf, B.

Workshop at the 6th International Conference on Learning Representations (ICLR), May 2018 (conference)

ei

arXiv [BibTex]


Learning Coupled Forward-Inverse Models with Combined Prediction Errors

Koert, D., Maeda, G., Neumann, G., Peters, J.

IEEE International Conference on Robotics and Automation (ICRA), pages: 2433-2439, IEEE, May 2018 (conference)

ei

DOI Project Page [BibTex]


Learning Disentangled Representations with Wasserstein Auto-Encoders

Rubenstein, P. K., Schölkopf, B., Tolstikhin, I.

Workshop at the 6th International Conference on Learning Representations (ICLR), May 2018 (conference)

ei

link (url) Project Page [BibTex]


Automatic Estimation of Modulation Transfer Functions

Bauer, M., Volchkov, V., Hirsch, M., Schölkopf, B.

IEEE International Conference on Computational Photography (ICCP), May 2018 (conference)

ei sf

DOI [BibTex]


Causal Discovery Using Proxy Variables

Rojas-Carulla, M., Baroni, M., Lopez-Paz, D.

Workshop at 6th International Conference on Learning Representations (ICLR), May 2018 (conference)

ei

link (url) [BibTex]


Sample and Feedback Efficient Hierarchical Reinforcement Learning from Human Preferences

Pinsler, R., Akrour, R., Osa, T., Peters, J., Neumann, G.

IEEE International Conference on Robotics and Automation (ICRA), pages: 596-601, IEEE, May 2018 (conference)

ei

DOI Project Page [BibTex]


Group invariance principles for causal generative models

Besserve, M., Shajarisales, N., Schölkopf, B., Janzing, D.

Proceedings of the 21st International Conference on Artificial Intelligence and Statistics (AISTATS), 84, pages: 557-565, Proceedings of Machine Learning Research, (Editors: Amos Storkey and Fernando Perez-Cruz), PMLR, April 2018 (conference)

ei

link (url) [BibTex]


Boosting Variational Inference: an Optimization Perspective

Locatello, F., Khanna, R., Ghosh, J., Rätsch, G.

Proceedings of the 21st International Conference on Artificial Intelligence and Statistics (AISTATS), 84, pages: 464-472, Proceedings of Machine Learning Research, (Editors: Amos Storkey and Fernando Perez-Cruz), PMLR, April 2018 (conference)

ei

link (url) Project Page Project Page [BibTex]


Cause-Effect Inference by Comparing Regression Errors

Blöbaum, P., Janzing, D., Washio, T., Shimizu, S., Schölkopf, B.

Proceedings of the 21st International Conference on Artificial Intelligence and Statistics (AISTATS), 84, pages: 900-909, Proceedings of Machine Learning Research, (Editors: Amos Storkey and Fernando Perez-Cruz), PMLR, April 2018 (conference)

ei

link (url) [BibTex]


Will People Like Your Image? Learning the Aesthetic Space

Schwarz, K., Wieschollek, P., Lensch, H. P. A.

2018 IEEE Winter Conference on Applications of Computer Vision (WACV), pages: 2048-2057, March 2018 (conference)

ei

DOI [BibTex]


Leveraging the Crowd to Detect and Reduce the Spread of Fake News and Misinformation

Kim, J., Tabibian, B., Oh, A., Schölkopf, B., Gomez Rodriguez, M.

Proceedings of the 11th ACM International Conference on Web Search and Data Mining (WSDM), pages: 324-332, (Editors: Yi Chang, Chengxiang Zhai, Yan Liu, and Yoelle Maarek), ACM, February 2018 (conference)

ei

DOI Project Page Project Page [BibTex]


RayNet: Learning Volumetric 3D Reconstruction with Ray Potentials

Paschalidou, D., Ulusoy, A. O., Schmitt, C., Gool, L., Geiger, A.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Computer Society, 2018 (inproceedings)

Abstract
In this paper, we consider the problem of reconstructing a dense 3D model using images captured from different views. Recent methods based on convolutional neural networks (CNN) allow learning the entire task from data. However, they do not incorporate the physics of image formation such as perspective geometry and occlusion. Instead, classical approaches based on Markov Random Fields (MRF) with ray-potentials explicitly model these physical processes, but they cannot cope with large surface appearance variations across different viewpoints. In this paper, we propose RayNet, which combines the strengths of both frameworks. RayNet integrates a CNN that learns view-invariant feature representations with an MRF that explicitly encodes the physics of perspective projection and occlusion. We train RayNet end-to-end using empirical risk minimization. We thoroughly evaluate our approach on challenging real-world datasets and demonstrate its benefits over a piece-wise trained baseline, hand-crafted models as well as other learning-based approaches.

avg

pdf suppmat Video Project Page code Poster Project Page [BibTex]


Functional Programming for Modular Bayesian Inference

Ścibior, A., Kammar, O., Ghahramani, Z.

Proceedings of the ACM on Functional Programming (ICFP), 2(Article No. 83):1-29, ACM, 2018 (conference)

ei

DOI Project Page [BibTex]


Automatic Bayesian Density Analysis

Vergari, A., Molina, A., Peharz, R., Ghahramani, Z., Kersting, K., Valera, I.

2018 (conference) Submitted

ei

arXiv [BibTex]


Dissecting Adam: The Sign, Magnitude and Variance of Stochastic Gradients

Balles, L., Hennig, P.

In Proceedings of the 35th International Conference on Machine Learning (ICML), 2018 (inproceedings) Accepted

Abstract
The ADAM optimizer is exceedingly popular in the deep learning community. Often it works very well, sometimes it doesn't. Why? We interpret ADAM as a combination of two aspects: for each weight, the update direction is determined by the sign of stochastic gradients, whereas the update magnitude is determined by an estimate of their relative variance. We disentangle these two aspects and analyze them in isolation, gaining insight into the mechanisms underlying ADAM. This analysis also extends recent results on adverse effects of ADAM on generalization, isolating the sign aspect as the problematic one. Transferring the variance adaptation to SGD gives rise to a novel method, completing the practitioner's toolbox for problems where ADAM fails.

pn

link (url) Project Page [BibTex]
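
To make the abstract's decomposition concrete, here is a minimal NumPy sketch (not the authors' code; the shrinkage-style damping in the second function is an illustrative rendering of variance adaptation, not the paper's exact method):

    import numpy as np

    def adam_sign_and_magnitude(m, v, eps=1e-8):
        """Read an Adam-style step m / (sqrt(v) + eps) as sign times magnitude.

        m: exponential moving average of stochastic gradients (first moment)
        v: exponential moving average of squared gradients (second moment)
        The magnitude |m| / sqrt(v) behaves like 1 / sqrt(1 + relative variance),
        so coordinates with noisier gradients take smaller steps.
        """
        direction = np.sign(m)
        magnitude = np.abs(m) / (np.sqrt(v) + eps)
        return direction, magnitude

    def variance_adapted_sgd_step(w, g_mean, g_var, lr=1e-2, eps=1e-8):
        """Illustrative variance adaptation transferred to SGD: keep the raw
        gradient direction (no sign-taking) and damp each coordinate by
        mean^2 / (mean^2 + variance)."""
        damping = g_mean**2 / (g_mean**2 + g_var + eps)
        return w - lr * damping * g_mean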


Deep Marching Cubes: Learning Explicit Surface Representations

Liao, Y., Donne, S., Geiger, A.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Computer Society, 2018 (inproceedings)

Abstract
Existing learning-based solutions to 3D surface prediction cannot be trained end-to-end as they operate on intermediate representations (e.g., TSDF) from which 3D surface meshes must be extracted in a post-processing step (e.g., via the marching cubes algorithm). In this paper, we investigate the problem of end-to-end 3D surface prediction. We first demonstrate that the marching cubes algorithm is not differentiable and propose an alternative differentiable formulation which we insert as a final layer into a 3D convolutional neural network. We further propose a set of loss functions which allow for training our model with sparse point supervision. Our experiments demonstrate that the model allows for predicting sub-voxel accurate 3D shapes of arbitrary topology. Additionally, it learns to complete shapes and to separate an object's inside from its outside even in the presence of sparse and incomplete ground truth. We investigate the benefits of our approach on the task of inferring shapes from 3D point clouds. Our model is flexible and can be combined with a variety of shape encoder and shape inference techniques.

avg

pdf suppmat Video Project Page Poster Project Page [BibTex]


Semantic Visual Localization

Schönberger, J., Pollefeys, M., Geiger, A., Sattler, T.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Computer Society, 2018 (inproceedings)

Abstract
Robust visual localization under a wide range of viewing conditions is a fundamental problem in computer vision. Handling the difficult cases of this problem is not only very challenging but also of high practical relevance, e.g., in the context of life-long localization for augmented reality or autonomous robots. In this paper, we propose a novel approach based on a joint 3D geometric and semantic understanding of the world, enabling it to succeed under conditions where previous approaches failed. Our method leverages a novel generative model for descriptor learning, trained on semantic scene completion as an auxiliary task. The resulting 3D descriptors are robust to missing observations by encoding high-level 3D geometric and semantic information. Experiments on several challenging large-scale localization datasets demonstrate reliable localization under extreme viewpoint, illumination, and geometry changes.

avg

pdf suppmat Poster Project Page [BibTex]


Which Training Methods for GANs do actually Converge?

Mescheder, L., Geiger, A., Nowozin, S.

International Conference on Machine Learning (ICML), 2018 (conference)

Abstract
Recent work has shown local convergence of GAN training for absolutely continuous data and generator distributions. In this paper, we show that the requirement of absolute continuity is necessary: we describe a simple yet prototypical counterexample showing that in the more realistic case of distributions that are not absolutely continuous, unregularized GAN training is not always convergent. Furthermore, we discuss regularization strategies that were recently proposed to stabilize GAN training. Our analysis shows that GAN training with instance noise or zero-centered gradient penalties converges. On the other hand, we show that Wasserstein-GANs and WGAN-GP with a finite number of discriminator updates per generator update do not always converge to the equilibrium point. We discuss these results, leading us to a new explanation for the stability problems of GAN training. Based on our analysis, we extend our convergence results to more general GANs and prove local convergence for simplified gradient penalties even if the generator and data distributions lie on lower dimensional manifolds. We find these penalties to work well in practice and use them to learn high-resolution generative image models for a variety of datasets with little hyperparameter tuning.

avg

code video paper supplement slides poster Project Page [BibTex]
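
The zero-centered gradient penalty that the abstract singles out as a convergent regularizer is short to state. A minimal PyTorch-style sketch (assumptions: the discriminator returns per-sample scores, and gamma is the penalty weight; this is not the paper's reference code):

    import torch

    def zero_centered_gradient_penalty(discriminator, real_batch, gamma=10.0):
        """Penalize the squared norm of grad_x D(x) at real samples,
        pushing the discriminator's gradient toward zero on the data distribution."""
        x = real_batch.detach().requires_grad_(True)
        scores = discriminator(x)
        grads, = torch.autograd.grad(outputs=scores.sum(), inputs=x, create_graph=True)
        penalty = grads.reshape(grads.size(0), -1).pow(2).sum(dim=1).mean()
        return 0.5 * gamma * penalty

The returned term is simply added to the discriminator loss on each real-data update.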


k–SVRG: Variance Reduction for Large Scale Optimization

Raj, A., Stich, S.

2018 (inproceedings) Submitted

ei

[BibTex]


Probabilistic Deep Learning using Random Sum-Product Networks

Peharz, R., Vergari, A., Stelzner, K., Molina, A., Trapp, M., Kersting, K., Ghahramani, Z.

2018 (conference) Submitted

ei

arXiv [BibTex]


Learning 3D Shape Completion from Laser Scan Data with Weak Supervision

Stutz, D., Geiger, A.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Computer Society, 2018 (inproceedings)

Abstract
3D shape completion from partial point clouds is a fundamental problem in computer vision and computer graphics. Recent approaches can be characterized as either data-driven or learning-based. Data-driven approaches rely on a shape model whose parameters are optimized to fit the observations. Learning-based approaches, in contrast, avoid the expensive optimization step and instead directly predict the complete shape from the incomplete observations using deep neural networks. However, full supervision is required, which is often not available in practice. In this work, we propose a weakly-supervised learning-based approach to 3D shape completion which neither requires slow optimization nor direct supervision. While we also learn a shape prior on synthetic data, we amortize, i.e., learn, maximum likelihood fitting using deep neural networks resulting in efficient shape completion without sacrificing accuracy. Tackling 3D shape completion of cars on ShapeNet and KITTI, we demonstrate that the proposed amortized maximum likelihood approach is able to compete with a fully supervised baseline and a state-of-the-art data-driven approach while being significantly faster. On ModelNet, we additionally show that the approach is able to generalize to other object categories as well.

avg

pdf suppmat Project Page Poster Project Page [BibTex]


A Differentially Private Kernel Two-Sample Test

Raj*, A., Law*, L., Sejdinovic*, D., Park, M.

2018, *equal contribution (conference) Submitted

ei

[BibTex]


Learning Transformation Invariant Representations with Weak Supervision

Coors, B., Condurache, A., Mertins, A., Geiger, A.

In International Conference on Computer Vision Theory and Applications, 2018 (inproceedings)

Abstract
Deep convolutional neural networks are the current state-of-the-art solution to many computer vision tasks. However, their ability to handle large global and local image transformations is limited. Consequently, extensive data augmentation is often utilized to incorporate prior knowledge about desired invariances to geometric transformations such as rotations or scale changes. In this work, we combine data augmentation with an unsupervised loss which enforces similarity between the predictions of augmented copies of an input sample. Our loss acts as an effective regularizer which facilitates the learning of transformation invariant representations. We investigate the effectiveness of the proposed similarity loss on rotated MNIST and the German Traffic Sign Recognition Benchmark (GTSRB) in the context of different classification models including ladder networks. Our experiments demonstrate improvements with respect to the standard data augmentation approach for supervised and semi-supervised learning tasks, in particular in the presence of little annotated data. In addition, we analyze the performance of the proposed approach with respect to its hyperparameters, including the strength of the regularization as well as the layer where representation similarity is enforced.

avg

pdf [BibTex]
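
The similarity loss described in the abstract amounts to a consistency term between two random augmentations of the same input. A minimal PyTorch-style sketch (assumptions: the model returns predictions directly, augment is any stochastic transformation, and mean squared error stands in for whatever distance the authors enforce):

    import torch.nn.functional as F

    def augmentation_similarity_loss(model, x, augment, weight=1.0):
        """Unsupervised regularizer: predictions for two augmented copies
        of the same sample should agree."""
        p1 = model(augment(x))
        p2 = model(augment(x))
        return weight * F.mse_loss(p1, p2)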


Denotational Validation of Higher-order Bayesian Inference

Ścibior, A., Kammar, O., Vákár, M., Staton, S., Yang, H., Cai, Y., Ostermann, K., Moss, S. K., Heunen, C., Ghahramani, Z.

Proceedings of the ACM on Principles of Programming Languages (POPL), 2(Article No. 60):1-29, ACM, 2018 (conference)

ei

DOI Project Page [BibTex]

2016


Consistent Kernel Mean Estimation for Functions of Random Variables

Simon-Gabriel*, C. J., Ścibior*, A., Tolstikhin, I., Schölkopf, B.

Advances in Neural Information Processing Systems 29, pages: 1732-1740, (Editors: D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett), Curran Associates, Inc., 30th Annual Conference on Neural Information Processing Systems, December 2016, *joint first authors (conference)

ei

link (url) Project Page Project Page Project Page [BibTex]


Understanding Probabilistic Sparse Gaussian Process Approximations

Bauer, M., van der Wilk, M., Rasmussen, C. E.

Advances in Neural Information Processing Systems 29, pages: 1533-1541, (Editors: D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett), Curran Associates, Inc., 30th Annual Conference on Neural Information Processing Systems, December 2016 (conference)

ei

link (url) Project Page [BibTex]


Minimax Estimation of Maximum Mean Discrepancy with Radial Kernels

Tolstikhin, I., Sriperumbudur, B. K., Schölkopf, B.

Advances in Neural Information Processing Systems 29, pages: 1930-1938, (Editors: D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett), Curran Associates, Inc., 30th Annual Conference on Neural Information Processing Systems, December 2016 (conference)

ei

link (url) Project Page [BibTex]


Local-utopia Policy Selection for Multi-objective Reinforcement Learning

Parisi, S., Blank, A., Viernickel, T., Peters, J.

In IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL), pages: 1-7, IEEE, December 2016 (inproceedings)

ei

DOI [BibTex]


Lifelong Learning with Weighted Majority Votes

Pentina, A., Urner, R.

Advances in Neural Information Processing Systems 29, pages: 3612-3620, (Editors: D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett), Curran Associates, Inc., 30th Annual Conference on Neural Information Processing Systems, December 2016 (conference)

ei

link (url) Project Page [BibTex]


Active Nearest-Neighbor Learning in Metric Spaces

Kontorovich, A., Sabato, S., Urner, R.

Advances in Neural Information Processing Systems 29, pages: 856-864, (Editors: D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett), Curran Associates, Inc., 30th Annual Conference on Neural Information Processing Systems, December 2016 (conference)

ei

link (url) Project Page [BibTex]


Catching heuristics are optimal control policies

Belousov, B., Neumann, G., Rothkopf, C., Peters, J.

Advances in Neural Information Processing Systems 29, pages: 1426-1434, (Editors: D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett), Curran Associates, Inc., 30th Annual Conference on Neural Information Processing Systems, December 2016 (conference)

ei

link (url) Project Page [BibTex]


Incremental Imitation Learning of Context-Dependent Motor Skills

Ewerton, M., Maeda, G., Kollegger, G., Wiemeyer, J., Peters, J.

IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids), pages: 351-358, IEEE, November 2016 (conference)

ei

DOI Project Page [BibTex]


Using Probabilistic Movement Primitives for Striking Movements

Gomez-Gonzalez, S., Neumann, G., Schölkopf, B., Peters, J.

16th IEEE-RAS International Conference on Humanoid Robots (Humanoids), pages: 502-508, November 2016 (conference)

am ei

link (url) DOI Project Page [BibTex]


Demonstration Based Trajectory Optimization for Generalizable Robot Motions

Koert, D., Maeda, G., Lioutikov, R., Neumann, G., Peters, J.

IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids), pages: 351-358, IEEE, November 2016 (conference)

ei

DOI Project Page [BibTex]


Jointly Learning Trajectory Generation and Hitting Point Prediction in Robot Table Tennis

Huang, Y., Büchler, D., Koc, O., Schölkopf, B., Peters, J.

16th IEEE-RAS International Conference on Humanoid Robots (Humanoids), pages: 650-655, November 2016 (conference)

am ei

final link (url) DOI Project Page [BibTex]


Deep Spiking Networks for Model-based Planning in Humanoids

Tanneberg, D., Paraschos, A., Peters, J., Rueckert, E.

IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids), pages: 656-661, IEEE, November 2016 (conference)

ei

DOI Project Page [BibTex]


Anticipative Interaction Primitives for Human-Robot Collaboration

Maeda, G., Maloo, A., Ewerton, M., Lioutikov, R., Peters, J.

AAAI Fall Symposium Series: Shared Autonomy in Research and Practice, pages: 325-330, November 2016 (conference)

ei

link (url) [BibTex]