2019


Selecting causal brain features with a single conditional independence test per feature

Mastakouri, A., Schölkopf, B., Janzing, D.

Advances in Neural Information Processing Systems 32, 33rd Annual Conference on Neural Information Processing Systems, December 2019 (conference) Accepted

ei

[BibTex]

EM-Fusion: Dynamic Object-Level SLAM With Probabilistic Data Association

Strecke, M., Stückler, J.

International Conference on Computer Vision, October 2019, arXiv:1904.11781 (conference) Accepted

ev

preprint Project page Poster [BibTex]

Neural Signatures of Motor Skill in the Resting Brain

Ozdenizci, O., Meyer, T., Wichmann, F., Peters, J., Schölkopf, B., Cetin, M., Grosse-Wentrup, M.

Proceedings of the IEEE International Conference on Systems, Man and Cybernetics (SMC 2019), October 2019 (conference) Accepted

ei

[BibTex]

Beta Power May Mediate the Effect of Gamma-TACS on Motor Performance

Mastakouri, A., Schölkopf, B., Grosse-Wentrup, M.

Engineering in Medicine and Biology Conference (EMBC), July 2019 (conference) Accepted

ei

arXiv PDF [BibTex]

Coordinating Users of Shared Facilities via Data-driven Predictive Assistants and Game Theory

Geiger, P., Besserve, M., Winkelmann, J., Proissl, C., Schölkopf, B.

Proceedings of the 35th Conference on Uncertainty in Artificial Intelligence (UAI), pages: 49, (Editors: Amir Globerson and Ricardo Silva), AUAI Press, July 2019 (conference)

ei

link (url) [BibTex]

The Sensitivity of Counterfactual Fairness to Unmeasured Confounding

Kilbertus, N., Ball, P. J., Kusner, M. J., Weller, A., Silva, R.

Proceedings of the 35th Conference on Uncertainty in Artificial Intelligence (UAI), pages: 213, (Editors: Amir Globerson and Ricardo Silva), AUAI Press, July 2019 (conference)

ei

link (url) [BibTex]

The Incomplete Rosetta Stone problem: Identifiability results for Multi-view Nonlinear ICA

Gresele*, L., Rubenstein*, P. K., Mehrjou, A., Locatello, F., Schölkopf, B.

Proceedings of the 35th Conference on Uncertainty in Artificial Intelligence (UAI), pages: 53, (Editors: Amir Globerson and Ricardo Silva), AUAI Press, July 2019, *equal contribution (conference)

ei

link (url) [BibTex]

Random Sum-Product Networks: A Simple and Effective Approach to Probabilistic Deep Learning

Peharz, R., Vergari, A., Stelzner, K., Molina, A., Shao, X., Trapp, M., Kersting, K., Ghahramani, Z.

Proceedings of the 35th Conference on Uncertainty in Artificial Intelligence (UAI), pages: 124, (Editors: Amir Globerson and Ricardo Silva), AUAI Press, July 2019 (conference)

ei

link (url) [BibTex]

Kernel Mean Matching for Content Addressability of GANs

Jitkrittum*, W., Sangkloy*, P., Gondal, M. W., Raj, A., Hays, J., Schölkopf, B.

Proceedings of the 36th International Conference on Machine Learning (ICML), 97, pages: 3140-3151, Proceedings of Machine Learning Research, (Editors: Chaudhuri, Kamalika and Salakhutdinov, Ruslan), PMLR, June 2019, *equal contribution (conference)

ei

PDF link (url) [BibTex]

Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations

Locatello, F., Bauer, S., Lucic, M., Raetsch, G., Gelly, S., Schölkopf, B., Bachem, O.

Proceedings of the 36th International Conference on Machine Learning (ICML), 97, pages: 4114-4124, Proceedings of Machine Learning Research, (Editors: Chaudhuri, Kamalika and Salakhutdinov, Ruslan), PMLR, June 2019 (conference)

ei

PDF link (url) [BibTex]

Local Temporal Bilinear Pooling for Fine-grained Action Parsing

Zhang, Y., Tang, S., Muandet, K., Jarvers, C., Neumann, H.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2019, June 2019 (inproceedings)

Abstract
Fine-grained temporal action parsing is important in many applications, such as daily activity understanding, human motion analysis, surgical robotics, and other tasks requiring subtle and precise operations over long time periods. In this paper we propose a novel bilinear pooling operation, which is used in intermediate layers of a temporal convolutional encoder-decoder net. In contrast to other work, our proposed bilinear pooling is learnable and hence can capture more complex local statistics than its conventional counterpart. In addition, we introduce exact lower-dimensional representations of our bilinear forms, so that the dimensionality is reduced with neither information loss nor extra computation. We perform intensive experiments to quantitatively analyze our model and show superior performance over other state-of-the-art work on various datasets.

ei ps

Code video demo pdf link (url) [BibTex]
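
For readers who want a concrete picture of second-order temporal pooling, the sketch below computes a plain (non-learnable) bilinear pooling over a sliding temporal window of frame-wise features. The paper's contribution is a learnable, exactly dimensionality-reduced variant used inside a temporal convolutional encoder-decoder, so treat this only as an illustration of the basic operation; all shapes and parameters here are invented.

```python
import numpy as np

def local_bilinear_pool(X, window=5):
    """Second-order (bilinear) pooling over a sliding temporal window.

    X: (T, C) frame-wise features. Returns (T, C*C) pooled descriptors.
    Plain averaged outer products, for illustration only.
    """
    T, C = X.shape
    pad = window // 2
    Xp = np.pad(X, ((pad, pad), (0, 0)), mode="edge")
    out = np.empty((T, C * C))
    for t in range(T):
        W = Xp[t:t + window]                 # local window of frames
        out[t] = (W.T @ W / window).ravel()  # average outer product
    return out

feats = np.random.randn(100, 8)              # e.g. per-frame CNN features (made up)
pooled = local_bilinear_pool(feats)          # (100, 64) second-order descriptors
```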

Generate Semantically Similar Images with Kernel Mean Matching

Jitkrittum*, W., Sangkloy*, P., Gondal, M. W., Raj, A., Hays, J., Schölkopf, B.

6th Workshop Women in Computer Vision (WiCV) (oral presentation), June 2019, *equal contribution (conference) Accepted

ei

[BibTex]

Projections for Approximate Policy Iteration Algorithms

Akrour, R., Pajarinen, J., Peters, J., Neumann, G.

Proceedings of the 36th International Conference on Machine Learning (ICML), 97, pages: 181-190, Proceedings of Machine Learning Research, (Editors: Chaudhuri, Kamalika and Salakhutdinov, Ruslan), PMLR, June 2019 (conference)

ei

link (url) [BibTex]

Switching Linear Dynamics for Variational Bayes Filtering

Becker-Ehmck, P., Peters, J., van der Smagt, P.

Proceedings of the 36th International Conference on Machine Learning (ICML), 97, pages: 553-562, Proceedings of Machine Learning Research, (Editors: Chaudhuri, Kamalika and Salakhutdinov, Ruslan), PMLR, June 2019 (conference)

ei

link (url) [BibTex]

Robustly Disentangled Causal Mechanisms: Validating Deep Representations for Interventional Robustness

Suter, R., Miladinovic, D., Schölkopf, B., Bauer, S.

Proceedings of the 36th International Conference on Machine Learning (ICML), 97, pages: 6056-6065, Proceedings of Machine Learning Research, (Editors: Chaudhuri, Kamalika and Salakhutdinov, Ruslan), PMLR, June 2019 (conference)

ei

PDF link (url) [BibTex]

First-Order Adversarial Vulnerability of Neural Networks and Input Dimension

Simon-Gabriel, C., Ollivier, Y., Bottou, L., Schölkopf, B., Lopez-Paz, D.

Proceedings of the 36th International Conference on Machine Learning (ICML), 97, pages: 5809-5817, Proceedings of Machine Learning Research, (Editors: Chaudhuri, Kamalika and Salakhutdinov, Ruslan), PMLR, June 2019 (conference)

ei

PDF link (url) [BibTex]

Overcoming Mean-Field Approximations in Recurrent Gaussian Process Models

Ialongo, A. D., Van Der Wilk, M., Hensman, J., Rasmussen, C. E.

In Proceedings of the 36th International Conference on Machine Learning (ICML), 97, pages: 2931-2940, Proceedings of Machine Learning Research, (Editors: Chaudhuri, Kamalika and Salakhutdinov, Ruslan), PMLR, June 2019 (inproceedings)

ei

PDF link (url) [BibTex]

Meta learning variational inference for prediction

Gordon, J., Bronskill, J., Bauer, M., Nowozin, S., Turner, R.

7th International Conference on Learning Representations (ICLR), May 2019 (conference)

ei

arXiv link (url) [BibTex]

Deep Lagrangian Networks: Using Physics as Model Prior for Deep Learning

Lutter, M., Ritter, C., Peters, J.

7th International Conference on Learning Representations (ICLR), May 2019 (conference)

ei

link (url) [BibTex]

DeepOBS: A Deep Learning Optimizer Benchmark Suite

Schneider, F., Balles, L., Hennig, P.

7th International Conference on Learning Representations (ICLR), May 2019 (conference)

ei pn

link (url) [BibTex]

Disentangled State Space Models: Unsupervised Learning of Dynamics across Heterogeneous Environments

Miladinović*, D., Gondal*, M. W., Schölkopf, B., Buhmann, J. M., Bauer, S.

Deep Generative Models for Highly Structured Data Workshop at ICLR, May 2019, *equal contribution (conference)

ei

link (url) [BibTex]

SOM-VAE: Interpretable Discrete Representation Learning on Time Series

Fortuin, V., Hüser, M., Locatello, F., Strathmann, H., Rätsch, G.

7th International Conference on Learning Representations (ICLR), May 2019 (conference)

ei

link (url) [BibTex]

Resampled Priors for Variational Autoencoders

Bauer, M., Mnih, A.

22nd International Conference on Artificial Intelligence and Statistics, April 2019 (conference) Accepted

ei

arXiv [BibTex]

Semi-Generative Modelling: Covariate-Shift Adaptation with Cause and Effect Features

von Kügelgen, J., Mey, A., Loog, M.

Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS), 89, pages: 1361-1369, (Editors: Kamalika Chaudhuri and Masashi Sugiyama), PMLR, April 2019 (conference)

ei

PDF link (url) [BibTex]

Sobolev Descent

Mroueh, Y., Sercu, T., Raj, A.

Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS), 89, pages: 2976-2985, (Editors: Kamalika Chaudhuri and Masashi Sugiyama), PMLR, April 2019 (conference)

ei

PDF link (url) [BibTex]

Fast and Robust Shortest Paths on Manifolds Learned from Data

Arvanitidis, G., Hauberg, S., Hennig, P., Schober, M.

Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS), 89, pages: 1506-1515, (Editors: Kamalika Chaudhuri and Masashi Sugiyama), PMLR, April 2019 (conference)

ei pn

PDF link (url) [BibTex]

Active Probabilistic Inference on Matrices for Pre-Conditioning in Stochastic Optimization

de Roos, F., Hennig, P.

Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS), 89, pages: 1448-1457, (Editors: Kamalika Chaudhuri and Masashi Sugiyama), PMLR, April 2019 (conference)

Abstract
Pre-conditioning is a well-known concept that can significantly improve the convergence of optimization algorithms. For noise-free problems, where good pre-conditioners are not known a priori, iterative linear algebra methods offer one way to efficiently construct them. For the stochastic optimization problems that dominate contemporary machine learning, however, this approach is not readily available. We propose an iterative algorithm inspired by classic iterative linear solvers that uses a probabilistic model to actively infer a pre-conditioner in situations where Hessian-projections can only be constructed with strong Gaussian noise. The algorithm is empirically demonstrated to efficiently construct effective pre-conditioners for stochastic gradient descent and its variants. Experiments on problems of comparably low dimensionality show improved convergence. In very high-dimensional problems, such as those encountered in deep learning, the pre-conditioner effectively becomes an automatic learning-rate adaptation scheme, which we also empirically show to work well.

pn ei

PDF link (url) [BibTex]
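
As a rough intuition for what a pre-conditioner buys in this setting, the sketch below estimates a diagonal pre-conditioner from noisy Hessian-vector products (a Hutchinson-style sign-probe average) and then divides the stochastic gradient by it. This is a deliberately simplified stand-in, not the paper's probabilistic matrix inference; the quadratic test problem and all constants are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50
scales = np.logspace(0, 3, d)            # badly conditioned quadratic test problem
H = np.diag(scales)

def noisy_hvp(v):                        # Hessian projection corrupted by Gaussian noise
    return H @ v + 5.0 * rng.standard_normal(d)

# Hutchinson-style estimate of diag(H): average of v * (H v) for random sign vectors v.
diag_est = np.zeros(d)
n_probes = 200
for _ in range(n_probes):
    v = rng.choice([-1.0, 1.0], size=d)
    diag_est += v * noisy_hvp(v)
diag_est = np.abs(diag_est / n_probes) + 1e-6

x = rng.standard_normal(d)
for _ in range(200):                     # pre-conditioned stochastic gradient descent
    grad = H @ x + 0.1 * rng.standard_normal(d)
    x -= 0.5 * grad / diag_est           # divide by the diagonal pre-conditioner
```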

Fast Gaussian Process Based Gradient Matching for Parameter Identification in Systems of Nonlinear ODEs

Wenk, P., Gotovos, A., Bauer, S., Gorbach, N., Krause, A., Buhmann, J. M.

Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS), 89, pages: 1351-1360, (Editors: Kamalika Chaudhuri and Masashi Sugiyama), PMLR, April 2019 (conference)

ei

PDF PDF link (url) [BibTex]

AReS and MaRS Adversarial and MMD-Minimizing Regression for SDEs

Abbati*, G., Wenk*, P., Osborne, M. A., Krause, A., Schölkopf, B., Bauer, S.

Proceedings of the 36th International Conference on Machine Learning (ICML), 97, pages: 1-10, Proceedings of Machine Learning Research, (Editors: Chaudhuri, Kamalika and Salakhutdinov, Ruslan), PMLR, 2019, *equal contribution (conference)

ei

PDF link (url) [BibTex]

Kernel Stein Tests for Multiple Model Comparison

Lim, J. N., Yamada, M., Schölkopf, B., Jitkrittum, W.

Advances in Neural Information Processing Systems 32, 33rd Annual Conference on Neural Information Processing Systems, 2019 (conference) To be published

ei

[BibTex]

MYND: A Platform for Large-scale Neuroscientific Studies

Hohmann, M. R., Hackl, M., Wirth, B., Zaman, T., Enficiaud, R., Grosse-Wentrup, M., Schölkopf, B.

Proceedings of the 2019 Conference on Human Factors in Computing Systems (CHI), 2019 (conference) Accepted

ei

[BibTex]

A Kernel Stein Test for Comparing Latent Variable Models

Kanagawa, H., Jitkrittum, W., Mackey, L., Fukumizu, K., Gretton, A.

2019 (conference) Submitted

ei

arXiv [BibTex]

Learning to Disentangle Latent Physical Factors for Video Prediction

Zhu, D., Munderloh, M., Rosenhahn, B., Stückler, J.

In German Conference on Pattern Recognition (GCPR), 2019, to appear (inproceedings)

ev

dataset & evaluation code video preprint [BibTex]

Robust Humanoid Locomotion Using Trajectory Optimization and Sample-Efficient Learning

Yeganegi, M. H., Khadiv, M., Moosavian, S. A. A., Zhu, J., Prete, A. D., Righetti, L.

Proceedings International Conference on Humanoid Robots, IEEE, 2019 IEEE-RAS International Conference on Humanoid Robots, 2019 (conference)

Abstract
Trajectory optimization (TO) is one of the most powerful tools for generating feasible motions for humanoid robots. However, including uncertainties and stochasticity in the TO problem to generate robust motions can easily lead to intractable problems. Furthermore, since the models used in TO always have some level of abstraction, it can be hard to find a realistic set of uncertainties in the model space. In this paper we leverage a sample-efficient learning technique (Bayesian optimization) to robustify TO for humanoid locomotion. The main idea is to use data from full-body simulations to make the TO stage robust by tuning the cost weights. To this end, we split the TO problem into two phases. The first phase solves a convex optimization problem for generating center of mass (CoM) trajectories based on simplified linear dynamics. The second stage employs iterative Linear-Quadratic Gaussian (iLQG) as a whole-body controller to generate full-body control inputs. Then we use Bayesian optimization to find the cost weights for the first stage that yield robust performance in simulation/experiments in the presence of different disturbances and uncertainties. The results show that the proposed approach is able to generate robust motions for different sets of disturbances and uncertainties.

mg

https://arxiv.org/abs/1907.04616 [BibTex]
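
To make the weight-tuning loop concrete, here is a minimal Bayesian-optimization sketch in the spirit of the approach: a GP surrogate over cost weights with a lower-confidence-bound acquisition. The `rollout_cost` function is a hypothetical placeholder for running the TO stage plus the whole-body simulation under disturbances and returning a robustness score (lower is better); it is not part of the paper's code.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def rollout_cost(w):
    # Placeholder for: solve TO with weights w, run whole-body simulation with
    # disturbances, and return an aggregate tracking/robustness cost.
    return float(np.sum((w - np.array([0.3, 0.7])) ** 2) + 0.01 * rng.standard_normal())

W = rng.uniform(0, 1, size=(5, 2))                 # initial random weight settings
C = np.array([rollout_cost(w) for w in W])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(25):
    gp.fit(W, C)                                   # GP surrogate of the rollout cost
    cand = rng.uniform(0, 1, size=(1000, 2))       # candidate weight vectors
    mu, sigma = gp.predict(cand, return_std=True)
    w_next = cand[np.argmin(mu - 2.0 * sigma)]     # lower-confidence-bound acquisition
    W = np.vstack([W, w_next])
    C = np.append(C, rollout_cost(w_next))

best_weights = W[np.argmin(C)]
```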

From Variational to Deterministic Autoencoders

Ghosh*, P., Sajjadi*, M. S. M., Vergari, A., Black, M. J., Schölkopf, B.

2019, *equal contribution (conference) Submitted

Abstract
Variational Autoencoders (VAEs) provide a theoretically-backed framework for deep generative models. However, they often produce “blurry” images, which is linked to their training objective. Sampling in the most popular implementation, the Gaussian VAE, can be interpreted as simply injecting noise to the input of a deterministic decoder. In practice, this simply enforces a smooth latent space structure. We challenge the adoption of the full VAE framework on this specific point in favor of a simpler, deterministic one. Specifically, we investigate how substituting stochasticity with other explicit and implicit regularization schemes can lead to a meaningful latent space without having to force it to conform to an arbitrarily chosen prior. To retrieve a generative mechanism for sampling new data points, we propose to employ an efficient ex-post density estimation step that can be readily adopted both for the proposed deterministic autoencoders as well as to improve sample quality of existing VAEs. We show in a rigorous empirical study that regularized deterministic autoencoding achieves state-of-the-art sample quality on the common MNIST, CIFAR-10 and CelebA datasets.

ei ps

arXiv [BibTex]
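
The ex-post density estimation step mentioned in the abstract can be as simple as fitting a mixture model to the latent codes of the trained deterministic autoencoder and sampling from it. A minimal sketch, assuming a hypothetical trained `encode`/`decode` pair (random codes stand in for real encoder outputs here):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Stand-in for latent codes z = encode(x_train) of a trained deterministic autoencoder.
z_train = np.random.randn(1000, 16)

# Ex-post density estimation: fit a mixture model over the latent space ...
gmm = GaussianMixture(n_components=10, covariance_type="full").fit(z_train)

# ... then sample new codes from it; decoding them would yield new data points.
z_new, _ = gmm.sample(64)
# x_new = decode(z_new)   # hypothetical decoder
```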


3D Birds-Eye-View Instance Segmentation

Elich, C., Engelmann, F., Kontogianni, T., Leibe, B.

In German Conference on Pattern Recognition (GCPR), 2019, arXiv:1904.02199, to appear (inproceedings)

ev

[BibTex]

Fisher Efficient Inference of Intractable Models

Liu, S., Kanamori, T., Jitkrittum, W., Chen, Y.

Advances in Neural Information Processing Systems 32, 33rd Annual Conference on Neural Information Processing Systems, 2019 (conference) To be published

ei

arXiv [BibTex]

2009


A computational model of human table tennis for robot application

Mülling, K., Peters, J.

In AMS 2009, pages: 57-64, (Editors: Dillmann, R. , J. Beyerer, C. Stiller, M. Zöllner, T. Gindele), Springer, Berlin, Germany, Autonome Mobile Systeme, December 2009 (inproceedings)

Abstract
Table tennis is a difficult motor skill which requires all basic components of a general motor skill learning system. In order to get a step closer to such a generic approach to the automatic acquisition and refinement of table tennis, we study table tennis from a human motor control point of view. We make use of the basic models of discrete human movement phases, virtual hitting points, and the operational timing hypothesis. Using these components, we create a computational model which is aimed at reproducing human-like behavior. We verify the functionality of this model in a physically realistic simulation of a Barrett WAM.

ei

Web DOI [BibTex]

A PAC-Bayesian Approach to Formulation of Clustering Objectives

Seldin, Y., Tishby, N.

In Proceedings of the NIPS 2009 Workshop "Clustering: Science or Art? Towards Principled Approaches", pages: 1-4, NIPS Workshop "Clustering: Science or Art? Towards Principled Approaches", December 2009 (inproceedings)

Abstract
Clustering is a widely used tool for exploratory data analysis. However, the theoretical understanding of clustering is very limited. We still do not have a well-founded answer to the seemingly simple question of “how many clusters are present in the data?”, and furthermore a formal comparison of clusterings based on different optimization objectives is far beyond our abilities. The lack of good theoretical support gives rise to multiple heuristics that confuse the practitioners and stall development of the field. We suggest that the ill-posed nature of clustering problems is caused by the fact that clustering is often taken out of its subsequent application context. We argue that one does not cluster the data just for the sake of clustering it, but rather to facilitate the solution of some higher level task. By evaluation of the clustering’s contribution to the solution of the higher level task it is possible to compare different clusterings, even those obtained by different optimization objectives. In the preceding work it was shown that such an approach can be applied to evaluation and design of co-clustering solutions. Here we suggest that this approach can be extended to other settings, where clustering is applied.

ei

PDF Web [BibTex]

Notes on Graph Cuts with Submodular Edge Weights

Jegelka, S., Bilmes, J.

In pages: 1-6, NIPS Workshop on Discrete Optimization in Machine Learning: Submodularity, Sparsity & Polyhedra (DISCML), December 2009 (inproceedings)

Abstract
Generalizing the cost in the standard min-cut problem to a submodular cost function immediately makes the problem harder. Not only do we prove NP-hardness even for nonnegative submodular costs, but we also show a lower bound of Ω(|V|^{1/3}) on the approximation factor for the (s, t)-cut version of the problem. On the positive side, we propose and compare three approximation algorithms with an overall approximation factor of O(min{|V|, √(|E| log |V|)}) that appear to do well in practice.

ei

PDF Web [BibTex]

Learning new basic Movements for Robotics

Kober, J., Peters, J.

In AMS 2009, pages: 105-112, (Editors: Dillmann, R. , J. Beyerer, C. Stiller, M. Zöllner, T. Gindele), Springer, Berlin, Germany, Autonome Mobile Systeme, December 2009 (inproceedings)

Abstract
Obtaining novel skills is one of the most important problems in robotics. Machine learning techniques may be a promising approach for automatic and autonomous acquisition of movement policies. However, this requires both an appropriate policy representation and suitable learning algorithms. Employing the most recent form of the dynamical systems motor primitives originally introduced by Ijspeert et al. [1], we show how both discrete and rhythmic tasks can be learned using a concerted approach of both imitation and reinforcement learning, and present our current best performing learning algorithms. Finally, we show that it is possible to include a start-up phase in rhythmic primitives. We apply our approach to two elementary movements, i.e., Ball-in-a-Cup and Ball-Paddling, which can be learned on a real Barrett WAM robot arm at a pace similar to human learning.

ei

PDF Web DOI [BibTex]
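
For context, the dynamical-systems motor primitives referred to above are, in their discrete form, a damped spring model driven by a learnable forcing term. The sketch below simply integrates such a primitive with random weights so the representation is concrete; the learning itself (imitation plus reinforcement learning of the weights) is the subject of the paper and is not shown. Gains, basis parameters, and time constants are illustrative choices.

```python
import numpy as np

# Minimal discrete dynamical-systems motor primitive (after Ijspeert et al.).
alpha_z, beta_z, alpha_x, tau = 25.0, 6.25, 8.0, 1.0
n_basis = 10
c = np.exp(-alpha_x * np.linspace(0, 1, n_basis))      # basis centres in phase space
h = 1.0 / np.diff(c, append=c[-1] * 0.5) ** 2          # basis widths
w = np.random.randn(n_basis)                           # weights (learned in practice)

y, z, x, g, y0, dt = 0.0, 0.0, 1.0, 1.0, 0.0, 0.001
traj = []
for _ in range(int(1.0 / dt)):
    psi = np.exp(-h * (x - c) ** 2)
    f = (psi @ w) / (psi.sum() + 1e-10) * x * (g - y0)  # forcing term
    zd = alpha_z * (beta_z * (g - y) - z) + f           # transformation system
    yd = z
    xd = -alpha_x * x                                   # canonical system
    z, y, x = z + zd * dt / tau, y + yd * dt / tau, x + xd * dt / tau
    traj.append(y)                                      # generated movement trajectory
```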

From Motor Learning to Interaction Learning in Robots

Sigaud, O., Peters, J.

In Proceedings of 7ème Journées Nationales de la Recherche en Robotique, pages: 189-195, JNRR, November 2009 (inproceedings)

Abstract
The number of advanced robot systems has been increasing in recent years yielding a large variety of versatile designs with many degrees of freedom. These robots have the potential of being applicable in uncertain tasks outside well-structured industrial settings. However, the complexity of both systems and tasks is often beyond the reach of classical robot programming methods. As a result, a more autonomous solution for robot task acquisition is needed where robots adaptively adjust their behaviour to the encountered situations and required tasks. Learning approaches pose one of the most appealing ways to achieve this goal. However, while learning approaches are of high importance for robotics, we cannot simply use off-the-shelf methods from the machine learning community as these usually do not scale into the domains of robotics due to excessive computational cost as well as a lack of scalability. Instead, domain appropriate approaches are needed. We focus here on several core domains of robot learning. For accurate task execution, we need motor learning capabilities. For fast learning of the motor tasks, imitation learning offers the most promising approach. Self improvement requires reinforcement learning approaches that scale into the domain of complex robots. Finally, for efficient interaction of humans with robot systems, we will need a form of interaction learning. This contribution provides a general introduction to these issues and briefly presents the contributions of the related book chapters to the corresponding research topics.

ei

PDF Web [BibTex]

Detecting Objects in Large Image Collections and Videos by Efficient Subimage Retrieval

Lampert, CH.

In ICCV 2009, pages: 987-994, IEEE Computer Society, Piscataway, NJ, USA, Twelfth IEEE International Conference on Computer Vision, October 2009 (inproceedings)

Abstract
We study the task of detecting the occurrence of objects in large image collections or in videos, a problem that combines aspects of content based image retrieval and object localization. While most previous approaches are either limited to special kinds of queries, or do not scale to large image sets, we propose a new method, efficient subimage retrieval (ESR), which is at the same time very flexible and very efficient. Relying on a two-layered branch-and-bound setup, ESR performs object-based image retrieval in sets of 100,000 or more images within seconds. An extensive evaluation on several datasets shows that ESR is not only very fast, but it also achieves detection accuracies that are on par with or superior to previously published methods for object-based image retrieval.

ei

PDF Web DOI [BibTex]

A new non-monotonic algorithm for PET image reconstruction

Sra, S., Kim, D., Dhillon, I., Schölkopf, B.

In IEEE - Nuclear Science Symposium Conference Record (NSS/MIC), 2009, pages: 2500-2502, (Editors: B Yu), IEEE, Piscataway, NJ, USA, IEEE Nuclear Science Symposium and Medical Imaging Conference, October 2009 (inproceedings)

Abstract
Maximizing some form of Poisson likelihood (either with or without penalization) is central to image reconstruction algorithms in emission tomography. In this paper we introduce NMML, a non-monotonic algorithm for maximum likelihood PET image reconstruction. NMML offers a simple and flexible procedure that also easily incorporates standard convex regularization for doing penalized likelihood estimation. A vast number of image reconstruction algorithms have been developed for PET, and new ones continue to be designed. Among these, methods based on the expectation maximization (EM) and ordered-subsets (OS) framework seem to have enjoyed the greatest popularity. Our method NMML differs fundamentally from methods based on EM: i) it does not depend on the concept of optimization transfer (or surrogate functions); and ii) it is a rapidly converging nonmonotonic descent procedure. The greatest strengths of NMML, however, are its simplicity, efficiency, and scalability, which make it especially attractive for tomographic reconstruction. We provide a theoretical analysis of NMML, and empirically observe it to outperform standard EM based methods, sometimes by orders of magnitude. NMML seamlessly allows integration of penalties (regularizers) in the likelihood. This ability can prove to be crucial, especially because with the rapidly rising importance of combined PET/MR scanners, one will want to include more “prior” knowledge into the reconstruction.

ei

PDF DOI [BibTex]
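
As a point of reference for the Poisson-likelihood objective discussed above, the sketch below runs the classic multiplicative MLEM update (the EM-type baseline the paper compares against) on a tiny synthetic system and evaluates the Poisson log-likelihood. It is not an implementation of NMML; the system matrix and sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins, n_vox = 200, 100
A = rng.random((n_bins, n_vox))           # hypothetical system (projection) matrix
x_true = rng.random(n_vox)
y = rng.poisson(A @ x_true)               # measured counts

x = np.ones(n_vox)                        # strictly positive initialisation
sens = A.T @ np.ones(n_bins)              # sensitivity image A^T 1
for _ in range(50):
    ratio = y / np.maximum(A @ x, 1e-12)  # y_i / (A x)_i
    x = x / sens * (A.T @ ratio)          # multiplicative MLEM update

# Poisson log-likelihood (up to constants) that both MLEM and NMML maximise:
ax = np.maximum(A @ x, 1e-12)
loglik = np.sum(y * np.log(ax) - ax)
```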

Approximation Algorithms for Tensor Clustering

Jegelka, S., Sra, S., Banerjee, A.

In Algorithmic Learning Theory: 20th International Conference, pages: 368-383, (Editors: Gavalda, R. , G. Lugosi, T. Zeugmann, S. Zilles), Springer, Berlin, Germany, ALT, October 2009 (inproceedings)

Abstract
We present the first (to our knowledge) approximation algorithm for tensor clustering, a powerful generalization of basic 1D clustering. Tensors are increasingly common in modern applications dealing with complex heterogeneous data and clustering them is a fundamental tool for data analysis and pattern discovery. Akin to their 1D cousins, common tensor clustering formulations are NP-hard to optimize. But, unlike the 1D case, no approximation algorithms seem to be known. We address this imbalance and build on recent co-clustering work to derive a tensor clustering algorithm with approximation guarantees, allowing metrics and divergences (e.g., Bregman) as objective functions. Therewith, we answer two open questions by Anagnostopoulos et al. (2008). Our analysis yields a constant approximation factor independent of data size; a worst-case example shows this factor to be tight for Euclidean co-clustering. However, empirically the approximation factor is observed to be conservative, so our method can also be used in practice.

ei

PDF Web DOI [BibTex]

Active learning using mean shift optimization for robot grasping

Kroemer, O., Detry, R., Piater, J., Peters, J.

In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2009), pages: 2610-2615, IEEE Service Center, Piscataway, NJ, USA, 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 2009 (inproceedings)

Abstract
When children learn to grasp a new object, they often know several possible grasping points from observing a parent's demonstration and subsequently learn better grasps by trial and error. From a machine learning point of view, this process is an active learning approach. In this paper, we present a new robot learning framework for reproducing this ability in robot grasping. For doing so, we chose a straightforward approach: first, the robot observes a few good grasps by demonstration and learns a value function for these grasps using Gaussian process regression. Subsequently, it chooses grasps which are optimal with respect to this value function using a mean-shift optimization approach, and tries them out on the real system. Upon every completed trial, the value function is updated, and in the following trials it is more likely to choose even better grasping points. This method exhibits fast learning due to the data-efficiency of the Gaussian process regression framework and the fact that the mean-shift method provides maxima of this cost function. Experiments were repeatedly carried out successfully on a real robot system. After less than sixty trials, our system has adapted its grasping policy to consistently exhibit successful grasps.

ei

PDF Web DOI [BibTex]
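
The learning loop described above is easy to picture: fit a GP value function to demonstrated grasps, pick a promising new grasp, execute it, and refit. The sketch below follows that pattern but replaces the mean-shift maximization with a simple argmax over random candidates and the real robot trial with a random stand-in, so it is only a schematic of the approach, with invented data.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
X_demo = rng.uniform(-1, 1, size=(8, 3))        # hypothetical demonstrated grasp parameters
y_demo = rng.uniform(0.5, 1.0, size=8)          # observed grasp quality for the demos

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-3)
for trial in range(20):
    gp.fit(X_demo, y_demo)                      # GP value function over grasp parameters
    # The paper uses mean-shift to find maxima of the GP value function; here we
    # simply take the best of a random candidate set (simplified stand-in).
    cand = rng.uniform(-1, 1, size=(500, 3))
    mu = gp.predict(cand)
    x_next = cand[np.argmax(mu)]
    r = float(rng.random() < mu.max())          # placeholder for executing the grasp
    X_demo = np.vstack([X_demo, x_next])        # update the data set with the new trial
    y_demo = np.append(y_demo, r)
```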

Sparse online model learning for robot control with support vector regression

Nguyen-Tuong, D., Schölkopf, B., Peters, J.

In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2009), pages: 3121-3126, IEEE Service Center, Piscataway, NJ, USA, 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 2009 (inproceedings)

Abstract
The increasing complexity of modern robots makes it prohibitively hard to accurately model such systems as required by many applications. In such cases, machine learning methods offer a promising alternative for approximating such models using measured data. To date, high computational demands have largely restricted machine learning techniques to mostly offline applications. However, making the robots adaptive to changes in the dynamics and to cope with unexplored areas of the state space requires online learning. In this paper, we propose an approximation of the support vector regression (SVR) by sparsification based on the linear independency of training data. As a result, we obtain a method which is applicable in real-time online learning. It exhibits competitive learning accuracy when compared with standard regression techniques, such as nu-SVR, Gaussian process regression (GPR) and locally weighted projection regression (LWPR).

ei

Web DOI [BibTex]
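
The sparsification based on "linear independency of training data" is closely related to the approximate-linear-dependence test used in kernel recursive least squares: a new sample is added to the dictionary only if it cannot be well represented as a kernel-space combination of the points already stored. A minimal sketch of that test (not the authors' exact procedure; kernel, tolerance, and data are illustrative):

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return np.exp(-gamma * d ** 2)

def sparsify(X, tol=1e-3, gamma=1.0):
    """Keep only points that are (approximately) linearly independent in feature space."""
    dictionary = [X[0]]
    for x in X[1:]:
        D = np.array(dictionary)
        K = rbf(D, D, gamma)                    # kernel matrix of the current dictionary
        k = rbf(D, x[None, :], gamma).ravel()   # kernel vector to the new point
        a = np.linalg.solve(K + 1e-8 * np.eye(len(D)), k)
        delta = 1.0 - k @ a                     # projection residual; k(x, x) = 1 for RBF
        if delta > tol:                         # not well represented -> add to dictionary
            dictionary.append(x)
    return np.array(dictionary)

X = np.random.randn(500, 4)                     # made-up training inputs
D = sparsify(X, tol=0.1)                        # sparse dictionary used for online SVR
```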

Implicit Wiener Series Analysis of Epileptic Seizure Recordings

Barbero, A., Franz, M., Drongelen, W., Dorronsoro, J., Schölkopf, B., Grosse-Wentrup, M.

In EMBC 2009, pages: 5304-5307, (Editors: Y Kim and B He and G Worrell and X Pan), IEEE Service Center, Piscataway, NJ, USA, 31st Annual International Conference of the IEEE Engineering in Medicine and Biology Society, September 2009 (inproceedings)

Abstract
Implicit Wiener series are a powerful tool to build Volterra representations of time series with any degree of nonlinearity. A natural question is then whether higher order representations yield more useful models. In this work we shall study this question for ECoG data channel relationships in epileptic seizure recordings, considering whether quadratic representations yield more accurate classifiers than linear ones. To do so we first show how to derive statistical information on the Volterra coefficient distribution and how to construct seizure classification patterns over that information. As our results illustrate, a quadratic model seems to provide no advantages over a linear one. Nevertheless, we shall also show that the interpretability of the implicit Wiener series provides insights into the inter-channel relationships of the recordings.

ei

PDF Web DOI [BibTex]
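
To see why the comparison between linear and quadratic representations is natural, note that a Volterra series truncated at second order is just a linear model in polynomial features of the lagged input. The sketch below makes that comparison on synthetic data with ridge regression; it is an explicit-feature toy version, not the implicit (kernel-based) Wiener series estimator used in the paper, and it does not use any ECoG data.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.preprocessing import PolynomialFeatures
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
u = rng.standard_normal(2000)                          # synthetic input time series
y = 0.5 * u[1:] - 0.3 * u[:-1] + 0.2 * u[1:] * u[:-1]  # system with a quadratic term
X = np.column_stack([u[1:], u[:-1]])                   # two lags as regressors

for order, name in [(1, "linear"), (2, "quadratic")]:
    Phi = PolynomialFeatures(degree=order, include_bias=True).fit_transform(X)
    score = cross_val_score(Ridge(alpha=1e-3), Phi, y, cv=5).mean()
    print(name, "R^2:", round(score, 3))               # quadratic features fit the system
```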

Incorporating Prior Knowledge on Class Probabilities into Local Similarity Measures for Intermodality Image Registration

Hofmann, M., Schölkopf, B., Bezrukov, I., Cahill, N.

In Proceedings of the MICCAI 2009 Workshop on Probabilistic Models for Medical Image Analysis, pages: 220-231, (Editors: W Wells and S Joshi and K Pohl), PMMIA, September 2009 (inproceedings)

Abstract
We present a methodology for incorporating prior knowledge on class probabilities into the registration process. By using knowledge from the imaging modality, pre-segmentations, and/or probabilistic atlases, we construct vectors of class probabilities for each image voxel. By defining new image similarity measures for distribution-valued images, we show how the class probability images can be nonrigidly registered in a variational framework. An experiment on nonrigid registration of MR and CT full-body scans illustrates that the proposed technique outperforms standard mutual information (MI) and normalized mutual information (NMI) based registration techniques when measured in terms of target registration error (TRE) of manually labeled fiducials.

ei

PDF Web [BibTex]
