

2016


Contextual Policy Search for Linear and Nonlinear Generalization of a Humanoid Walking Controller

Abdolmaleki, A., Lau, N., Reis, L., Peters, J., Neumann, G.

Journal of Intelligent & Robotic Systems, 83(3-4):393-408, (Editors: Luis Almeida, Lino Marques), September 2016, Special Issue: Autonomous Robot Systems (article)

ei

DOI [BibTex]

Acquiring and Generalizing the Embodiment Mapping from Human Observations to Robot Skills

Maeda, G., Ewerton, M., Koert, D., Peters, J.

IEEE Robotics and Automation Letters, 1(2):784-791, July 2016 (article)

ei

DOI [BibTex]

On estimation of functional causal models: General results and application to post-nonlinear causal model

Zhang, K., Wang, Z., Zhang, J., Schölkopf, B.

ACM Transactions on Intelligent Systems and Technology, 7(2):article no. 13, January 2016 (article)

ei

PDF DOI [BibTex]

Gaussian Process-Based Predictive Control for Periodic Error Correction

Klenske, E. D., Zeilinger, M., Schölkopf, B., Hennig, P.

IEEE Transactions on Control Systems Technology, 24(1):110-121, 2016 (article)

ei pn

PDF DOI [BibTex]

Pymanopt: A Python Toolbox for Optimization on Manifolds using Automatic Differentiation

Townsend, J., Koep, N., Weichwald, S.

Journal of Machine Learning Research, 17(137):1-5, 2016 (article)

ei

PDF Arxiv Code Project page link (url) [BibTex]


A Causal, Data-driven Approach to Modeling the Kepler Data

Wang, D., Hogg, D. W., Foreman-Mackey, D., Schölkopf, B.

Publications of the Astronomical Society of the Pacific, 128(967):094503, 2016 (article)

ei

link (url) DOI Project Page [BibTex]

Probabilistic Inference for Determining Options in Reinforcement Learning

Daniel, C., van Hoof, H., Peters, J., Neumann, G.

Machine Learning, Special Issue, 104(2):337-357, (Editors: Gärtner, T., Nanni, M., Passerini, A., and Robardet, C.), European Conference on Machine Learning (ECML), Journal Track, 2016, Best Student Paper Award of ECML-PKDD 2016 (article)

am ei

DOI Project Page [BibTex]

Influence of initial fixation position in scene viewing

Rothkegel, L. O. M., Trukenbrod, H. A., Schütt, H. H., Wichmann, F. A., Engbert, R.

Vision Research, 129, pages: 33-49, 2016 (article)

ei

link (url) DOI Project Page [BibTex]

Testing models of peripheral encoding using metamerism in an oddity paradigm

Wallis, T. S. A., Bethge, M., Wichmann, F. A.

Journal of Vision, 16(2), 2016 (article)

ei

DOI Project Page [BibTex]

Modeling Confounding by Half-Sibling Regression

Schölkopf, B., Hogg, D., Wang, D., Foreman-Mackey, D., Janzing, D., Simon-Gabriel, C. J., Peters, J.

Proceedings of the National Academy of Sciences, 113(27):7391-7398, 2016 (article)

ei

Code link (url) DOI Project Page [BibTex]

Dual Control for Approximate Bayesian Reinforcement Learning

Klenske, E. D., Hennig, P.

Journal of Machine Learning Research, 17(127):1-30, 2016 (article)

ei pn

PDF link (url) [BibTex]

A Population Based Gaussian Mixture Model Incorporating 18F-FDG-PET and DW-MRI Quantifies Tumor Tissue Classes

Divine, M. R., Katiyar, P., Kohlhofer, U., Quintanilla-Martinez, L., Disselhorst, J. A., Pichler, B. J.

Journal of Nuclear Medicine, 57(3):473-479, 2016 (article)

ei

DOI [BibTex]

Painfree and accurate Bayesian estimation of psychometric functions for (potentially) overdispersed data

Schütt, H. H., Harmeling, S., Macke, J. H., Wichmann, F. A.

Vision Research, 122, pages: 105-123, 2016 (article)

ei

link (url) DOI Project Page [BibTex]

Hierarchical Relative Entropy Policy Search

Daniel, C., Neumann, G., Kroemer, O., Peters, J.

Journal of Machine Learning Research, 17(93):1-50, 2016 (article)

ei

link (url) Project Page [BibTex]

Kernel Mean Shrinkage Estimators

Muandet, K., Sriperumbudur, B., Fukumizu, K., Gretton, A., Schölkopf, B.

Journal of Machine Learning Research, 17(48):1-41, 2016 (article)

ei

link (url) [BibTex]

Learning to Deblur

Schuler, C. J., Hirsch, M., Harmeling, S., Schölkopf, B.

IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(7):1439-1451, IEEE, 2016 (article)

ei

DOI [BibTex]

Transfer Learning in Brain-Computer Interfaces

Jayaram, V., Alamgir, M., Altun, Y., Schölkopf, B., Grosse-Wentrup, M.

IEEE Computational Intelligence Magazine, 11(1):20-31, 2016 (article)

ei

PDF DOI [BibTex]

MERLiN: Mixture Effect Recovery in Linear Networks

Weichwald, S., Grosse-Wentrup, M., Gretton, A.

IEEE Journal of Selected Topics in Signal Processing, 10(7):1254-1266, 2016 (article)

ei

Arxiv Code PDF DOI Project Page [BibTex]

Causal inference using invariant prediction: identification and confidence intervals

Peters, J., Bühlmann, P., Meinshausen, N.

Journal of the Royal Statistical Society, Series B (Statistical Methodology), 78(5):947-1012, 2016, (with discussion) (article)

ei

link (url) DOI [BibTex]

Causal discovery and inference: concepts and recent methodological advances

Spirtes, P., Zhang, K.

Applied Informatics, 3(3):1-28, 2016 (article)

ei

DOI [BibTex]

Self-regulation of brain rhythms in the precuneus: a novel BCI paradigm for patients with ALS

Fomina, T., Lohmann, G., Erb, M., Ethofer, T., Schölkopf, B., Grosse-Wentrup, M.

Journal of Neural Engineering, 13(6):066021, 2016 (article)

ei

link (url) Project Page [BibTex]


Influence Estimation and Maximization in Continuous-Time Diffusion Networks

Gomez-Rodriguez, M., Song, L., Du, N., Zha, H., Schölkopf, B.

ACM Transactions on Information Systems, 34(2):9:1-9:33, 2016 (article)

ei

DOI Project Page Project Page [BibTex]

The population of long-period transiting exoplanets

Foreman-Mackey, D., Morton, T. D., Hogg, D. W., Agol, E., Schölkopf, B.

The Astronomical Journal, 152(6):206, 2016 (article)

ei

link (url) Project Page [BibTex]

Screening Rules for Convex Problems

Raj, A., Olbrich, J., Gärtner, B., Schölkopf, B., Jaggi, M.

2016, submitted (unpublished)

ei

[BibTex]

An overview of quantitative approaches in Gestalt perception

Jäkel, F., Singh, M., Wichmann, F. A., Herzog, M. H.

Vision Research, 126, pages: 3-8, 2016 (article)

ei

link (url) DOI Project Page [BibTex]

Bootstrat: Population Informed Bootstrapping for Rare Variant Tests

Huang, H., Peloso, G. M., Howrigan, D., Rakitsch, B., Simon-Gabriel, C. J., Goldstein, J. I., Daly, M. J., Borgwardt, K., Neale, B. M.

bioRxiv, 2016, preprint (article)

ei

link (url) DOI [BibTex]

Probabilistic Movement Models Show that Postural Control Precedes and Predicts Volitional Motor Control

Rueckert, E., Camernik, J., Peters, J., Babic, J.

Scientific Reports, 6, Article number 28455, 2016 (article)

ei

DOI Project Page [BibTex]

Learning Taxonomy Adaptation in Large-scale Classification

Babbar, R., Partalas, I., Gaussier, E., Amini, M., Amblard, C.

Journal of Machine Learning Research, 17(98):1-37, 2016 (article)

ei

link (url) Project Page [BibTex]

BOiS—Berlin Object in Scene Database: Controlled Photographic Images for Visual Search Experiments with Quantified Contextual Priors

Mohr, J., Seyfarth, J., Lueschow, A., Weber, J. E., Wichmann, F. A., Obermayer, K.

Frontiers in Psychology, 2016 (article)

ei

DOI [BibTex]

Preface to the ACM TIST Special Issue on Causal Discovery and Inference

Zhang, K., Li, J., Bareinboim, E., Schölkopf, B., Pearl, J.

ACM Transactions on Intelligent Systems and Technology, 7(2):article no. 17, 2016 (article)

ei

DOI [BibTex]

Recurrent Spiking Networks Solve Planning Tasks

Rueckert, E., Kappel, D., Tanneberg, D., Pecevski, D., Peters, J.

Scientific Reports, 6, Article number 21142, 2016 (article)

ei

DOI Project Page [BibTex]

Bio-inspired feedback-circuit implementation of discrete, free energy optimizing, winner-take-all computations

Genewein, T., Braun, D. A.

Biological Cybernetics, 110(2):135-150, June 2016 (article)

Abstract
Bayesian inference and bounded rational decision-making require the accumulation of evidence or utility, respectively, to transform a prior belief or strategy into a posterior probability distribution over hypotheses or actions. Crucially, this process cannot be simply realized by independent integrators, since the different hypotheses and actions also compete with each other. In continuous time, this competitive integration process can be described by a special case of the replicator equation. Here we investigate simple analog electric circuits that implement the underlying differential equation under the constraint that we only permit a limited set of building blocks that we regard as biologically interpretable, such as capacitors, resistors, voltage-dependent conductances and voltage- or current-controlled current and voltage sources. The appeal of these circuits is that they intrinsically perform normalization without requiring an explicit divisive normalization. However, even in idealized simulations, we find that these circuits are very sensitive to internal noise as they accumulate error over time. We discuss to what extent neural circuits could implement these operations, which might provide a generic competitive principle underlying both perception and action.
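
For orientation, the competitive integration referred to above follows replicator-type dynamics of the general form (a generic textbook form, not necessarily the exact variant analyzed in the article):

\[ \dot{p}_i(t) = p_i(t)\left( f_i(t) - \sum_j p_j(t)\, f_j(t) \right) \]

where p_i is the probability weight of hypothesis or action i and f_i its momentary evidence or utility. Subtracting the population average couples the otherwise independent integrators and keeps the weights summing to one, which is why no explicit divisive normalization is needed.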

ei

DOI [BibTex]

Decision-Making under Ambiguity Is Modulated by Visual Framing, but Not by Motor vs. Non-Motor Context: Experiments and an Information-Theoretic Ambiguity Model

Grau-Moya, J., Ortega, P. A., Braun, D. A.

PLoS ONE, 11(4):1-21, April 2016 (article)

Abstract
A number of recent studies have investigated differences in human choice behavior depending on task framing, especially comparing economic decision-making to choice behavior in equivalent sensorimotor tasks. Here we test whether decision-making under ambiguity exhibits effects of task framing in motor vs. non-motor context. In a first experiment, we designed an experience-based urn task with varying degrees of ambiguity and an equivalent motor task where subjects chose between hitting partially occluded targets. In a second experiment, we controlled for the different stimulus design in the two tasks by introducing an urn task with bar stimuli matching those in the motor task. We found ambiguity attitudes to be mainly influenced by stimulus design. In particular, we found that the same subjects tended to be ambiguity-preferring when choosing between ambiguous bar stimuli, but ambiguity-avoiding when choosing between ambiguous urn sample stimuli. In contrast, subjects’ choice pattern was not affected by changing from a target hitting task to a non-motor context when keeping the stimulus design unchanged. In both tasks subjects’ choice behavior was continuously modulated by the degree of ambiguity. We show that this modulation of behavior can be explained by an information-theoretic model of ambiguity that generalizes Bayes-optimal decision-making by combining Bayesian inference with robust decision-making under model uncertainty. Our results demonstrate the benefits of information-theoretic models of decision-making under varying degrees of ambiguity for a given context, but also demonstrate the sensitivity of ambiguity attitudes across contexts that theoretical models struggle to explain.

ei

DOI [BibTex]

2003


Concentration Inequalities for Sub-Additive Functions Using the Entropy Method

Bousquet, O.

Stochastic Inequalities and Applications, 56, pages: 213-247, Progress in Probability, (Editors: Giné, E., Houdré, C., and Nualart, D.), November 2003 (article)

Abstract
We obtain exponential concentration inequalities for sub-additive functions of independent random variables under weak conditions on the increments of those functions, like the existence of exponential moments for these increments. As a consequence of these general inequalities, we obtain refinements of Talagrand's inequality for empirical processes and new bounds for randomized empirical processes. These results are obtained by further developing the entropy method introduced by Ledoux.
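
To give the flavor of such results, exponential concentration bounds of this kind typically take a Bernstein-like form (shown here only as a generic shape, not the article's exact statement or constants):

\[ \mathbb{P}\left( Z \ge \mathbb{E}Z + t \right) \le \exp\!\left( -\frac{t^2}{2\,(v + c\,t/3)} \right) \]

where Z is the sub-additive function of the independent variables (for instance the supremum of an empirical process), v plays the role of a variance term, and c bounds the change of Z when a single variable is modified.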

ei

PostScript [BibTex]

Statistical Learning Theory, Capacity and Complexity

Schölkopf, B.

Complexity, 8(4):87-94, July 2003 (article)

Abstract
We give an exposition of the ideas of statistical learning theory, followed by a discussion of how a reinterpretation of the insights of learning theory could potentially also benefit our understanding of a certain notion of complexity.

ei

Web DOI [BibTex]


Dealing with large Diagonals in Kernel Matrices

Weston, J., Schölkopf, B., Eskin, E., Leslie, C., Noble, W.

Annals of the Institute of Statistical Mathematics, 55(2):391-408, June 2003 (article)

Abstract
In kernel methods, all the information about the training data is contained in the Gram matrix. If this matrix has large diagonal values, a situation that arises for many types of kernels, then kernel methods do not perform well. We propose and test several methods for dealing with this problem by reducing the dynamic range of the matrix while preserving the positive definiteness of the Hessian of the quadratic programming problem that one has to solve when training a Support Vector Machine, a common kernel approach for pattern recognition.
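
As a concrete illustration of shrinking the dynamic range while retaining positive semi-definiteness, the sketch below flattens the entries of a Gram matrix with a sub-polynomial transform and then re-embeds the result through its own rows (an empirical kernel map), which is positive semi-definite by construction. This is one simple recipe in the spirit of the abstract, not the article's exact procedure; the kernel and parameters are purely illustrative.

    import numpy as np

    def reduce_diagonal_dominance(K, p=0.5):
        # Flatten the dynamic range of a kernel matrix with a sub-polynomial
        # transform; the result may lose positive semi-definiteness, so take
        # the Gram matrix of the transformed rows (A @ A.T is always PSD).
        A = np.sign(K) * np.abs(K) ** p
        return A @ A.T

    # Toy example: an RBF-type kernel whose diagonal dominates the off-diagonal.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 200))          # few samples, many features
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / (2.0 * X.shape[1]))

    off = lambda M: M[~np.eye(len(M), dtype=bool)]
    K2 = reduce_diagonal_dominance(K, p=0.3)
    print("diag/off-diag ratio before:", np.diag(K).mean() / off(K).mean())
    print("diag/off-diag ratio after: ", np.diag(K2).mean() / off(K2).mean())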

ei

PDF DOI [BibTex]

The em Algorithm for Kernel Matrix Completion with Auxiliary Data

Tsuda, K., Akaho, S., Asai, K.

Journal of Machine Learning Research, 4, pages: 67-81, May 2003 (article)

ei

PDF [BibTex]

Constructing Descriptive and Discriminative Non-linear Features: Rayleigh Coefficients in Kernel Feature Spaces

Mika, S., Rätsch, G., Weston, J., Schölkopf, B., Smola, A., Müller, K.

IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(5):623-628, May 2003 (article)

Abstract
We incorporate prior knowledge to construct nonlinear algorithms for invariant feature extraction and discrimination. Employing a unified framework in terms of a nonlinearized variant of the Rayleigh coefficient, we propose nonlinear generalizations of Fisher's discriminant and oriented PCA using support vector kernel functions. Extensive simulations show the utility of our approach.
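
To make the construction concrete, the following is a minimal two-class kernel Fisher discriminant in the spirit of the abstract: class means and within-class scatter are formed over the kernel expansion coefficients, and a regularized linear solve yields the discriminant direction. It is a textbook sketch under standard assumptions, not the authors' reference implementation; the kernel and toy data are illustrative.

    import numpy as np

    def kernel_fisher_discriminant(K, y, reg=1e-3):
        # K: (n, n) kernel matrix, y: labels in {0, 1}, reg: ridge term added
        # to the within-class scatter for numerical stability.
        n = len(y)
        means, N = [], np.zeros((n, n))
        for c in (0, 1):
            Kc = K[:, y == c]                     # columns of class c
            nc = Kc.shape[1]
            means.append(Kc.mean(axis=1))         # class mean in coefficient space
            C = np.eye(nc) - np.ones((nc, nc)) / nc
            N += Kc @ C @ Kc.T                    # within-class scatter
        alpha = np.linalg.solve(N + reg * np.eye(n), means[0] - means[1])
        return alpha                              # score of point j is (K @ alpha)[j]

    # Toy usage with a polynomial kernel.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-1, 1, (30, 2)), rng.normal(+1, 1, (30, 2))])
    y = np.array([0] * 30 + [1] * 30)
    K = (X @ X.T + 1.0) ** 2
    scores = K @ kernel_fisher_discriminant(K, y)
    print(scores[y == 0].mean() > scores[y == 1].mean())   # classes separate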

ei

DOI [BibTex]

Tractable Inference for Probabilistic Data Models

Csato, L., Opper, M., Winther, O.

Complexity, 8(4):64-68, April 2003 (article)

Abstract
We present an approximation technique for probabilistic data models with a large number of hidden variables, based on ideas from statistical physics. We give examples for two nontrivial applications.

ei

PDF GZIP Web [BibTex]

Feature selection and transduction for prediction of molecular bioactivity for drug design

Weston, J., Perez-Cruz, F., Bousquet, O., Chapelle, O., Elisseeff, A., Schölkopf, B.

Bioinformatics, 19(6):764-771, April 2003 (article)

Abstract
Motivation: In drug discovery a key task is to identify characteristics that separate active (binding) compounds from inactive (non-binding) ones. An automated prediction system can help reduce resources necessary to carry out this task. Results: Two methods for prediction of molecular bioactivity for drug design are introduced and shown to perform well in a data set previously studied as part of the KDD (Knowledge Discovery and Data Mining) Cup 2001. The data is characterized by very few positive examples, a very large number of features (describing three-dimensional properties of the molecules) and rather different distributions between training and test data. Two techniques are introduced specifically to tackle these problems: a feature selection method for unbalanced data and a classifier which adapts to the distribution of the unlabeled test data (a so-called transductive method). We show both techniques improve identification performance and in conjunction provide an improvement over using only one of the techniques. Our results suggest the importance of taking into account the characteristics of this data, which may also be relevant in other problems of a similar type.

ei

Web [BibTex]


Use of the Zero-Norm with Linear Models and Kernel Methods

Weston, J., Elisseeff, A., Schölkopf, B., Tipping, M.

Journal of Machine Learning Research, 3, pages: 1439-1461, March 2003 (article)

Abstract
We explore the use of the so-called zero-norm of the parameters of linear models in learning. Minimization of such a quantity has many uses in a machine learning context: for variable or feature selection, minimizing training error and ensuring sparsity in solutions. We derive a simple but practical method for achieving these goals and discuss its relationship to existing techniques of minimizing the zero-norm. The method boils down to implementing a simple modification of vanilla SVM, namely via an iterative multiplicative rescaling of the training data. Applications we investigate, which aid our discussion, include variable and feature selection on biological microarray data, and multicategory classification.
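
The iterative multiplicative rescaling mentioned in the abstract can be sketched as follows: fit a linear SVM, multiply every feature by the magnitude of its learned weight, and repeat, so that features which keep receiving small weights are driven to zero. The solver, hyperparameters, and stopping rule below are assumptions for illustration, not the paper's exact algorithm.

    import numpy as np
    from sklearn.svm import LinearSVC

    def zero_norm_svm(X, y, n_iter=10, C=1.0):
        # Approximate zero-norm minimization by iteratively rescaling each
        # feature with the magnitude of its current SVM weight; features that
        # keep receiving small weights shrink toward zero and drop out.
        z = np.ones(X.shape[1])                   # per-feature scaling factors
        for _ in range(n_iter):
            clf = LinearSVC(C=C, dual=False).fit(X * z, y)
            z = z * np.abs(clf.coef_.ravel())     # multiplicative rescaling
            z[z < 1e-12] = 0.0                    # numerically dead features
        return z                                  # nonzeros = selected features

    # Toy usage: 5 informative features hidden among 100.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 100))
    y = np.sign(X[:, :5] @ np.array([2.0, -1.5, 1.0, -1.0, 0.5]) + 0.1 * rng.normal(size=200))
    print("selected features:", np.flatnonzero(zero_norm_svm(X, y)))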

ei

PDF PostScript PDF [BibTex]

An Introduction to Variable and Feature Selection

Guyon, I., Elisseeff, A.

Journal of Machine Learning Research, 3, pages: 1157-1182, 2003 (article)

ei

[BibTex]

New Approaches to Statistical Learning Theory

Bousquet, O.

Annals of the Institute of Statistical Mathematics, 55(2):371-389, 2003 (article)

Abstract
We present new tools from probability theory that can be applied to the analysis of learning algorithms. These tools allow one to derive new bounds on the generalization performance of learning algorithms and to propose alternative measures of the complexity of the learning task, which in turn can be used to derive new learning algorithms.

ei

PostScript [BibTex]
