2011

Estimating integrated information with TMS pulses during wakefulness, sleep and under anesthesia

Balduzzi, D.

In pages: 4717-4720, IEEE, Piscataway, NJ, USA, 33rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (IEEE EMBC), September 2011 (inproceedings)

Abstract
This paper relates a recently proposed measure of information integration to experiments investigating the evoked high-density electroencephalography (EEG) response to transcranial magnetic stimulation (TMS) during wakefulness, early non-rapid eye movement (NREM) sleep and under anesthesia. We show that bistability, arising at the cellular and population level during NREM sleep and under anesthesia, dramatically reduces the brain’s ability to integrate information.

PDF Web DOI [BibTex]

Improving Denoising Algorithms via a Multi-scale Meta-procedure

Burger, H., Harmeling, S.

In Pattern Recognition, pages: 206-215, (Editors: Mester, R. , M. Felsberg), Springer, Berlin, Germany, 33rd DAGM Symposium, September 2011 (inproceedings)

Abstract
Many state-of-the-art denoising algorithms focus on recovering high-frequency details in noisy images. However, images corrupted by large amounts of noise are also degraded in the lower frequencies. Thus properly handling all frequency bands allows us to better denoise in such regimes. To improve existing denoising algorithms we propose a meta-procedure that applies existing denoising algorithms across different scales and combines the resulting images into a single denoised image. With a comprehensive evaluation we show that the performance of many state-of-the-art denoising algorithms can be improved.
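The meta-procedure can be sketched in a few lines. The 1-D toy below is our own simplification (the paper works on 2-D images, and `moving_average` merely stands in for an arbitrary base denoiser); it shows the downsample / denoise / upsample / combine idea, with all function names ours:

```python
import random

def moving_average(signal, k=3):
    """A stand-in 'base denoiser': simple centered moving average."""
    n = len(signal)
    out = []
    for i in range(n):
        window = signal[max(0, i - k // 2): i + k // 2 + 1]
        out.append(sum(window) / len(window))
    return out

def downsample(signal):
    """Halve the resolution by averaging adjacent pairs of samples."""
    return [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]

def upsample(signal, n):
    """Nearest-neighbour upsampling back to length n."""
    return [signal[min(i // 2, len(signal) - 1)] for i in range(n)]

def multiscale_denoise(signal, base_denoiser, num_scales=2):
    """Apply the base denoiser at several scales and average the estimates,
    so that low-frequency noise removed at coarse scales also benefits
    the final result."""
    n = len(signal)
    estimates = [base_denoiser(signal)]          # finest scale
    coarse = signal
    for _ in range(num_scales - 1):
        coarse = downsample(coarse)
        estimates.append(upsample(base_denoiser(coarse), n))
    return [sum(vals) / len(vals) for vals in zip(*estimates)]
```

Averaging the per-scale estimates is the simplest possible combination rule; the paper's evaluation motivates more careful combinations.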

PDF DOI [BibTex]

Multiple reference genomes and transcriptomes for Arabidopsis thaliana

Gan, X., Stegle, O., Behr, J., Steffen, J., Drewe, P., Hildebrand, K., Lyngsoe, R., Schultheiss, S., Osborne, E., Sreedharan, V., Kahles, A., Bohnert, R., Jean, G., Derwent, P., Kersey, P., Belfield, E., Harberd, N., Kemen, E., Toomajian, C., Kover, P., Clark, R., Rätsch, G., Mott, R.

Nature, 477(7365):419–423, September 2011 (article)

Abstract
Genetic differences between Arabidopsis thaliana accessions underlie the plant’s extensive phenotypic variation, and until now these have been interpreted largely in the context of the annotated reference accession Col-0. Here we report the sequencing, assembly and annotation of the genomes of 18 natural A. thaliana accessions, and their transcriptomes. When assessed on the basis of the reference annotation, one-third of protein-coding genes are predicted to be disrupted in at least one accession. However, re-annotation of each genome revealed that alternative gene models often restore coding potential. Gene expression in seedlings differed for nearly half of expressed genes and was frequently associated with cis variants within 5 kilobases, as were intron retention alternative splicing events. Sequence and expression variation is most pronounced in genes that respond to the biotic environment. Our data further promote evolutionary and functional studies in A. thaliana, especially the MAGIC genetic reference population descended from these accessions.

Web DOI [BibTex]

Weisfeiler-Lehman Graph Kernels

Shervashidze, N., Schweitzer, P., van Leeuwen, E., Mehlhorn, K., Borgwardt, K. M.

Journal of Machine Learning Research, 12, pages: 2539-2561, September 2011 (article)

Abstract
In this article, we propose a family of efficient kernels for large graphs with discrete node labels. Key to our method is a rapid feature extraction scheme based on the Weisfeiler-Lehman test of isomorphism on graphs. It maps the original graph to a sequence of graphs, whose node attributes capture topological and label information. A family of kernels can be defined based on this Weisfeiler-Lehman sequence of graphs, including a highly efficient kernel comparing subtree-like patterns. Its runtime scales only linearly in the number of edges of the graphs and the length of the Weisfeiler-Lehman graph sequence. In our experimental evaluation, our kernels outperform state-of-the-art graph kernels on several graph classification benchmark data sets in terms of accuracy and runtime. Our kernels open the door to large-scale applications of graph kernels in various disciplines such as computational biology and social network analysis.
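The core relabeling scheme can be illustrated compactly. The sketch below is ours, not the authors' reference code: graphs are `(adjacency, labels)` pairs, a shared compression table keeps labels consistent across graphs, and the kernel is a dot product of feature counts (use string node labels so they cannot clash with the integer compressed labels):

```python
from collections import Counter

def wl_features(graphs, iterations=2):
    """Weisfeiler-Lehman relabeling over a list of (adjacency, labels) graphs.
    Each round compresses a node's label together with the sorted multiset of
    its neighbours' labels into a new integer label. Returns one
    feature-count vector (a Counter) per graph."""
    compress = {}  # (label, neighbour-label tuple) -> new integer label
    counts = [Counter(labels.values()) for _, labels in graphs]
    current = [dict(labels) for _, labels in graphs]
    for _ in range(iterations):
        for g, (adjacency, _) in enumerate(graphs):
            new_labels = {}
            for node, nbrs in adjacency.items():
                sig = (current[g][node],
                       tuple(sorted(current[g][v] for v in nbrs)))
                if sig not in compress:
                    compress[sig] = len(compress)
                new_labels[node] = compress[sig]
            current[g] = new_labels
            counts[g].update(new_labels.values())
    return counts

def wl_kernel(c1, c2):
    """Subtree-pattern kernel value: dot product of two count vectors."""
    return sum(c1[k] * c2[k] for k in c1 if k in c2)
```

The linear-in-edges runtime claimed in the abstract is visible here: each iteration touches every edge of every graph once.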

PDF Web [BibTex]

What are the Causes of Performance Variation in Brain-Computer Interfacing?

Grosse-Wentrup, M.

International Journal of Bioelectromagnetism, 13(3):115-116, September 2011 (article)

Abstract
While research on brain-computer interfacing (BCI) has seen tremendous progress in recent years, performance still varies substantially between as well as within subjects, with roughly 10 - 20% of subjects being incapable of successfully operating a BCI system. In this short report, I argue that this variation in performance constitutes one of the major obstacles that impedes a successful commercialization of BCI systems. I review the current state of research on the neuro-physiological causes of performance variation in BCI, discuss recent progress and open problems, and delineate potential research programs for addressing this issue.

PDF Web [BibTex]

Learning robot grasping from 3-D images with Markov Random Fields

Boularias, A., Kroemer, O., Peters, J.

In pages: 1548-1553, (Editors: Amato, N.M.), IEEE, Piscataway, NJ, USA, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), September 2011 (inproceedings)

Abstract
Learning to grasp novel objects is an essential skill for robots operating in unstructured environments. We therefore propose a probabilistic approach for learning to grasp. In particular, we learn a function that predicts the success probability of grasps performed on surface points of a given object. Our approach is based on Markov Random Fields (MRF), and motivated by the fact that points that are geometrically close to each other tend to have similar grasp success probabilities. The MRF approach is successfully tested in simulation, and on a real robot using 3-D scans of various types of objects. The empirical results show a significant improvement over methods that do not utilize the smoothness assumption and classify each point separately from the others.
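The smoothness assumption can be shown independently of the full MRF machinery. The sketch below is ours (it is not the paper's inference procedure): it simply blends each surface point's predicted grasp-success probability with the mean of its geometric neighbours', so that nearby points end up with similar values:

```python
def smooth_grasp_probabilities(neighbors, prob, alpha=0.5, iterations=10):
    """Iteratively blend each point's predicted grasp-success probability
    with the mean probability of its geometric neighbours.

    neighbors: dict mapping point id -> list of neighbouring point ids
    prob:      dict mapping point id -> initial predicted probability
    alpha:     how strongly neighbours pull on a point's value
    """
    current = dict(prob)
    for _ in range(iterations):
        updated = {}
        for point, nbrs in neighbors.items():
            nbr_mean = sum(current[v] for v in nbrs) / len(nbrs)
            updated[point] = (1 - alpha) * current[point] + alpha * nbr_mean
        current = updated
    return current
```

A point whose classifier output disagrees sharply with its neighbours gets pulled toward them, which is the intuition behind the MRF improving over per-point classification.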

PDF Web DOI [BibTex]

Neurofeedback of Fronto-Parietal Gamma-Oscillations

Grosse-Wentrup, M.

In pages: 172-175, (Editors: Müller-Putz, G.R. , R. Scherer, M. Billinger, A. Kreilinger, V. Kaiser, C. Neuper), Verlag der Technischen Universität Graz, Graz, Austria, 5th International Brain-Computer Interface Conference (BCI), September 2011 (inproceedings)

Abstract
In recent work, we have provided evidence that fronto-parietal γ-range oscillations are a cause of within-subject performance variations in brain-computer interfaces (BCIs) based on motor-imagery. Here, we explore the feasibility of using neurofeedback of fronto-parietal γ-power to induce a mental state that is beneficial for BCI-performance. We provide empirical evidence based on two healthy subjects that intentional attenuation of fronto-parietal γ-power results in an enhanced resting-state sensorimotor-rhythm (SMR). As a large resting-state amplitude of the SMR has been shown to correlate with good BCI-performance, our approach may provide a means to reduce performance variations in BCIs.

PDF Web [BibTex]

Learning inverse kinematics with structured prediction

Bocsi, B., Nguyen-Tuong, D., Csato, L., Schölkopf, B., Peters, J.

In pages: 698-703, (Editors: Amato, N.M.), IEEE, Piscataway, NJ, USA, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), September 2011 (inproceedings)

Abstract
Learning inverse kinematics of robots with redundant degrees of freedom (DoF) is a difficult problem in robot learning. The difficulty lies in the non-uniqueness of the inverse kinematics function. Existing methods tackle non-uniqueness by segmenting the configuration space and building a global solution from local experts. The usage of local experts implies the definition of an oracle that governs the global consistency of the local models; the definition of this oracle is difficult. We propose an algorithm that learns the inverse kinematics function in a single global model despite its multivalued nature. Inverse kinematics is approximated from examples using structured output learning methods. Unlike most existing methods, which estimate inverse kinematics at the velocity level, we address learning of the direct function at the position level. This problem is significantly harder. To support the proposed method, we conducted real-world experiments on a tracking control task and tested our algorithms on these models.

PDF Web DOI [BibTex]

Automatic foreground-background refocusing

Loktyushin, A., Harmeling, S.

In pages: 3445-3448, (Editors: Macq, B. , P. Schelkens), IEEE, Piscataway, NJ, USA, 18th IEEE International Conference on Image Processing (ICIP), September 2011 (inproceedings)

Abstract
A challenging problem in image restoration is to recover an image with a blurry foreground. Such images can easily occur with modern cameras, when the auto-focus aims mistakenly at the background (which will appear sharp) instead of the foreground, where usually the object of interest is. In this paper we propose an automatic procedure that (i) estimates the amount of out-of-focus blur, (ii) segments the image into foreground and background incorporating clues from the blurriness, (iii) recovers the sharp foreground, and finally (iv) blurs the background to refocus the scene. On several real photographs with blurry foreground and sharp background, we demonstrate the effectiveness and limitations of our method.

Web DOI [BibTex]

Gravitational Lensing Accuracy Testing 2010 (GREAT10) Challenge Handbook

Kitching, T., Amara, A., Gill, M., Harmeling, S., Heymans, C., Massey, R., Rowe, B., Schrabback, T., Voigt, L., Balan, S., Bernstein, G., Bethge, M., Bridle, S., Courbin, F., Gentile, M., Heavens, A., Hirsch, M., Hosseini, R., Kiessling, A., Kirk, D., Kuijken, K., Mandelbaum, R., Moghaddam, B., Nurbaeva, G., Paulin-Henriksson, S., Rassat, A., Rhodes, J., Schölkopf, B., Shawe-Taylor, J., Shmakova, M., Taylor, A., Velander, M., van Waerbeke, L., Witherick, D., Wittman, D.

Annals of Applied Statistics, 5(3):2231-2263, September 2011 (article)

Abstract
GRavitational lEnsing Accuracy Testing 2010 (GREAT10) is a public image analysis challenge aimed at the development of algorithms to analyze astronomical images. Specifically, the challenge is to measure varying image distortions in the presence of a variable convolution kernel, pixelization and noise. This is the second in a series of challenges set to the astronomy, computer science and statistics communities, providing a structured environment in which methods can be improved and tested in preparation for planned astronomical surveys. GREAT10 extends upon previous work by introducing variable fields into the challenge. The “Galaxy Challenge” involves the precise measurement of galaxy shape distortions, quantified locally by two parameters called shear, in the presence of a known convolution kernel. Crucially, the convolution kernel and the simulated gravitational lensing shape distortion both now vary as a function of position within the images, as is the case for real data. In addition, we introduce the “Star Challenge” that concerns the reconstruction of a variable convolution kernel, similar to that in a typical astronomical observation. This document details the GREAT10 Challenge for potential participants. Continually updated information is also available from www.greatchallenges.info.

PDF Web DOI [BibTex]

Reinforcement Learning to adjust Robot Movements to New Situations

Kober, J., Oztop, E., Peters, J.

In Robotics: Science and Systems VI, pages: 33-40, (Editors: Matsuoka, Y. , H. F. Durrant-Whyte, J. Neira), MIT Press, Cambridge, MA, USA, 2010 Robotics: Science and Systems Conference (RSS), September 2011 (inproceedings)

Abstract
Many complex robot motor skills can be represented using elementary movements, and there exist efficient techniques for learning parametrized motor plans using demonstrations and self-improvement. However, in many cases, the robot currently needs to learn a new elementary movement even if a parametrized motor plan exists that covers a similar, related situation. Clearly, a method is needed that modulates the elementary movement through the meta-parameters of its representation. In this paper, we show how to learn such mappings from circumstances to meta-parameters using reinforcement learning. We introduce an appropriate reinforcement learning algorithm based on a kernelized version of the reward-weighted regression. We compare this algorithm to several previous methods on a toy example and show that it performs well in comparison to standard algorithms. Subsequently, we show two robot applications of the presented setup; i.e., the generalization of throwing movements in darts, and of hitting movements in table tennis. We show that both tasks can be learned successfully using simulated and real robots.
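The reward-weighted update at the heart of this family of methods is easy to sketch. The paper's algorithm is a kernelized regression from circumstances to meta-parameters; the sketch below (ours) shows only the plain episodic reward-weighted step for a single scalar parameter:

```python
import math
import random

def reward_weighted_update(theta, sigma, reward_fn, num_samples=50, temperature=1.0):
    """One episodic reward-weighted regression step: sample parameters around
    the current mean, weight each sample by its exponentiated reward, and
    return the reward-weighted mean as the new parameter estimate."""
    samples = [random.gauss(theta, sigma) for _ in range(num_samples)]
    weights = [math.exp(reward_fn(s) / temperature) for s in samples]
    return sum(w * s for w, s in zip(weights, samples)) / sum(weights)
```

Iterating this update drifts the parameter toward high-reward regions without ever needing a gradient of the reward function.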

PDF Web [BibTex]

Simultaneous EEG Recordings with Dry and Wet Electrodes in Motor-Imagery

Saab, J., Battes, B., Grosse-Wentrup, M.

In pages: 312-315, (Editors: Müller-Putz, G.R. , R. Scherer, M. Billinger, A. Kreilinger, V. Kaiser, C. Neuper), Verlag der Technischen Universität Graz, Graz, Austria, 5th International Brain-Computer Interface Conference (BCI), September 2011 (inproceedings)

Abstract
Robust dry EEG electrodes are arguably the key to making EEG Brain-Computer Interfaces (BCIs) a practical technology. Existing studies on dry EEG electrodes can be characterized by the recording method (stand-alone dry electrodes or simultaneous recording with wet electrodes), the dry electrode technology (e.g. active or passive), the paradigm used for testing (e.g. event-related potentials), and the measure of performance (e.g. comparing dry and wet electrode frequency spectra). In this study, an active-dry electrode prototype is tested, during a motor-imagery task, with EEG-BCI in mind. It is used simultaneously with wet electrodes and assessed using classification accuracy. Our results indicate that the two types of electrodes are comparable in their performance but there are improvements to be made, particularly in finding ways to reduce motion-related artifacts.

PDF Web [BibTex]

Learning task-space tracking control with kernels

Nguyen-Tuong, D., Peters, J.

In pages: 704-709, (Editors: Amato, N.M.), IEEE, Piscataway, NJ, USA, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), September 2011 (inproceedings)

Abstract
Task-space tracking control is essential for robot manipulation. In practice, task-space control of redundant robot systems is known to be susceptible to modeling errors. Here, data-driven learning methods may present an interesting alternative approach. However, learning models for task-space tracking control from sampled data is an ill-posed problem. In particular, the same input data point can yield many different output values, which can form a non-convex solution space. Because the problem is ill-posed, models cannot be learned from such data using common regression methods. While learning of task-space control mappings is globally ill-posed, it has been shown in recent work that it is locally a well-defined problem. In this paper, we use this insight to formulate a local kernel-based learning approach for online model learning for task-space tracking control. For evaluations, we show in simulation that the method can learn models online for task-space tracking control of redundant robots.

PDF Web DOI [BibTex]

Automatic particle picking using diffusion filtering and random forest classification

Joubert, P., Nickell, S., Beck, F., Habeck, M., Hirsch, M., Schölkopf, B.

In pages: 6, International Workshop on Microscopic Image Analysis with Application in Biology (MIAAB), September 2011 (inproceedings)

Abstract
An automatic particle picking algorithm for processing electron micrographs of a large molecular complex, the 26S proteasome, is described. The algorithm makes use of a coherence enhancing diffusion filter to denoise the data, and a random forest classifier for removing false positives. It does not make use of a 3D reference model, but uses a training set of manually picked particles instead. False positive and false negative rates of around 25% to 30% are achieved on a testing set. The algorithm was developed for a specific particle, but contains steps that should be useful for developing automatic picking algorithms for other particles.

PDF Web [BibTex]

Active Versus Semi-supervised Learning Paradigm for the Classification of Remote Sensing Images

Persello, C., Bruzzone, L.

In pages: 1-15, (Editors: Bruzzone, L.), SPIE, Bellingham, WA, USA, Image and Signal Processing for Remote Sensing XVII, September 2011 (inproceedings)

Abstract
This paper presents a comparative study in order to analyze active learning (AL) and semi-supervised learning (SSL) for the classification of remote sensing (RS) images. The two learning paradigms are analyzed from both the theoretical and the experimental point of view. The aim of this work is to identify the advantages and disadvantages of AL and SSL methods, and to point out the boundary conditions on the applicability of these methods with respect to both the number of available labeled samples and the reliability of classification results. In our experimental analysis, AL and SSL techniques have been applied to the classification of both synthetic and real RS data, defining different classification problems starting from different initial training sets and considering different distributions of the classes. This analysis allowed us to derive important conclusions about the use of these classification approaches and to obtain insight into which of the two approaches is more appropriate according to the specific classification problem, the available initial training set and the available budget for the acquisition of new labeled samples.

Web DOI [BibTex]

MRI-Based Attenuation Correction for Whole-Body PET/MRI: Quantitative Evaluation of Segmentation- and Atlas-Based Methods

Hofmann, M., Bezrukov, I., Mantlik, F., Aschoff, P., Steinke, F., Beyer, T., Pichler, B., Schölkopf, B.

Journal of Nuclear Medicine, 52(9):1392-1399, September 2011 (article)

Abstract
PET/MRI is an emerging dual-modality imaging technology that requires new approaches to PET attenuation correction (AC). We assessed 2 algorithms for whole-body MRI-based AC (MRAC): a basic MR image segmentation algorithm and a method based on atlas registration and pattern recognition (AT&PR). METHODS: Eleven patients each underwent a whole-body PET/CT study and a separate multibed whole-body MRI study. The MR image segmentation algorithm uses a combination of image thresholds, Dixon fat-water segmentation, and component analysis to detect the lungs. MR images are segmented into 5 tissue classes (not including bone), and each class is assigned a default linear attenuation value. The AT&PR algorithm uses a database of previously aligned pairs of MRI/CT image volumes. For each patient, these pairs are registered to the patient MRI volume, and machine-learning techniques are used to predict attenuation values on a continuous scale. MRAC methods are compared via the quantitative analysis of AC PET images using volumes of interest in normal organs and on lesions. We assume the PET/CT values after CT-based AC to be the reference standard. RESULTS: In regions of normal physiologic uptake, the average error of the mean standardized uptake value was 14.1% ± 10.2% and 7.7% ± 8.4% for the segmentation and the AT&PR methods, respectively. Lesion-based errors were 7.5% ± 7.9% for the segmentation method and 5.7% ± 4.7% for the AT&PR method. CONCLUSION: The MRAC method using AT&PR provided better overall PET quantification accuracy than the basic MR image segmentation approach. This better quantification was due to the significantly reduced volume of errors made regarding volumes of interest within or near bones and the slightly reduced volume of errors made regarding areas outside the lungs.

Web DOI [BibTex]

Learning elementary movements jointly with a higher level task

Kober, J., Peters, J.

In pages: 338-343, (Editors: Amato, N.M.), IEEE, Piscataway, NJ, USA, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), September 2011 (inproceedings)

Abstract
Many motor skills consist of many lower level elementary movements that need to be sequenced in order to achieve a task. In order to learn such a task, both the primitive movements as well as the higher-level strategy need to be acquired at the same time. In contrast, most learning approaches focus either on learning to combine a fixed set of options or on learning just single options. In this paper, we discuss a new approach that allows improving the performance of lower level actions while pursuing a higher level task. The presented approach is applicable to learning a wider range of motor skills, but in this paper, we employ it for learning games where the player wants to improve his performance at the individual actions of the game while still performing well at the strategy level of the game. We propose to learn the lower level actions using Cost-regularized Kernel Regression and the higher level actions using a form of Policy Iteration. The two approaches are coupled by their transition probabilities. We evaluate the approach on a side-stall-style throwing game both in simulation and with a real BioRob.

PDF Web DOI [BibTex]

Multi-parametric Tumor Characterization and Therapy Monitoring using Simultaneous PET/MRI: initial results for Lung Cancer and GvHD

Sauter, A., Schmidt, H., Gueckel, B., Brendle, C., Bezrukov, I., Mantlik, F., Kolb, A., Mueller, M., Reimold, M., Federmann, B., Hetzel, J., Claussen, C., Pfannenberg, C., Horger, M., Pichler, B., Schwenzer, N.

(T110), 2011 World Molecular Imaging Congress (WMIC), September 2011 (talk)

Abstract
Hybrid imaging modalities such as [18F]FDG-PET/CT are superior in staging of e.g. lung cancer disease compared with stand-alone modalities. Clinical PET/MRI systems are about to enter the field of hybrid imaging and offer potential advantages. One added value could be a deeper insight into the tumor metabolism and tumorigenesis due to the combination of PET and dedicated MR methods such as MRS and DWI. Additionally, therapy monitoring of difficult-to-diagnose diseases such as chronic sclerodermic GvHD (csGvHD) can potentially be improved by this combination. We have applied PET/MRI in 3 patients with lung cancer and 4 patients with csGvHD before and during therapy. All 3 patients had lung cancer confirmed by histology (2 adenocarcinoma, 1 carcinoid). First, a [18F]FDG-PET/CT was performed with the following parameters: injected dose 351.7±25.1 MBq, uptake time 59.0±2.6 min, 3 min/bed. Subsequently, patients were brought to the PET/MRI imaging facility. The whole-body PET/MRI Biograph mMR system comprises 56 detector cassettes with a 59.4 cm transaxial and 25.8 cm axial FoV. The MRI is a modified Verio system with a magnet bore of 60 cm. The following parameters for PET acquisition were applied: uptake time 121.3±2.3 min, 3 bed positions, 6 min/bed. T1w, T2w, and DWI MR images were recorded simultaneously for each bed. Acquired PET data were reconstructed with an iterative 3D OSEM algorithm using 3 iterations and 21 subsets, Gaussian filter of 3 mm. The 4 patients with GvHD were brought to the brainPET/MRI imaging facility 2:10h-2:28h after tracer injection. A 9 min brainPET-acquisition with simultaneous MRI of the lower extremities was accomplished. MRI examination included T1-weighted (pre and post gadolinium) and T2-weighted sequences. Attenuation correction was calculated based on manual bone segmentation and thresholds for soft tissue, fat and air.
Soleus muscle (m), crural fascia (f1) and posterior crural intermuscular septum fascia (f2) were surrounded with ROIs based on the pre-treatment T1-weighted images and coregistered using IRW (Siemens). Fascia-to-muscle ratios for PET (f/m), T1 contrast uptake (T1_post-contrast_f-pre-contrast_f/post-contrast_m-pre-contrast_m) and T2 (T2_f/m) were calculated. Both patients with adenocarcinoma show a lower ADC value compared with the carcinoid patient suggesting a higher cellularity. This is also reflected in FDG-PET with higher SUV values. Our initial results reveal that PET/MRI can provide complementary information for a profound tumor characterization and therapy monitoring. The high soft tissue contrast provided by MRI is valuable for the assessment of the fascial inflammation. While in the first patient FDG and contrast uptake as well as edema, represented by T2 signals, decreased with ongoing therapy, all parameters remained comparatively stable in the second patient. Contrary to expectations, an increase in FDG uptake of patient 3 and 4 was accompanied by an increase of the T2 signals, but a decrease in contrast uptake. These initial results suggest that PET/MRI provides complementary information of the complex disease mechanisms in fibrosing disorders.

Web [BibTex]

Adaptive nonparametric detection in cryo-electron microscopy

Langovoy, M., Habeck, M., Schölkopf, B.

In Proceedings of the 58th World Statistics Congress, pages: 4456-4461, ISI, August 2011 (inproceedings)

Abstract
We develop a novel method for detection of signals and reconstruction of images in the presence of random noise. The method uses results from percolation theory. We specifically address the problem of detection of multiple objects of unknown shapes in the case of nonparametric noise. The noise density is unknown and can be heavy-tailed. The objects of interest have unknown varying intensities. No boundary shape constraints are imposed on the objects; only a set of weak bulk conditions is required. We view the object detection problem as hypothesis testing for discrete statistical inverse problems. We present an algorithm that allows us to detect greyscale objects of various shapes in noisy images. We prove results on the consistency and algorithmic complexity of our procedures. Applications to cryo-electron microscopy are presented.
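The percolation intuition can be sketched without the statistical calibration the paper provides: threshold the image, find connected components of above-threshold pixels, and keep only components large enough that pure noise is unlikely to produce them. The sketch below is ours (the paper's procedure chooses the cluster-size cutoff from percolation-theoretic bounds; here it is a free parameter):

```python
from collections import deque

def detect_objects(image, threshold, min_cluster=4):
    """Threshold the image, then keep only 4-connected clusters of
    above-threshold pixels with at least min_cluster members: noise rarely
    percolates into large clusters, while true objects do."""
    rows, cols = len(image), len(image[0])
    binary = [[1 if v > threshold else 0 for v in row] for row in image]
    seen = [[False] * cols for _ in range(rows)]
    clusters = []
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not seen[r][c]:
                queue, cluster = deque([(r, c)]), []
                seen[r][c] = True
                while queue:                       # BFS over the component
                    y, x = queue.popleft()
                    cluster.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and binary[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(cluster) >= min_cluster:
                    clusters.append(cluster)
    return clusters
```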

PDF link (url) [BibTex]

Semi-supervised kernel canonical correlation analysis with application to human fMRI

Blaschko, M., Shelton, J., Bartels, A., Lampert, C., Gretton, A.

Pattern Recognition Letters, 32(11):1572-1583, August 2011 (article)

Abstract
Kernel canonical correlation analysis (KCCA) is a general technique for subspace learning that incorporates principal components analysis (PCA) and Fisher linear discriminant analysis (LDA) as special cases. By finding directions that maximize correlation, KCCA learns representations that are more closely tied to the underlying process that generates the data and can ignore high-variance noise directions. However, for data where acquisition in one or more modalities is expensive or otherwise limited, KCCA may suffer from small sample effects. We propose to use semi-supervised Laplacian regularization to utilize data that are present in only one modality. This approach is able to find highly correlated directions that also lie along the data manifold, resulting in a more robust estimate of correlated subspaces. Functional magnetic resonance imaging (fMRI) acquired data are naturally amenable to subspace techniques as data are well aligned. fMRI data of the human brain are a particularly interesting candidate. In this study we implemented various supervised and semi-supervised versions of KCCA on human fMRI data, with regression to single and multi-variate labels (corresponding to video content subjects viewed during the image acquisition). In each variate condition, the semi-supervised variants of KCCA performed better than the supervised variants, including a supervised variant with Laplacian regularization. We additionally analyze the weights learned by the regression in order to infer brain regions that are important to different types of visual processing.

PDF PDF DOI [BibTex]

Balancing Safety and Exploitability in Opponent Modeling

Wang, Z., Boularias, A., Mülling, K., Peters, J.

In Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence (AAAI 2011), pages: 1515-1520, (Editors: Burgard, W. and Roth, D.), AAAI Press, Menlo Park, CA, USA, August 2011 (inproceedings)

Abstract
Opponent modeling is a critical mechanism in repeated games. It allows a player to adapt its strategy in order to better respond to the presumed preferences of his opponents. We introduce a new modeling technique that adaptively balances exploitability and risk reduction. An opponent’s strategy is modeled with a set of possible strategies that contains the actual strategy with high probability. The algorithm is safe, as the expected payoff is above the minimax payoff with high probability, and it can exploit the opponents’ preferences when sufficient observations have been obtained. We apply it to normal-form games and stochastic games with a finite number of stages. The performance of the proposed approach is first demonstrated on repeated rock-paper-scissors games. Subsequently, the approach is evaluated in a human-robot table-tennis setting where the robot player learns to prepare to return a served ball. By modeling the human players, the robot chooses a forehand, backhand or middle preparation pose before they serve. The learned strategies can exploit the opponent’s preferences, leading to a higher rate of successful returns.
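The exploit-versus-safety trade-off can be illustrated on rock-paper-scissors. The sketch below is ours, not the paper's algorithm (which maintains a confidence set of strategies with formal safety guarantees); it best-responds only when the empirical model is well supported, and otherwise falls back to the uniform minimax strategy:

```python
import random
from collections import Counter

BEATS = {'rock': 'paper', 'paper': 'scissors', 'scissors': 'rock'}

def choose_action(opponent_history, min_observations=10, margin=0.4):
    """Best-respond to the opponent's empirical preference once enough
    evidence supports it; otherwise fall back to the safe uniform
    (minimax) strategy."""
    counts = Counter(opponent_history)
    if len(opponent_history) >= min_observations:
        move, freq = counts.most_common(1)[0]
        if freq / len(opponent_history) >= margin:
            return BEATS[move]              # exploit the modeled preference
    return random.choice(list(BEATS))       # play safe while uncertain
```

Raising `min_observations` or `margin` trades exploitability for safety, which is exactly the balance the abstract describes.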

PDF Web [BibTex]

Detecting emergent processes in cellular automata with excess information

Balduzzi, D.

In Advances in Artificial Life: ECAL 2011, pages: 55-62, (Editors: Lenaerts, T. , M. Giacobini, H. Bersini, P. Bourgine, M. Dorigo, R. Doursat), MIT Press, Cambridge, MA, USA, Eleventh European Conference on the Synthesis and Simulation of Living Systems, August 2011 (inproceedings)

Abstract
Many natural processes occur over characteristic spatial and temporal scales. This paper presents tools for (i) flexibly and scalably coarse-graining cellular automata and (ii) identifying which coarse-grainings express an automaton’s dynamics well, and which express its dynamics badly. We apply the tools to investigate a range of examples in Conway’s Game of Life and Hopfield networks and demonstrate that they capture some basic intuitions about emergent processes. Finally, we formalize the notion that a process is emergent if it is better expressed at a coarser granularity.
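The simplest coarse-graining of a cellular automaton is spatial block aggregation. The sketch below (ours; the paper's coarse-grainings are far more flexible and are scored by how well they express the dynamics) turns every block x block patch of a binary grid into one macro-cell by majority vote:

```python
def coarse_grain(grid, block=2):
    """Majority-vote coarse-graining of a binary grid: every block x block
    patch of micro-cells becomes a single macro-cell (ties round up to 1)."""
    rows, cols = len(grid), len(grid[0])
    coarse = []
    for r in range(0, rows, block):
        row = []
        for c in range(0, cols, block):
            cells = [grid[y][x]
                     for y in range(r, min(r + block, rows))
                     for x in range(c, min(c + block, cols))]
            row.append(1 if 2 * sum(cells) >= len(cells) else 0)
        coarse.append(row)
    return coarse
```

A coarse-graining "expresses the dynamics well", in the paper's sense, when the macro-grid evolved under a macro-rule stays close to the coarse-graining of the evolved micro-grid.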

PDF Web [BibTex]

Statistical Image Analysis and Percolation Theory

Langovoy, M., Habeck, M., Schölkopf, B.

2011 Joint Statistical Meetings (JSM), August 2011 (talk)

Abstract
We develop a novel method for detection of signals and reconstruction of images in the presence of random noise. The method uses results from percolation theory. We specifically address the problem of detection of multiple objects of unknown shapes in the case of nonparametric noise. The noise density is unknown and can be heavy-tailed. The objects of interest have unknown varying intensities. No boundary shape constraints are imposed on the objects; only a set of weak bulk conditions is required. We view the object detection problem as hypothesis testing for discrete statistical inverse problems. We present an algorithm that allows us to detect greyscale objects of various shapes in noisy images. We prove results on the consistency and algorithmic complexity of our procedures. Applications to cryo-electron microscopy are presented.

ei

Web [BibTex]

Spatial statistics, image analysis and percolation theory

Langovoy, M., Habeck, M., Schölkopf, B.

In pages: 11, American Statistical Association, Alexandria, VA, USA, 2011 Joint Statistical Meetings (JSM), August 2011 (inproceedings)

Abstract
We develop a novel method for detection of signals and reconstruction of images in the presence of random noise. The method uses results from percolation theory. We specifically address the problem of detecting multiple objects of unknown shapes in the case of nonparametric noise. The noise density is unknown. The objects of interest have unknown varying intensities. No boundary shape constraints are imposed on the objects; only a set of weak bulk conditions is required. We view the object detection problem as multiple hypothesis testing for discrete statistical inverse problems. We present an algorithm that allows us to detect greyscale objects of various shapes in noisy images. We prove results on the consistency and algorithmic complexity of our procedures. Applications to cryo-electron microscopy are presented.

ei

PDF [BibTex]

Two-locus association mapping in subquadratic time

Achlioptas, P., Schölkopf, B., Borgwardt, K.

In pages: 726-734, (Editors: C Apté and J Ghosh and P Smyth), ACM Press, New York, NY, USA, 17th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), August 2011 (inproceedings)

Abstract
Genome-wide association studies (GWAS) have not been able to discover strong associations between many complex human diseases and single genetic loci. Mapping these phenotypes to pairs of genetic loci is hindered by the huge number of candidates, leading to enormous computational and statistical problems. In GWAS on single nucleotide polymorphisms (SNPs), one has to consider on the order of 10^10 to 10^14 pairs, which is infeasible in practice. In this article, we give the first algorithm for 2-locus genome-wide association studies that is subquadratic in the number, n, of SNPs. The running time of our algorithm is data-dependent, but large experiments over real genomic data suggest that it scales empirically as n^(3/2). As a result, our algorithm can easily cope with n ~ 10^7, i.e., it can efficiently search all pairs of SNPs in the human genome.

ei

Web DOI [BibTex]

Multi-subject learning for common spatial patterns in motor-imagery BCI

Devlaminck, D., Wyns, B., Grosse-Wentrup, M., Otte, G., Santens, P.

Computational Intelligence and Neuroscience, 2011(217987):1-9, August 2011 (article)

Abstract
Motor-imagery-based brain-computer interfaces (BCIs) commonly use the common spatial pattern filter (CSP) as preprocessing step before feature extraction and classification. The CSP method is a supervised algorithm and therefore needs subject-specific training data for calibration, which is very time consuming to collect. In order to reduce the amount of calibration data that is needed for a new subject, one can apply multitask (from now on called multisubject) machine learning techniques to the preprocessing phase. Here, the goal of multisubject learning is to learn a spatial filter for a new subject based on its own data and that of other subjects. This paper outlines the details of the multitask CSP algorithm and shows results on two data sets. In certain subjects a clear improvement can be seen, especially when the number of training trials is relatively low.
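
A minimal sketch of the CSP preprocessing step described in the abstract, via the standard generalized eigendecomposition of class covariance matrices (numpy/scipy; data shapes, trial counts, and function names are illustrative assumptions, not the paper's multisubject algorithm):

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=2):
    """trials_*: (n_trials, n_channels, n_samples) band-pass filtered EEG."""
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)

    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenproblem Ca w = lambda (Ca + Cb) w; eigenvalues lie in
    # [0, 1] and the extreme ones give the most discriminative spatial filters.
    evals, evecs = eigh(Ca, Ca + Cb)
    order = np.argsort(evals)
    pick = np.r_[order[: n_filters // 2], order[-(n_filters - n_filters // 2):]]
    return evecs[:, pick].T  # (n_filters, n_channels)

# Synthetic example: class A has high variance on channel 0, class B on channel 1.
rng = np.random.default_rng(0)
def make_trials(var):
    return rng.normal(0, np.sqrt(var)[:, None], size=(20, 4, 200))

trials_a = make_trials(np.array([5.0, 1.0, 1.0, 1.0]))
trials_b = make_trials(np.array([1.0, 5.0, 1.0, 1.0]))
W = csp_filters(trials_a, trials_b)
```

The filter paired with the largest eigenvalue maximizes the variance of class-A trials relative to class-B trials; the multisubject extension in the paper regularizes this subject-specific computation with data from other subjects.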

ei

PDF DOI [BibTex]

Bayesian Time Series Models

Barber, D., Cemgil, A., Chiappa, S.

pages: 432, Cambridge University Press, Cambridge, UK, August 2011 (book)

ei

[BibTex]

A Novel Active Learning Strategy for Domain Adaptation in the Classification of Remote Sensing Images

Persello, C., Bruzzone, L.

In pages: 3720-3723 , IEEE, Piscataway, NJ, USA, IEEE International Geoscience and Remote Sensing Symposium (IGARSS), July 2011 (inproceedings)

Abstract
We present a novel technique for addressing domain adaptation problems in the classification of remote sensing images with active learning. Domain adaptation is the important problem of adapting a supervised classifier trained on a given image (source domain) to the classification of another similar (but not identical) image (target domain) acquired on a different area, or on the same area at a different time. The main idea of the proposed approach is to iteratively label and add to the training set the minimum number of the most informative samples from the target domain, while removing the source-domain samples that do not fit the distributions of the classes in the target domain. In this way, the classification system exploits already available information, i.e., the labeled samples of the source domain, in order to minimize the number of target-domain samples to be labeled, thus reducing the cost associated with the definition of the training set for the classification of the target domain. Experimental results obtained in the classification of a hyperspectral image confirm the effectiveness of the proposed technique.

ei

Web DOI [BibTex]

Reinforcement Learning to adjust Robot Movements to New Situations

Kober, J., Oztop, E., Peters, J.

In pages: 2650-2655, (Editors: Walsh, T.), AAAI Press, Menlo Park, CA, USA, Twenty-Second International Joint Conference on Artificial Intelligence (IJCAI), July 2011 (inproceedings)

Abstract
Many complex robot motor skills can be represented using elementary movements, and there exist efficient techniques for learning parametrized motor plans using demonstrations and self-improvement. However, with current techniques the robot in many cases needs to learn a new elementary movement even if a parametrized motor plan exists that covers a related situation. A method is needed that modulates the elementary movement through the meta-parameters of its representation. In this paper, we describe how to learn such mappings from circumstances to meta-parameters using reinforcement learning. In particular, we use a kernelized version of reward-weighted regression. We show two applications of the presented setup in robotic domains: the generalization of throwing movements in darts, and of hitting movements in table tennis. We demonstrate that both tasks can be learned successfully using simulated and real robots.

ei

PDF Web [BibTex]

Online submodular minimization for combinatorial structures

Jegelka, S., Bilmes, J.

In pages: 345-352, (Editors: Getoor, L. , T. Scheffer), International Machine Learning Society, Madison, WI, USA, 28th International Conference on Machine Learning (ICML), July 2011 (inproceedings)

Abstract
Most results for online decision problems with structured concepts, such as trees or cuts, assume linear costs. In many settings, however, nonlinear costs are more realistic. Owing to their non-separability, these lead to much harder optimization problems. Going beyond linearity, we address online approximation algorithms for structured concepts that allow the cost to be submodular, i.e., nonseparable. In particular, we show regret bounds for three Hannan-consistent strategies that capture different settings. Our results also tighten a regret bound for unconstrained online submodular minimization.

ei

PDF PDF Web [BibTex]

PAC-Bayesian Analysis of the Exploration-Exploitation Trade-off

Seldin, Y., Cesa-Bianchi, N., Laviolette, F., Auer, P., Shawe-Taylor, J., Peters, J.

In pages: 1-8, ICML Workshop on Online Trading of Exploration and Exploitation 2, July 2011 (inproceedings)

Abstract
We develop a coherent framework for integrative simultaneous analysis of the exploration-exploitation and model order selection trade-offs. We improve over our preceding results on the same subject (Seldin et al., 2011) by combining PAC-Bayesian analysis with Bernstein-type inequality for martingales. Such a combination is also of independent interest for studies of multiple simultaneously evolving martingales.

ei

PDF Web [BibTex]

ccSVM: correcting Support Vector Machines for confounding factors in biological data classification

Li, L., Rakitsch, B., Borgwardt, K.

Bioinformatics, 27(13: ISMB/ECCB 2011):i342-i348, July 2011 (article)

Abstract
Motivation: Classifying biological data into different groups is a central task of bioinformatics: for instance, to predict the function of a gene or protein, the disease state of a patient or the phenotype of an individual based on its genotype. Support Vector Machines are a wide spread approach for classifying biological data, due to their high accuracy, their ability to deal with structured data such as strings, and the ease to integrate various types of data. However, it is unclear how to correct for confounding factors such as population structure, age or gender or experimental conditions in Support Vector Machine classification. Results: In this article, we present a Support Vector Machine classifier that can correct the prediction for observed confounding factors. This is achieved by minimizing the statistical dependence between the classifier and the confounding factors. We prove that this formulation can be transformed into a standard Support Vector Machine with rescaled input data. In our experiments, our confounder correcting SVM (ccSVM) improves tumor diagnosis based on samples from different labs, tuberculosis diagnosis in patients of varying age, ethnicity and gender, and phenotype prediction in the presence of population structure and outperforms state-of-the-art methods in terms of prediction accuracy.

ei

Web DOI [BibTex]

Detecting low-complexity unobserved causes

Janzing, D., Sgouritsa, E., Stegle, O., Peters, J., Schölkopf, B.

In pages: 383-391, (Editors: FG Cozman and A Pfeffer), AUAI Press, Corvallis, OR, USA, 27th Conference on Uncertainty in Artificial Intelligence (UAI), July 2011 (inproceedings)

Abstract
We describe a method that infers whether statistical dependences between two observed variables X and Y are due to a "direct" causal link or only due to a connecting causal path that contains an unobserved variable of low complexity, e.g., a binary variable. This problem is motivated by statistical genetics. Given a genetic marker that is correlated with a phenotype of interest, we want to detect whether this marker is causal or only correlates with a causal one. Our method is based on the analysis of the location of the conditional distributions P(Y|x) in the simplex of all distributions of Y. We report encouraging results on semi-empirical data.

ei

PDF Web [BibTex]

Support Vector Machines as Probabilistic Models

Franc, V., Zien, A., Schölkopf, B.

In Proceedings of the 28th International Conference on Machine Learning, pages: 665-672, (Editors: L Getoor and T Scheffer), International Machine Learning Society, Madison, WI, USA, ICML, July 2011 (inproceedings)

Abstract
We show how the SVM can be viewed as a maximum likelihood estimate of a class of probabilistic models. This model class can be viewed as a reparametrization of the SVM in a similar vein to the ν-SVM reparametrizing the classical (C-)SVM. It is not discriminative, but has a non-uniform marginal. We illustrate the benefits of this new view by rederiving and re-investigating two established SVM-related algorithms.

ei

PDF Web [BibTex]

Policy Search for Motor Primitives in Robotics

Kober, J., Peters, J.

Machine Learning, 84(1-2):171-203, July 2011 (article)

Abstract
Many motor skills in humanoid robotics can be learned using parametrized motor primitives. While successful applications to date have been achieved with imitation learning, most of the interesting motor learning problems are high-dimensional reinforcement learning problems. These problems are often beyond the reach of current reinforcement learning methods. In this paper, we study parametrized policy search methods and apply these to benchmark problems of motor primitive learning in robotics. We show that many well-known parametrized policy search methods can be derived from a general, common framework. This framework yields both policy gradient methods and expectation-maximization (EM) inspired algorithms. We introduce a novel EM-inspired algorithm for policy learning that is particularly well-suited for dynamical system motor primitives. We compare this algorithm, both in simulation and on a real robot, to several well-known parametrized policy search methods such as episodic REINFORCE, ‘Vanilla’ Policy Gradients with optimal baselines, episodic Natural Actor Critic, and episodic Reward-Weighted Regression. We show that the proposed method out-performs them on an empirical benchmark of learning dynamical system motor primitives both in simulation and on a real robot. We apply it in the context of motor learning and show that it can learn a complex Ball-in-a-Cup task on a real Barrett WAM™ robot arm.

ei

PDF PDF DOI [BibTex]

Epistasis detection on quantitative phenotypes by exhaustive enumeration using GPUs

Kam-Thong, T., Pütz, B., Karbalai, N., Müller-Myhsok, B., Borgwardt, K.

Bioinformatics, 27(13: ISMB/ECCB 2011):i214-i221, July 2011 (article)

Abstract
Motivation: In recent years, numerous genome-wide association studies have been conducted to identify genetic makeup that explains phenotypic differences observed in human population. Analytical tests on single loci are readily available and embedded in common genome analysis software toolset. The search for significant epistasis (gene–gene interactions) still poses as a computational challenge for modern day computing systems, due to the large number of hypotheses that have to be tested. Results: In this article, we present an approach to epistasis detection by exhaustive testing of all possible SNP pairs. The search strategy based on the Hilbert–Schmidt Independence Criterion can help delineate various forms of statistical dependence between the genetic markers and the phenotype. The actual implementation of this search is done on the highly parallelized architecture available on graphics processing units rendering the completion of the full search feasible within a day.
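
The Hilbert-Schmidt Independence Criterion mentioned above can be sketched in a few lines (the biased empirical estimator with Gaussian kernels; the bandwidth and variable names are illustrative assumptions, and this is a single-pair statistic, not the GPU-parallelized exhaustive search of the paper):

```python
import numpy as np

def hsic(x, y, sigma=1.0):
    """Biased empirical HSIC with RBF kernels: trace(K H L H) / (n-1)^2."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    def rbf(v):
        d2 = (v[:, None] - v[None, :]) ** 2
        return np.exp(-d2 / (2 * sigma ** 2))
    K, L = rbf(x), rbf(y)
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y_dep = x + 0.1 * rng.normal(size=200)   # strongly dependent on x
y_ind = rng.normal(size=200)             # independent of x
```

HSIC is near zero for independent samples and grows with statistical dependence of any form, which is why it can delineate various forms of dependence between marker pairs and a quantitative phenotype.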

ei

Web DOI [BibTex]

Empirical Inference

Schölkopf, B.

International Journal of Materials Research, 2011(7):809-814, July 2011 (article)

Abstract
Empirical Inference is the process of drawing conclusions from observational data. For instance, the data can be measurements from an experiment, which are used by a researcher to infer a scientific law. Another kind of empirical inference is performed by living beings, continuously recording data from their environment and carrying out appropriate actions. Do these problems have anything in common, and are there underlying principles governing the extraction of regularities from data? What characterizes hard inference problems, and how can we solve them? Such questions are studied by a community of scientists from various fields, engaged in machine learning research. This short paper, which is based on the author’s lecture to the scientific council of the Max Planck Society in February 2010, will attempt to describe some of the main ideas and problems of machine learning. It will provide illustrative examples of real world machine learning applications, including the use of machine learning towards the design of intelligent systems.

ei

Web DOI [BibTex]

Identifiability of causal graphs using functional models

Peters, J., Mooij, J., Janzing, D., Schölkopf, B.

In pages: 589-598, (Editors: FG Cozman and A Pfeffer), AUAI Press, Corvallis, OR, USA, 27th Conference on Uncertainty in Artificial Intelligence (UAI), July 2011 (inproceedings)

Abstract
This work addresses the following question: Under what assumptions on the data generating process can one infer the causal graph from the joint distribution? The approach taken by conditional-independence-based causal discovery methods rests on two assumptions: the Markov condition and faithfulness. It has been shown that under these assumptions the causal graph can be identified up to Markov equivalence (some arrows remain undirected) using methods like the PC algorithm. In this work we propose an alternative based on Identifiable Functional Model Classes (IFMOCs). As our main theorem we prove that if the data generating process belongs to an IFMOC, one can identify the complete causal graph. To the best of our knowledge this is the first identifiability result of this kind that is not limited to linear functional relationships. We discuss how the IFMOC assumption relates to the Markov and faithfulness assumptions and explain why we believe that the IFMOC assumption can be tested more easily on given data. We further provide a practical algorithm that recovers the causal graph from finitely many data; experiments on simulated data support the theoretical findings.

ei

PDF Web [BibTex]

Pruning nearest neighbor cluster trees

Kpotufe, S., von Luxburg, U.

In pages: 225-232, (Editors: Getoor, L. , T. Scheffer), International Machine Learning Society, Madison, WI, USA, 28th International Conference on Machine Learning (ICML), July 2011 (inproceedings)

Abstract
Nearest neighbor (k-NN) graphs are widely used in machine learning and data mining applications, and our aim is to better understand what they reveal about the cluster structure of the unknown underlying distribution of points. Moreover, is it possible to identify spurious structures that might arise due to sampling variability? Our first contribution is a statistical analysis that reveals how certain subgraphs of a k-NN graph form a consistent estimator of the cluster tree of the underlying distribution of points. Our second and perhaps most important contribution is the following finite sample guarantee. We carefully work out the tradeoff between aggressive and conservative pruning and are able to guarantee the removal of all spurious cluster structures while at the same time guaranteeing the recovery of salient clusters. This is the first such finite sample result in the context of clustering.
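
The connection between k-NN graphs and cluster structure can be illustrated with a toy estimator, the connected components of the symmetric k-NN graph (a simplification for intuition only, not the paper's pruned cluster-tree estimator; all names are assumptions):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components
from scipy.spatial import cKDTree

def knn_graph_clusters(points, k):
    """Connected components of the symmetric k-NN graph as cluster estimates."""
    n = len(points)
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k + 1)      # column 0 is the point itself
    rows = np.repeat(np.arange(n), k)
    cols = idx[:, 1:].ravel()
    A = csr_matrix((np.ones(n * k), (rows, cols)), shape=(n, n))
    return connected_components(A + A.T, directed=False)  # symmetrized graph

# Two well-separated Gaussian blobs should yield two components.
rng = np.random.default_rng(0)
blob1 = rng.normal(loc=0.0, scale=1.0, size=(150, 2))
blob2 = rng.normal(loc=20.0, scale=1.0, size=(150, 2))
n_comp, labels = knn_graph_clusters(np.vstack([blob1, blob2]), k=7)
```

Sampling variability can fragment such components spuriously; the pruning analyzed in the paper is what separates genuine cluster structure from these artifacts.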

ei

PDF Web [BibTex]

Testing whether linear equations are causal: A free probability theory approach

Zscheischler, J., Janzing, D., Zhang, K.

In pages: 839-847, (Editors: Cozman, F.G. , A. Pfeffer), AUAI Press, Corvallis, OR, USA, 27th Conference on Uncertainty in Artificial Intelligence (UAI), July 2011 (inproceedings)

Abstract
We propose a method that infers whether linear relations between two high-dimensional variables X and Y are due to a causal influence from X to Y or from Y to X. The earlier proposed so-called Trace Method is extended to the regime where the dimension of the observed variables exceeds the sample size. Based on previous work, we postulate conditions that characterize a causal relation between X and Y. Moreover, we describe a statistical test and argue that both causal directions are typically rejected if there is a common cause. A full theoretical analysis is presented for the deterministic case but our approach seems to be valid for the noisy case, too, for which we additionally present an approach based on a sparsity constraint. The discussed method yields promising results for both simulated and real world data.

ei

PDF Web [BibTex]

On the information-theoretic structure of distributed measurements

Balduzzi, D.

In pages: 1-15, Elsevier Science, Amsterdam, Netherlands, 7th International Workshop on Developments of Computational Models (DCM), July 2011 (inproceedings)

Abstract
The internal structure of a measuring device, which depends on what its components are and how they are organized, determines how it categorizes its inputs. This paper presents a geometric approach to studying the internal structure of measurements performed by distributed systems such as probabilistic cellular automata. It constructs the quale, a family of sections of a suitably defined presheaf, whose elements correspond to the measurements performed by all subsystems of a distributed system. Using the quale we quantify (i) the information generated by a measurement; (ii) the extent to which a measurement is context-dependent; and (iii) whether a measurement is decomposable into independent submeasurements, which turns out to be equivalent to context-dependence. Finally, we show that only indecomposable measurements are more informative than the sum of their submeasurements.

ei

PDF Web [BibTex]

Online Multi-frame Blind Deconvolution with Super-resolution and Saturation Correction

Hirsch, M., Harmeling, S., Sra, S., Schölkopf, B.

Astronomy & Astrophysics, 531(A9):11, July 2011 (article)

Abstract
Astronomical images taken by ground-based telescopes suffer degradation due to atmospheric turbulence. This degradation can be tackled by costly hardware-based approaches such as adaptive optics, or by sophisticated software-based methods such as lucky imaging, speckle imaging, or multi-frame deconvolution. Software-based methods process a sequence of images to reconstruct a deblurred high-quality image. However, existing approaches are limited in one or several aspects: (i) they process all images in batch mode, which for thousands of images is prohibitive; (ii) they do not reconstruct a super-resolved image, even though an image sequence often contains enough information; (iii) they are unable to deal with saturated pixels; and (iv) they are usually non-blind, i.e., they assume the blur kernels to be known. In this paper we present a new method for multi-frame deconvolution called online blind deconvolution (OBD) that overcomes all these limitations simultaneously. Encouraging results on simulated and real astronomical images demonstrate that OBD yields deblurred images of comparable and often better quality than existing approaches.

ei

PDF DOI [BibTex]


Towards Brain-Robot Interfaces in Stroke Rehabilitation

Gomez Rodriguez, M., Grosse-Wentrup, M., Hill, J., Gharabaghi, A., Schölkopf, B., Peters, J.

In pages: 6, IEEE, Piscataway, NJ, USA, 12th International Conference on Rehabilitation Robotics (ICORR), July 2011 (inproceedings)

Abstract
A neurorehabilitation approach that combines robot-assisted active physical therapy and Brain-Computer Interfaces (BCIs) may provide additional mileage with respect to traditional rehabilitation methods for patients with severe motor impairment due to cerebrovascular brain damage (e.g., stroke) and other neurological conditions. In this paper, we describe the design and modes of operation of a robot-based rehabilitation framework that enables artificial support of the sensorimotor feedback loop. The aim is to increase cortical plasticity by means of Hebbian-type learning rules. A BCI-based shared-control strategy is used to drive a Barrett WAM 7-degree-of-freedom arm that guides a subject's arm. Experimental validation of our setup is carried out both with healthy subjects and stroke patients. We review the empirical results which we have obtained to date, and argue that they support the feasibility of future rehabilitative treatments employing this novel approach.

ei

PDF Web DOI [BibTex]

Uncovering the Temporal Dynamics of Diffusion Networks

Gomez Rodriguez, M., Balduzzi, D., Schölkopf, B.

In Proceedings of the 28th International Conference on Machine Learning, pages: 561-568, (Editors: L. Getoor and T. Scheffer), Omnipress, Madison, WI, USA, ICML, July 2011 (inproceedings)

Abstract
Time plays an essential role in the diffusion of information, influence and disease over networks. In many cases we only observe when a node copies information, makes a decision or becomes infected -- but the connectivity, transmission rates between nodes and transmission sources are unknown. Inferring the underlying dynamics is of outstanding interest since it enables forecasting, influencing and retarding infections, broadly construed. To this end, we model diffusion processes as discrete networks of continuous temporal processes occurring at different rates. Given cascade data -- observed infection times of nodes -- we infer the edges of the global diffusion network and estimate the transmission rates of each edge that best explain the observed data. The optimization problem is convex. The model naturally (without heuristics) imposes sparse solutions and requires no parameter tuning. The problem decouples into a collection of independent smaller problems, thus scaling easily to networks on the order of hundreds of thousands of nodes. Experiments on real and synthetic data show that our algorithm both recovers the edges of diffusion networks and accurately estimates their transmission rates from cascade data.
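
As a small illustration of the continuous-time transmission model (not the paper's network-wide convex program; names and the rate parameter are hypothetical): for a single edge with exponentially distributed transmission delays, the maximum-likelihood transmission rate has a closed form.

```python
import numpy as np

def mle_exponential_rate(delays):
    """For delays t_i ~ alpha * exp(-alpha * t), the MLE is n / sum(t_i)."""
    delays = np.asarray(delays, float)
    return len(delays) / delays.sum()

# Recover a known transmission rate from simulated cascade delays on one edge.
rng = np.random.default_rng(1)
true_rate = 2.0
delays = rng.exponential(scale=1.0 / true_rate, size=5000)
rate_hat = mle_exponential_rate(delays)
```

The full inference problem couples many such edge likelihoods across cascades into one convex objective, which is what lets the method recover both the edges and their rates jointly.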

ei

PDF Web [BibTex]

Risk-Based Generalizations of f-divergences

García-García, D., von Luxburg, U., Santos-Rodríguez, R.

In pages: 417-424, (Editors: Getoor, L. , T. Scheffer), International Machine Learning Society, Madison, WI, USA, 28th International Conference on Machine Learning (ICML), July 2011 (inproceedings)

Abstract
We derive a generalized notion of f-divergences, called (f,l)-divergences. We show that this generalization enjoys many of the nice properties of f-divergences, although it is a richer family. It also provides alternative definitions of standard divergences in terms of surrogate risks. As a first practical application of this theory, we derive a new estimator for the Kullback-Leibler divergence that we use for clustering sets of vectors.
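
For discrete distributions, the classical f-divergence that this work generalizes reduces to a one-liner (a sketch under the usual assumptions: f convex with f(1) = 0, q strictly positive; names are illustrative):

```python
import numpy as np

def f_divergence(p, q, f):
    """D_f(P || Q) = sum_x q(x) * f(p(x) / q(x))."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(q * f(p / q)))

# Choosing f(t) = t*log(t) recovers the Kullback-Leibler divergence.
kl = f_divergence([0.5, 0.5], [0.25, 0.75], lambda t: t * np.log(t))
```

Other choices of f recover the total variation and chi-squared divergences; the (f,l)-divergences of the paper additionally parametrize the loss l.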

ei

PDF Web [BibTex]

Kernel-based Conditional Independence Test and Application in Causal Discovery

Zhang, K., Peters, J., Janzing, D., Schölkopf, B.

In pages: 804-813, (Editors: FG Cozman and A Pfeffer), AUAI Press, Corvallis, OR, USA, 27th Conference on Uncertainty in Artificial Intelligence (UAI), July 2011 (inproceedings)

Abstract
Conditional independence testing is an important problem, especially in Bayesian network learning and causal discovery. Due to the curse of dimensionality, testing for conditional independence of continuous variables is particularly challenging. We propose a Kernel-based Conditional Independence test (KCI-test), by constructing an appropriate test statistic and deriving its asymptotic distribution under the null hypothesis of conditional independence. The proposed method is computationally efficient and easy to implement. Experimental results show that it outperforms other methods, especially when the conditioning set is large or the sample size is not very large, in which case other methods encounter difficulties.

ei

PDF Web [BibTex]

Approximation Bounds for Inference using Cooperative Cut

Jegelka, S., Bilmes, J.

In pages: 577-584, (Editors: Getoor, L. , T. Scheffer), International Machine Learning Society, Madison, WI, USA, 28th International Conference on Machine Learning (ICML), July 2011 (inproceedings)

Abstract
We analyze a family of probability distributions that are characterized by an embedded combinatorial structure. This family includes models having arbitrary treewidth and arbitrary sized factors. Unlike general models with such freedom, where the “most probable explanation” (MPE) problem is inapproximable, the combinatorial structure within our model, in particular the indirect use of submodularity, leads to several MPE algorithms that all have approximation guarantees.

ei

PDF Web [BibTex]

Multi-label cooperative cuts

Jegelka, S., Bilmes, J.

In pages: 1-4, CVPR Workshop on Inference in Graphical Models with Structured Potentials, June 2011 (inproceedings)

Abstract
Recently, a family of global, non-submodular energy functions has been proposed that is expressed as coupling edges in a graph cut. This formulation provides a rich modelling framework and also leads to efficient approximate inference algorithms. So far, the results addressed binary random variables. Here, we extend these results to the multi-label case, and combine edge coupling with move-making algorithms.

ei

PDF Web [BibTex]

Submodularity beyond submodular energies: coupling edges in graph cuts

Jegelka, S., Bilmes, J.

In pages: 1897-1904, IEEE, Piscataway, NJ, USA, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2011 (inproceedings)

Abstract
We propose a new family of non-submodular global energy functions that still use submodularity internally to couple edges in a graph cut. We show it is possible to develop an efficient approximation algorithm that, thanks to the internal submodularity, can use standard graph cuts as a subroutine. We demonstrate the advantages of edge coupling in a natural setting, namely image segmentation. In particular, for fine-structured objects and objects with shading variation, our structured edge coupling leads to significant improvements over standard approaches.

ei

PDF Web DOI [BibTex]

Closing the sensorimotor loop: haptic feedback facilitates decoding of motor imagery

Gomez Rodriguez, M., Peters, J., Hill, J., Schölkopf, B., Gharabaghi, A., Grosse-Wentrup, M.

Journal of Neural Engineering, 8(3):1-12, June 2011 (article)

Abstract
The combination of brain–computer interfaces (BCIs) with robot-assisted physical therapy constitutes a promising approach to neurorehabilitation of patients with severe hemiparetic syndromes caused by cerebrovascular brain damage (e.g. stroke) and other neurological conditions. In such a scenario, a key aspect is how to reestablish the disrupted sensorimotor feedback loop. However, to date it is an open question how artificially closing the sensorimotor feedback loop influences the decoding performance of a BCI. In this paper, we answer this issue by studying six healthy subjects and two stroke patients. We present empirical evidence that haptic feedback, provided by a seven degrees of freedom robotic arm, facilitates online decoding of arm movement intention. The results support the feasibility of future rehabilitative treatments based on the combination of robot-assisted physical therapy with BCIs.

ei

PDF PDF DOI [BibTex]