2009


Constructing Sparse Kernel Machines Using Attractors

Lee, D., Jung, K., Lee, J.

IEEE Transactions on Neural Networks, 20(4):721-729, April 2009 (article)

Abstract
In this brief, a novel method that constructs a sparse kernel machine is proposed. The proposed method generates attractors as sparse solutions from a built-in kernel machine via a dynamical system framework. By readjusting the corresponding coefficients and bias terms, a sparse kernel machine that approximates a conventional kernel machine is constructed. The simulation results show that the constructed sparse kernel machine improves the efficiency of the testing phase while maintaining a comparable test error.

ei

Web DOI [BibTex]

Optimal construction of k-nearest-neighbor graphs for identifying noisy clusters

Maier, M., Hein, M., von Luxburg, U.

Theoretical Computer Science, 410(19):1749-1764, April 2009 (article)

Abstract
We study clustering algorithms based on neighborhood graphs on a random sample of data points. The question we ask is how such a graph should be constructed in order to obtain optimal clustering results. Which type of neighborhood graph should one choose, mutual k-nearest-neighbor or symmetric k-nearest-neighbor? What is the optimal parameter k? In our setting, clusters are defined as connected components of the t-level set of the underlying probability distribution. Clusters are said to be identified in the neighborhood graph if connected components in the graph correspond to the true underlying clusters. Using techniques from random geometric graph theory, we prove bounds on the probability that clusters are identified successfully, both in a noise-free and in a noisy setting. Those bounds lead to several conclusions. First, k has to be chosen surprisingly high (of the order n rather than of the order log n) to maximize the probability of cluster identification. Secondly, the major difference between the mutual and the symmetric k-nearest-neighbor graph occurs when one attempts to detect the most significant cluster only.
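
The construction compared in this abstract is easy to make concrete. The sketch below (an illustration with my own function names and toy data, not the authors' code) builds both the symmetric and the mutual k-nearest-neighbor graph on a sample and counts connected components, which is the cluster-identification criterion used in the paper.

# Sketch: symmetric vs. mutual k-NN graphs and their connected components.
# Illustrative only; parameter names and the toy data are assumptions.
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def knn_graphs(X, k):
    """Return adjacency matrices of the symmetric and mutual k-NN graphs."""
    D = cdist(X, X)
    np.fill_diagonal(D, np.inf)
    nn = np.argsort(D, axis=1)[:, :k]          # k nearest neighbors of each point
    A = np.zeros_like(D, dtype=bool)           # A[i, j] = 1 iff j is a neighbor of i
    rows = np.repeat(np.arange(len(X)), k)
    A[rows, nn.ravel()] = True
    symmetric = A | A.T                        # edge if i->j OR j->i
    mutual = A & A.T                           # edge if i->j AND j->i
    return symmetric, mutual

rng = np.random.default_rng(0)
# two well-separated Gaussian blobs as toy "clusters"
X = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(3, 0.3, (100, 2))])
for name, G in zip(["symmetric", "mutual"], knn_graphs(X, k=10)):
    n_comp, _ = connected_components(csr_matrix(G), directed=False)
    print(name, "k-NN graph:", n_comp, "connected components")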

ei

PDF PDF DOI [BibTex]


Overlap and refractory effects in a Brain-Computer Interface speller based on the visual P300 Event-Related Potential

Martens, S., Hill, N., Farquhar, J., Schölkopf, B.

Journal of Neural Engineering, 6(2):1-9, April 2009 (article)

Abstract
We reveal the presence of refractory and overlap effects in the event-related potentials in visual P300 speller datasets, and we show their negative impact on the performance of the system. This finding has important implications for how to encode the letters that can be selected for communication. However, we show that such effects are dependent on stimulus parameters: an alternative stimulus type based on apparent motion suffers less from the refractory effects and leads to an improved letter prediction performance.

ei

PDF DOI [BibTex]


Kernel Methods in Computer Vision: Object Localization, Clustering, and Taxonomy Discovery

Blaschko, MB.

Biologische Kybernetik, Technische Universität Berlin, Berlin, Germany, March 2009 (phdthesis)

ei

PDF PDF [BibTex]

Nearest Neighbor Clustering: A Baseline Method for Consistent Clustering with Arbitrary Objective Functions

Bubeck, S., von Luxburg, U.

Journal of Machine Learning Research, 10, pages: 657-698, March 2009 (article)

Abstract
Clustering is often formulated as a discrete optimization problem. The objective is to find, among all partitions of the data set, the best one according to some quality measure. However, in the statistical setting where we assume that the finite data set has been sampled from some underlying space, the goal is not to find the best partition of the given sample, but to approximate the true partition of the underlying space. We argue that the discrete optimization approach usually does not achieve this goal, and instead can lead to inconsistency. We construct examples which provably have this behavior. As in the case of supervised learning, the cure is to restrict the size of the function classes under consideration. For appropriate “small” function classes we can prove very general consistency theorems for clustering optimization schemes. As one particular algorithm for clustering with a restricted function space we introduce “nearest neighbor clustering”. Similar to the k-nearest neighbor classifier in supervised learning, this algorithm can be seen as a general baseline algorithm to minimize arbitrary clustering objective functions. We prove that it is statistically consistent for all commonly used clustering objective functions.
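
The baseline algorithm itself can be sketched in a few lines. The code below is an illustrative rendition under assumed names: it draws m seed points, ties every point to its nearest seed, and exhaustively searches over labelings of the seeds for the one minimizing a user-supplied clustering objective (within-cluster sum of squares is used here purely as an example).

# Sketch of nearest neighbor clustering: optimize an arbitrary clustering
# objective over the (small) family of partitions induced by labeling m seeds.
# Illustrative only; names and the example objective are assumptions.
import itertools
import numpy as np

def within_cluster_ss(X, labels):
    """Example objective: within-cluster sum of squares (lower is better)."""
    return sum(((X[labels == c] - X[labels == c].mean(axis=0)) ** 2).sum()
               for c in np.unique(labels))

def nn_clustering(X, n_clusters, m=8, objective=within_cluster_ss, seed=0):
    rng = np.random.default_rng(seed)
    seeds = rng.choice(len(X), size=m, replace=False)
    # each point is tied to its nearest seed; labeling the seeds labels everyone
    d = ((X[:, None, :] - X[seeds][None, :, :]) ** 2).sum(-1)
    nearest_seed = d.argmin(axis=1)
    best_labels, best_val = None, np.inf
    for seed_labels in itertools.product(range(n_clusters), repeat=m):
        labels = np.asarray(seed_labels)[nearest_seed]
        if len(np.unique(labels)) < n_clusters:
            continue                      # skip degenerate labelings
        val = objective(X, labels)
        if val < best_val:
            best_val, best_labels = val, labels
    return best_labels, best_val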

ei

PDF Web [BibTex]


Protein Functional Class Prediction With a Combined Graph

Shin, H., Tsuda, K., Schölkopf, B.

Expert Systems with Applications, 36(2):3284-3292, March 2009 (article)

Abstract
In bioinformatics, there exist multiple descriptions of graphs for the same set of genes or proteins. For instance, in yeast systems, graph edges can represent different relationships such as protein–protein interactions, genetic interactions, or co-participation in a protein complex, etc. Relying on similarities between nodes, each graph can be used independently for prediction of protein function. However, since different graphs contain partly independent and partly complementary information about the problem at hand, one can enhance the total information extracted by combining all graphs. In this paper, we propose a method for integrating multiple graphs within a framework of semi-supervised learning. The method alternates between minimizing the objective function with respect to network output and with respect to combining weights. We apply the method to the task of protein functional class prediction in yeast. The proposed method performs significantly better than the same algorithm trained on any single graph.

ei

Web DOI [BibTex]

Gaussian Process Dynamic Programming

Deisenroth, M., Rasmussen, C., Peters, J.

Neurocomputing, 72(7-9):1508-1524, March 2009 (article)

Abstract
Reinforcement learning (RL) and optimal control of systems with continuous states and actions require approximation techniques in most interesting cases. In this article, we introduce Gaussian process dynamic programming (GPDP), an approximate value-function based RL algorithm. We consider both a classic optimal control problem, where problem-specific prior knowledge is available, and a classic RL problem, where only very general priors can be used. For the classic optimal control problem, GPDP models the unknown value functions with Gaussian processes and generalizes dynamic programming to continuous-valued states and actions. For the RL problem, GPDP starts from a given initial state and explores the state space using Bayesian active learning. To design a fast learner, available data has to be used efficiently. Hence, we propose to learn probabilistic models of the a priori unknown transition dynamics and the value functions on the fly. In both cases, we successfully apply the resulting continuous-valued controllers to the under-actuated pendulum swing up and analyze the performances of the suggested algorithms. It turns out that GPDP uses data very efficiently and can be applied to problems where classic dynamic programming would be cumbersome.
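
A minimal sketch may help fix the idea for the optimal-control setting. The code below is not the authors' GPDP implementation; the toy one-dimensional dynamics, reward, discount factor, and all other parameter values are assumptions. It represents the value function by a Gaussian process over a finite set of support states and performs dynamic-programming backups over a grid of candidate actions.

# Minimal illustration of GP-based dynamic programming on support states.
# Toy deterministic dynamics and reward are assumptions, not the paper's benchmark.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

gamma = 0.9
states = np.linspace(-2.0, 2.0, 21).reshape(-1, 1)   # support states
actions = np.linspace(-0.5, 0.5, 11)                  # candidate actions

def dynamics(x, a):          # assumed toy transition
    return np.clip(x + a, -2.0, 2.0)

def reward(x, a):            # assumed toy cost: drive the state to 0
    return -(x ** 2) - 0.1 * a ** 2

V = np.zeros(len(states))
for _ in range(50):          # value iteration with a GP value-function model
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5),
                                  alpha=1e-6, optimizer=None)  # fixed hyperparameters
    gp.fit(states, V)
    Q = np.stack([reward(states[:, 0], a) +
                  gamma * gp.predict(dynamics(states, a))
                  for a in actions], axis=1)
    V = Q.max(axis=1)

policy = actions[Q.argmax(axis=1)]   # greedy action at each support state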

ei

PDF PDF DOI [BibTex]

Towards quantitative PET/MRI: a review of MR-based attenuation correction techniques

Hofmann, M., Pichler, B., Schölkopf, B., Beyer, T.

European Journal of Nuclear Medicine and Molecular Imaging, 36(Supplement 1):93-104, March 2009 (article)

Abstract
Introduction: Positron emission tomography (PET) is a fully quantitative technology for imaging metabolic pathways and dynamic processes in vivo. Attenuation correction of raw PET data is a prerequisite for quantification and is typically based on separate transmission measurements. In PET/CT, however, attenuation correction is performed routinely based on the available CT transmission data. Objective: Recently, combined PET/magnetic resonance (MR) has been proposed as a viable alternative to PET/CT. Current concepts of PET/MRI do not include CT-like transmission sources and, therefore, alternative methods of PET attenuation correction must be found. This article reviews existing approaches to MR-based attenuation correction (MR-AC). Most groups have proposed MR-AC algorithms for brain PET studies and more recently also for torso PET/MR imaging. Most MR-AC strategies require the use of complementary MR and transmission images, or morphology templates generated from transmission images. We review and discuss these algorithms and point out challenges for using MR-AC in clinical routine. Discussion: MR-AC is work-in-progress with potentially promising results from a template-based approach applicable to both brain and torso imaging. While efforts are ongoing in making clinically viable MR-AC fully automatic, further studies are required to realize the potential benefits of MR-based motion compensation and partial volume correction of the PET data.

ei

PDF DOI [BibTex]

Generating Spike Trains with Specified Correlation Coefficients

Macke, J., Berens, P., Ecker, A., Tolias, A., Bethge, M.

Neural Computation, 21(2):397-423, February 2009 (article)

Abstract
Spike trains recorded from populations of neurons can exhibit substantial pairwise correlations between neurons and rich temporal structure. Thus, for the realistic simulation and analysis of neural systems, it is essential to have efficient methods for generating artificial spike trains with specified correlation structure. Here we show how correlated binary spike trains can be simulated by means of a latent multivariate gaussian model. Sampling from the model is computationally very efficient and, in particular, feasible even for large populations of neurons. The entropy of the model is close to the theoretical maximum for a wide range of parameters. In addition, this framework naturally extends to correlations over time and offers an elegant way to model correlated neural spike counts with arbitrary marginal distributions.
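
The sampling scheme is straightforward to illustrate. The snippet below is a hedged sketch with my own names: binary spike trains are obtained by thresholding samples from a latent multivariate Gaussian. The latent mean and covariance are set by hand here; calibrating them so that the binary variables hit specified means and pairwise correlations is the step worked out in the paper.

# Sketch of the dichotomized-Gaussian sampler: threshold a latent Gaussian
# to obtain correlated binary spike trains. Latent parameters are hand-set
# for illustration rather than calibrated to target spike statistics.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n_neurons, n_bins = 4, 10000
latent_mean = norm.ppf(0.2) * np.ones(n_neurons)       # ~20% firing probability per bin
latent_cov = 0.3 * np.ones((n_neurons, n_neurons))     # positive latent correlations
np.fill_diagonal(latent_cov, 1.0)

z = rng.multivariate_normal(latent_mean, latent_cov, size=n_bins)
spikes = (z > 0).astype(int)                           # n_bins x n_neurons binary array

print("empirical firing rates:", spikes.mean(axis=0))
print("empirical pairwise correlations:\n", np.corrcoef(spikes.T))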

ei

PDF Web DOI [BibTex]

Automatic detection of preclinical neurodegeneration: Presymptomatic Huntington disease

Klöppel, S., Chu, C., Tan, G., Draganski, B., Johnson, H., Paulsen, J., Kienzle, W., Tabrizi, S., Ashburner, J., Frackowiak, R.

Neurology, 72(5):426-431, February 2009 (article)

Abstract
Background: Treatment of neurodegenerative diseases is likely to be most beneficial in the very early, possibly preclinical stages of degeneration. We explored the usefulness of fully automatic structural MRI classification methods for detecting subtle degenerative change. The availability of a definitive genetic test for Huntington disease (HD) provides an excellent metric for judging the performance of such methods in gene mutation carriers who are free of symptoms. Methods: Using the gray matter segment of MRI scans, this study explored the usefulness of a multivariate support vector machine to automatically identify presymptomatic HD gene mutation carriers (PSCs) in the absence of any a priori information. A multicenter data set of 96 PSCs and 95 age- and sex-matched controls was studied. The PSC group was subclassified into three groups based on time from predicted clinical onset, an estimate that is a function of DNA mutation size and age. Results: Subjects with at least a 33% chance of developing unequivocal signs of HD in 5 years were correctly assigned to the PSC group 69% of the time. Accuracy improved to 83% when regions affected by the disease were selected a priori for analysis. Performance was at chance when the probability of developing symptoms in 5 years was less than 10%. Conclusions: Presymptomatic Huntington disease gene mutation carriers close to estimated diagnostic onset were successfully separated from controls on the basis of single anatomic scans, without additional a priori information. Prior information is required to allow separation when degenerative changes are either subtle or variable.

ei

Web [BibTex]

Enumeration of condition-dependent dense modules in protein interaction networks

Georgii, E., Dietmann, S., Uno, T., Pagel, P., Tsuda, K.

Bioinformatics, 25(7):933-940, February 2009 (article)

Abstract
Motivation: Modern systems biology aims at understanding how the different molecular components of a biological cell interact. Often, cellular functions are performed by complexes consisting of many different proteins. The composition of these complexes may change according to the cellular environment, and one protein may be involved in several different processes. The automatic discovery of functional complexes from protein interaction data is challenging. While previous approaches use approximations to extract dense modules, our approach exactly solves the problem of dense module enumeration. Furthermore, constraints from additional information sources such as gene expression and phenotype data can be integrated, so we can systematically mine for dense modules with interesting profiles. Results: Given a weighted protein interaction network, our method discovers all protein sets that satisfy a user-defined minimum density threshold. We employ a reverse search strategy, which allows us to exploit the density criterion in an efficient way. Our experiments show that the novel approach is feasible and produces biologically meaningful results. In comparative validation studies using yeast data, the method achieved the best overall prediction performance with respect to confirmed complexes. Moreover, by enhancing the yeast network with phenotypic and phylogenetic profiles and the human network with tissue-specific expression data, we identified condition-dependent complex variants.

ei

Web DOI [BibTex]

Prototype Classification: Insights from Machine Learning

Graf, A., Bousquet, O., Rätsch, G., Schölkopf, B.

Neural Computation, 21(1):272-300, January 2009 (article)

Abstract
We shed light on the discrimination between patterns belonging to two different classes by casting this decoding problem into a generalized prototype framework. The discrimination process is then separated into two stages: a projection stage that reduces the dimensionality of the data by projecting it on a line and a threshold stage where the distributions of the projected patterns of both classes are separated. For this, we extend the popular mean-of-class prototype classification using algorithms from machine learning that satisfy a set of invariance properties. We report a simple yet general approach to express different types of linear classification algorithms in an identical and easy-to-visualize formal framework using generalized prototypes where these prototypes are used to express the normal vector and offset of the hyperplane. We investigate nonmargin classifiers such as the classical prototype classifier, the Fisher classifier, and the relevance vector machine. We then study hard and soft margin classifiers such as the support vector machine and a boosted version of the prototype classifier. Subsequently, we relate mean-of-class prototype classification to other classification algorithms by showing that the prototype classifier is a limit of any soft margin classifier and that boosting a prototype classifier yields the support vector machine. While giving novel insights into classification per se by presenting a common and unified formalism, our generalized prototype framework also provides an efficient visualization and a principled comparison of machine learning classification.
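
As a concrete anchor for the two-stage view, the snippet below (an illustration with assumed names, not the paper's code) implements the classical mean-of-class prototype classifier and reads off the corresponding hyperplane normal and offset, i.e. the projection direction and threshold that the generalized prototype framework works with.

# Mean-of-class prototype classifier written as a projection onto a line
# (normal vector w) followed by a threshold (offset b).
import numpy as np

def fit_prototype_classifier(X, y):
    """y in {-1, +1}. Returns (w, b) with prediction sign(X @ w + b)."""
    mu_pos = X[y == +1].mean(axis=0)
    mu_neg = X[y == -1].mean(axis=0)
    w = mu_pos - mu_neg                                    # projection direction
    b = 0.5 * (mu_neg @ mu_neg - mu_pos @ mu_pos)          # nearest-mean threshold
    return w, b

def predict(X, w, b):
    return np.sign(X @ w + b)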

ei

PDF Web DOI [BibTex]

The DICS repository: module-assisted analysis of disease-related gene lists

Dietmann, S., Georgii, E., Antonov, A., Tsuda, K., Mewes, H.

Bioinformatics, 25(6):830-831, January 2009 (article)

Abstract
The DICS database is a dynamic web repository of computationally predicted functional modules from the human protein–protein interaction network. It provides references to the CORUM, DrugBank, KEGG and Reactome pathway databases. DICS can be accessed for retrieving sets of overlapping modules and protein complexes that are significantly enriched in a gene list, thereby providing valuable information about the functional context.

ei

Web DOI [BibTex]

Large Margin Methods for Part of Speech Tagging

Altun, Y.

In Automatic Speech and Speaker Recognition: Large Margin and Kernel Methods, pages: 141-160, (Editors: Keshet, J. and Bengio, S.), Wiley, Hoboken, NJ, USA, January 2009 (inbook)

ei

Web [BibTex]

Motor Control and Learning in Table Tennis

Mülling, K.

Eberhard Karls Universität Tübingen, Germany, 2009 (diplomathesis)

ei

[BibTex]

Hierarchical Clustering and Density Estimation Based on k-nearest-neighbor graphs

Drewe, P.

Eberhard Karls Universität Tübingen, Germany, 2009 (diplomathesis)

ei

[BibTex]

mGene: accurate SVM-based gene finding with an application to nematode genomes

Schweikert, G., Zien, A., Zeller, G., Behr, J., Dieterich, C., Ong, C., Philips, P., De Bona, F., Hartmann, L., Bohlen, A., Krüger, N., Sonnenburg, S., Rätsch, G.

Genome Research, 19(11):2133-43, 2009 (article)

Abstract
We present a highly accurate gene-prediction system for eukaryotic genomes, called mGene. It combines in an unprecedented manner the flexibility of generalized hidden Markov models (gHMMs) with the predictive power of modern machine learning methods, such as Support Vector Machines (SVMs). Its excellent performance was proved in an objective competition based on the genome of the nematode Caenorhabditis elegans. Considering the average of sensitivity and specificity, the developmental version of mGene exhibited the best prediction performance on nucleotide, exon, and transcript level for ab initio and multiple-genome gene-prediction tasks. The fully developed version shows superior performance in 10 out of 12 evaluation criteria compared with the other participating gene finders, including Fgenesh++ and Augustus. An in-depth analysis of mGene's genome-wide predictions revealed that approximately 2200 predicted genes were not contained in the current genome annotation. Testing a subset of 57 of these genes by RT-PCR and sequencing, we confirmed expression for 24 (42%) of them. mGene missed 300 annotated genes, out of which 205 were unconfirmed. RT-PCR testing of 24 of these genes resulted in a success rate of merely 8%. These findings suggest that even the gene catalog of a well-studied organism such as C. elegans can be substantially improved by mGene's predictions. We also provide gene predictions for the four nematodes C. briggsae, C. brenneri, C. japonica, and C. remanei. Comparing the resulting proteomes among these organisms and to the known protein universe, we identified many species-specific gene inventions. In a quality assessment of several available annotations for these genomes, we find that mGene's predictions are most accurate.

ei

DOI [BibTex]

Learning with Structured Data: Applications to Computer Vision

Nowozin, S.

Technische Universität Berlin, Germany, 2009 (phdthesis)

ei

PDF [BibTex]

Structure and activity of the N-terminal substrate recognition domains in proteasomal ATPases

Djuranovic, S., Hartmann, MD., Habeck, M., Ursinus, A., Zwickl, P., Martin, J., Lupas, AN., Zeth, K.

Molecular Cell, 34(5):580-590, 2009 (article)

Abstract
The proteasome forms the core of the protein quality control system in archaea and eukaryotes and also occurs in one bacterial lineage, the Actinobacteria. Access to its proteolytic compartment is controlled by AAA ATPases, whose N-terminal domains (N domains) are thought to mediate substrate recognition. The N domains of an archaeal proteasomal ATPase, Archaeoglobus fulgidus PAN, and of its actinobacterial homolog, Rhodococcus erythropolis ARC, form hexameric rings, whose subunits consist of an N-terminal coiled coil and a C-terminal OB domain. In ARC-N, the OB domains are duplicated and form separate rings. PAN-N and ARC-N can act as chaperones, preventing the aggregation of heterologous proteins in vitro, and this activity is preserved in various chimeras, even when these include coiled coils and OB domains from unrelated proteins. The structures suggest a molecular mechanism for substrate processing based on concerted radial motions of the coiled coils relative to the OB rings.

ei

DOI [BibTex]

Discussion of: Brownian Distance Covariance

Gretton, A., Fukumizu, K., Sriperumbudur, B.

The Annals of Applied Statistics, 3(4):1285-1294, 2009 (article)

ei

[BibTex]

Covariate shift and local learning by distribution matching

Gretton, A., Smola, A., Huang, J., Schmittfull, M., Borgwardt, K., Schölkopf, B.

In Dataset Shift in Machine Learning, pages: 131-160, (Editors: Quiñonero-Candela, J., Sugiyama, M., Schwaighofer, A. and Lawrence, N. D.), MIT Press, Cambridge, MA, USA, 2009 (inbook)

Abstract
Given sets of observations of training and test data, we consider the problem of re-weighting the training data such that its distribution more closely matches that of the test data. We achieve this goal by matching covariate distributions between training and test sets in a high dimensional feature space (specifically, a reproducing kernel Hilbert space). This approach does not require distribution estimation. Instead, the sample weights are obtained by a simple quadratic programming procedure. We provide a uniform convergence bound on the distance between the reweighted training feature mean and the test feature mean, a transductive bound on the expected loss of an algorithm trained on the reweighted data, and a connection to single class SVMs. While our method is designed to deal with the case of simple covariate shift (in the sense of Chapter ??), we have also found benefits for sample selection bias on the labels. Our correction procedure yields its greatest and most consistent advantages when the learning algorithm returns a classifier/regressor that is “simpler” than the data might suggest.
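
A compact sketch of the reweighting step may be useful. The code below is an illustration under assumptions of my own (a Gaussian kernel, scipy's generic SLSQP solver in place of a dedicated QP solver, and hypothetical bound B and tolerance eps): it chooses nonnegative training weights so that the weighted training mean in the feature space matches the test mean.

# Sketch of kernel mean matching: re-weight training points so that the
# weighted training feature mean matches the test feature mean in an RKHS.
# Kernel choice, solver, B and eps are my choices for illustration only.
import numpy as np
from scipy.optimize import minimize

def gaussian_kernel(A, C, sigma=1.0):
    d2 = ((A[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def kmm_weights(X_tr, X_te, sigma=1.0, B=10.0, eps=0.01):
    n_tr, n_te = len(X_tr), len(X_te)
    K = gaussian_kernel(X_tr, X_tr, sigma)
    kappa = (n_tr / n_te) * gaussian_kernel(X_tr, X_te, sigma).sum(axis=1)
    obj = lambda b: 0.5 * b @ K @ b - kappa @ b      # quadratic objective
    grad = lambda b: K @ b - kappa
    cons = [  # keep the weights summing approximately to n_tr
        {"type": "ineq", "fun": lambda b: eps * n_tr - (b.sum() - n_tr)},
        {"type": "ineq", "fun": lambda b: eps * n_tr + (b.sum() - n_tr)},
    ]
    res = minimize(obj, x0=np.ones(n_tr), jac=grad, method="SLSQP",
                   bounds=[(0.0, B)] * n_tr, constraints=cons)
    return res.x   # importance weights for the training points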

ei

PDF Web [BibTex]

From Differential Equations to Differential Geometry: Aspects of Regularisation in Machine Learning

Steinke, F.

Universität des Saarlandes, Saarbrücken, Germany, 2009 (phdthesis)

ei

PDF [BibTex]


Non-linear System Identification: Visual Saliency Inferred from Eye-Movement Data

Wichmann, F., Kienzle, W., Schölkopf, B., Franz, M.

Journal of Vision, 9(8):article 32, 2009 (article)

Abstract
For simple visual patterns under the experimenter's control we impose which information, or features, an observer can use to solve a given perceptual task. For natural vision tasks, however, there are typically a multitude of potential features in a given visual scene which the visual system may be exploiting when analyzing it: edges, corners, contours, etc. Here we describe a novel non-linear system identification technique based on modern machine learning methods that allows the critical features an observer uses to be inferred directly from the observer's data. The method neither requires stimuli to be embedded in noise nor is it limited to linear perceptive fields (classification images). We demonstrate our technique by deriving the critical image features observers fixate in natural scenes (bottom-up visual saliency). Unlike previous studies where the relevant structure is determined manually—e.g. by selecting Gabors as visual filters—we do not make any assumptions in this regard, but numerically infer their number and properties from the eye-movement data. We show that center-surround patterns emerge as the optimal solution for predicting saccade targets from local image structure. The resulting model, a one-layer feed-forward network with contrast gain-control, is surprisingly simple compared to previously suggested saliency models. Nevertheless, our model is equally predictive. Furthermore, our findings are consistent with neurophysiological hardware in the superior colliculus. Bottom-up visual saliency may thus not be computed cortically as has been thought previously.

ei

Web DOI [BibTex]


mGene.web: a web service for accurate computational gene finding

Schweikert, G., Behr, J., Zien, A., Zeller, G., Ong, C., Sonnenburg, S., Rätsch, G.

Nucleic Acids Research, 37, pages: W312-6, 2009 (article)

Abstract
We describe mGene.web, a web service for the genome-wide prediction of protein coding genes from eukaryotic DNA sequences. It offers pre-trained models for the recognition of gene structures including untranslated regions in an increasing number of organisms. With mGene.web, users have the additional possibility to train the system with their own data for other organisms on the push of a button, a functionality that will greatly accelerate the annotation of newly sequenced genomes. The system is built in a highly modular way, such that individual components of the framework, like the promoter prediction tool or the splice site predictor, can be used autonomously. The underlying gene finding system mGene is based on discriminative machine learning techniques and its high accuracy has been demonstrated in an international competition on nematode genomes. mGene.web is available at http://www.mgene.org/web, it is free of charge and can be used for eukaryotic genomes of small to moderate size (several hundred Mbp).

ei

DOI [BibTex]


2003


Concentration Inequalities for Sub-Additive Functions Using the Entropy Method

Bousquet, O.

Stochastic Inequalities and Applications, 56, pages: 213-247, Progress in Probability, (Editors: Giné, E., C. Houdré and D. Nualart), November 2003 (article)

Abstract
We obtain exponential concentration inequalities for sub-additive functions of independent random variables under weak conditions on the increments of those functions, like the existence of exponential moments for these increments. As a consequence of these general inequalities, we obtain refinements of Talagrand's inequality for empirical processes and new bounds for randomized empirical processes. These results are obtained by further developing the entropy method introduced by Ledoux.
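
For orientation, a representative inequality of the type obtained with the entropy method (stated here from memory as an illustration of the flavour of such results, not quoted from the paper) is Bousquet's refinement of Talagrand's inequality for suprema of empirical processes. Let $X_1,\dots,X_n$ be independent random variables and let $\mathcal{F}$ be a countable class of functions with $\mathbb{E}\,f(X_i)=0$ and $\|f\|_\infty \le 1$. With

$$ Z = \sup_{f\in\mathcal{F}} \sum_{i=1}^n f(X_i), \qquad \sigma^2 \ge \sup_{f\in\mathcal{F}} \operatorname{Var} f(X_i), \qquad v = n\sigma^2 + 2\,\mathbb{E}[Z], $$

one has, for all $x>0$,

$$ \mathbb{P}\Bigl( Z \ge \mathbb{E}[Z] + \sqrt{2vx} + \tfrac{x}{3} \Bigr) \le e^{-x}. $$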

ei

PostScript [BibTex]

Real-Time Face Detection

Kienzle, W.

Biologische Kybernetik, Eberhard-Karls-Universität Tübingen, Tübingen, Germany, October 2003 (diplomathesis)

ei

[BibTex]

Statistical Learning Theory, Capacity and Complexity

Schölkopf, B.

Complexity, 8(4):87-94, July 2003 (article)

Abstract
We give an exposition of the ideas of statistical learning theory, followed by a discussion of how a reinterpretation of the insights of learning theory could potentially also benefit our understanding of a certain notion of complexity.
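
To make the notions of capacity and complexity concrete, the classical VC-style bound (given here in its standard textbook form as an illustration, not quoted from the article) states that, with probability at least $1-\delta$ over a sample of size $n$, every function $f$ from a class of VC dimension $h$ satisfies

$$ R(f) \;\le\; R_{\mathrm{emp}}(f) + \sqrt{\frac{h\left(\ln\frac{2n}{h}+1\right) + \ln\frac{4}{\delta}}{n}}, $$

so the gap between test error $R(f)$ and training error $R_{\mathrm{emp}}(f)$ is controlled by the capacity $h$ relative to the sample size.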

ei

Web DOI [BibTex]


Dealing with large Diagonals in Kernel Matrices

Weston, J., Schölkopf, B., Eskin, E., Leslie, C., Noble, W.

Annals of the Institute of Statistical Mathematics, 55(2):391-408, June 2003 (article)

Abstract
In kernel methods, all the information about the training data is contained in the Gram matrix. If this matrix has large diagonal values, which arises for many types of kernels, then kernel methods do not perform well. We propose and test several methods for dealing with this problem by reducing the dynamic range of the matrix while preserving the positive definiteness of the Hessian of the quadratic programming problem that one has to solve when training a Support Vector Machine, which is a common kernel approach for pattern recognition.
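
As one hedged illustration of this kind of transform (the procedures actually tested in the paper may differ), the snippet below compresses the dynamic range of a Gram matrix with an elementwise sub-polynomial map and then restores positive semi-definiteness by clipping negative eigenvalues.

# Illustration: reduce the dynamic range of a Gram matrix with large diagonal
# entries, then project back onto the positive semi-definite cone. This is a
# sketch of the general idea, not necessarily the exact method of the paper.
import numpy as np

def compress_kernel(K, p=0.5):
    """Elementwise sub-polynomial transform followed by a PSD projection."""
    K_sub = np.sign(K) * np.abs(K) ** p          # shrinks large entries (the diagonal most)
    K_sub = 0.5 * (K_sub + K_sub.T)
    eigval, eigvec = np.linalg.eigh(K_sub)       # clip negative eigenvalues to keep a valid kernel
    return (eigvec * np.clip(eigval, 0.0, None)) @ eigvec.T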

ei

PDF DOI [BibTex]

The em Algorithm for Kernel Matrix Completion with Auxiliary Data

Tsuda, K., Akaho, S., Asai, K.

Journal of Machine Learning Research, 4, pages: 67-81, May 2003 (article)

ei

PDF [BibTex]

Constructing Descriptive and Discriminative Non-linear Features: Rayleigh Coefficients in Kernel Feature Spaces

Mika, S., Rätsch, G., Weston, J., Schölkopf, B., Smola, A., Müller, K.

IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(5):623-628, May 2003 (article)

Abstract
We incorporate prior knowledge to construct nonlinear algorithms for invariant feature extraction and discrimination. Employing a unified framework in terms of a nonlinearized variant of the Rayleigh coefficient, we propose nonlinear generalizations of Fisher's discriminant and oriented PCA using support vector kernel functions. Extensive simulations show the utility of our approach.
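
For the two-class Fisher case, the nonlinearized Rayleigh coefficient leads to the kernel Fisher discriminant, sketched below in its standard form (variable names and the ridge term mu are my own choices; this is not a reproduction of the paper's derivation).

# Sketch of the kernel Fisher discriminant: maximize a Rayleigh coefficient
# alpha^T M alpha / alpha^T N alpha in the kernel feature space.
import numpy as np

def kfd(K, y, mu=1e-3):
    """K: n x n Gram matrix, y in {-1,+1}. Returns expansion coefficients alpha."""
    N = np.zeros_like(K)
    means = {}
    for c in (+1, -1):
        Kc = K[:, y == c]                                    # columns for class c
        l = Kc.shape[1]
        means[c] = Kc.mean(axis=1)
        N += Kc @ (np.eye(l) - np.ones((l, l)) / l) @ Kc.T   # within-class scatter
    diff = means[+1] - means[-1]
    alpha = np.linalg.solve(N + mu * np.eye(len(K)), diff)   # Rayleigh-quotient maximizer
    return alpha   # project a point x via sum_j alpha_j k(x_j, x)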

ei

DOI [BibTex]

Tractable Inference for Probabilistic Data Models

Csato, L., Opper, M., Winther, O.

Complexity, 8(4):64-68, April 2003 (article)

Abstract
We present an approximation technique for probabilistic data models with a large number of hidden variables, based on ideas from statistical physics. We give examples for two nontrivial applications.

ei

PDF GZIP Web [BibTex]

Feature selection and transduction for prediction of molecular bioactivity for drug design

Weston, J., Perez-Cruz, F., Bousquet, O., Chapelle, O., Elisseeff, A., Schölkopf, B.

Bioinformatics, 19(6):764-771, April 2003 (article)

Abstract
Motivation: In drug discovery a key task is to identify characteristics that separate active (binding) compounds from inactive (non-binding) ones. An automated prediction system can help reduce resources necessary to carry out this task. Results: Two methods for prediction of molecular bioactivity for drug design are introduced and shown to perform well in a data set previously studied as part of the KDD (Knowledge Discovery and Data Mining) Cup 2001. The data is characterized by very few positive examples, a very large number of features (describing three-dimensional properties of the molecules) and rather different distributions between training and test data. Two techniques are introduced specifically to tackle these problems: a feature selection method for unbalanced data and a classifier which adapts to the distribution of the unlabeled test data (a so-called transductive method). We show both techniques improve identification performance and in conjunction provide an improvement over using only one of the techniques. Our results suggest the importance of taking into account the characteristics in this data which may also be relevant in other problems of a similar type.

ei

Web [BibTex]


Use of the Zero-Norm with Linear Models and Kernel Methods

Weston, J., Elisseeff, A., Schölkopf, B., Tipping, M.

Journal of Machine Learning Research, 3, pages: 1439-1461, March 2003 (article)

Abstract
We explore the use of the so-called zero-norm of the parameters of linear models in learning. Minimization of such a quantity has many uses in a machine learning context: for variable or feature selection, minimizing training error and ensuring sparsity in solutions. We derive a simple but practical method for achieving these goals and discuss its relationship to existing techniques of minimizing the zero-norm. The method boils down to implementing a simple modification of vanilla SVM, namely via an iterative multiplicative rescaling of the training data. Applications we investigate which aid our discussion include variable and feature selection on biological microarray data, and multicategory classification.
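
The "simple modification of vanilla SVM" mentioned above can be sketched directly. The code below is an illustrative rendition with assumed names, using sklearn's LinearSVC as a stand-in for the SVM solver: after each fit, the data columns are rescaled by the magnitudes of the learned weights, which iteratively drives irrelevant features toward zero.

# Sketch of zero-norm minimization by iterative multiplicative rescaling of
# the training data around a standard linear SVM. Illustration only; the
# solver, iteration count and naming are my assumptions.
import numpy as np
from sklearn.svm import LinearSVC

def l0_approx_svm(X, y, n_iter=20, C=1.0):
    z = np.ones(X.shape[1])                     # per-feature multiplicative scaling
    for _ in range(n_iter):
        clf = LinearSVC(C=C, max_iter=10000).fit(X * z, y)
        z = z * np.abs(clf.coef_.ravel())       # rescale by learned weight magnitudes
    selected = np.flatnonzero(z > 1e-8)         # surviving (non-zero) features
    return z, selected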

ei

PDF PostScript PDF [BibTex]

An Introduction to Variable and Feature Selection

Guyon, I., Elisseeff, A.

Journal of Machine Learning Research, 3, pages: 1157-1182, 2003 (article)

ei

[BibTex]

Extension of the nu-SVM range for classification

Perez-Cruz, F., Weston, J., Herrmann, D., Schölkopf, B.

In Advances in Learning Theory: Methods, Models and Applications, NATO Science Series III: Computer and Systems Sciences, Vol. 190, pages: 179-196, (Editors: J Suykens and G Horvath and S Basu and C Micchelli and J Vandewalle), IOS Press, Amsterdam, 2003 (inbook)

ei

[BibTex]

m-Alternative Forced Choice—Improving the Efficiency of the Method of Constant Stimuli

Jäkel, F.

Biologische Kybernetik, Graduate School for Neural and Behavioural Sciences, Tübingen, 2003 (diplomathesis)

ei

[BibTex]

New Approaches to Statistical Learning Theory

Bousquet, O.

Annals of the Institute of Statistical Mathematics, 55(2):371-389, 2003 (article)

Abstract
We present new tools from probability theory that can be applied to the analysis of learning algorithms. These tools allow us to derive new bounds on the generalization performance of learning algorithms and to propose alternative measures of the complexity of the learning task, which in turn can be used to derive new learning algorithms.

ei

PostScript [BibTex]

An Introduction to Support Vector Machines

Schölkopf, B.

In Recent Advances and Trends in Nonparametric Statistics, pages: 3-17, (Editors: MG Akritas and DN Politis), Elsevier, Amsterdam, The Netherlands, 2003 (inbook)

ei

Web DOI [BibTex]

Statistical Learning and Kernel Methods in Bioinformatics

Schölkopf, B., Guyon, I., Weston, J.

In Artificial Intelligence and Heuristic Methods in Bioinformatics, 183, pages: 1-21, (Editors: P Frasconi and R Shamir), IOS Press, Amsterdam, The Netherlands, 2003 (inbook)

ei

[BibTex]

A Short Introduction to Learning with Kernels

Schölkopf, B., Smola, A.

In Proceedings of the Machine Learning Summer School, Lecture Notes in Artificial Intelligence, Vol. 2600, pages: 41-64, (Editors: S Mendelson and AJ Smola), Springer, Berlin, Heidelberg, Germany, 2003 (inbook)

ei

[BibTex]

Bayesian Kernel Methods

Smola, A., Schölkopf, B.

In Advanced Lectures on Machine Learning, Machine Learning Summer School 2002, Lecture Notes in Computer Science, Vol. 2600, pages: 65-117, (Editors: S Mendelson and AJ Smola), Springer, Berlin, Germany, 2003 (inbook)

ei

DOI [BibTex]

Stability of ensembles of kernel machines

Elisseeff, A., Pontil, M.

In Advances in Learning Theory: Methods, Models and Applications, NATO Science Series III: Computer and Systems Sciences, Vol. 190, pages: 111-124, (Editors: Suykens, J., G. Horvath, S. Basu, C. Micchelli and J. Vandewalle), IOS Press, Netherlands, 2003 (inbook)

ei

[BibTex]


2000


An Introduction to Kernel-Based Learning Algorithms

Müller, K., Mika, S., Rätsch, G., Tsuda, K., Schölkopf, B.

In Handbook of Neural Network Signal Processing, 4, (Editors: Yu Hen Hu and Jenq-Neng Hwang), CRC Press, 2000 (inbook)

ei

[BibTex]


1998


SVMs — a practical consequence of learning theory

Schölkopf, B.

IEEE Intelligent Systems and their Applications, 13(4):18-21, July 1998 (article)

Abstract
My first exposure to Support Vector Machines came this spring when I heard Sue Dumais present impressive results on text categorization using this analysis technique. This issue's collection of essays should help familiarize our readers with this interesting new racehorse in the Machine Learning stable. Bernhard Scholkopf, in an introductory overview, points out that a particular advantage of SVMs over other learning algorithms is that they can be analyzed theoretically using concepts from computational learning theory, and at the same time can achieve good performance when applied to real problems. Examples of these real-world applications are provided by Sue Dumais, who describes the aforementioned text-categorization problem, yielding the best results to date on the Reuters collection, and Edgar Osuna, who presents strong results on application to face detection. Our fourth author, John Platt, gives us a practical guide and a new technique for implementing the algorithm efficiently.

ei

PDF Web DOI [BibTex]

Learning view graphs for robot navigation

Franz, M., Schölkopf, B., Mallot, H., Bülthoff, H.

Autonomous Robots, 5(1):111-125, March 1998 (article)

Abstract
We present a purely vision-based scheme for learning a topological representation of an open environment. The system represents selected places by local views of the surrounding scene, and finds traversable paths between them. The set of recorded views and their connections are combined into a graph model of the environment. To navigate between views connected in the graph, we employ a homing strategy inspired by findings of insect ethology. In robot experiments, we demonstrate that complex visual exploration and navigation tasks can thus be performed without using metric information.
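
The representation described here can be pictured with a small data-structure sketch (my own simplified rendition, not the system's code): places are stored as views, traversable connections become graph edges, and a route between two views is found by breadth-first search before visual homing is used to traverse each edge.

# Minimal sketch of a view graph: nodes are stored local views, edges mark
# view pairs between which the homing behavior can travel.
from collections import deque

class ViewGraph:
    def __init__(self):
        self.views = {}        # view_id -> stored snapshot (e.g., an image descriptor)
        self.edges = {}        # view_id -> set of directly reachable view_ids

    def add_view(self, view_id, snapshot):
        self.views[view_id] = snapshot
        self.edges.setdefault(view_id, set())

    def connect(self, a, b):   # record that homing succeeded between a and b
        self.edges.setdefault(a, set()).add(b)
        self.edges.setdefault(b, set()).add(a)

    def route(self, start, goal):
        """Shortest sequence of views from start to goal (breadth-first search)."""
        queue, seen = deque([[start]]), {start}
        while queue:
            path = queue.popleft()
            if path[-1] == goal:
                return path
            for nxt in self.edges[path[-1]] - seen:
                seen.add(nxt)
                queue.append(path + [nxt])
        return None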

ei

PDF PDF DOI [BibTex]
