

2008


BCPy2000

Hill, N., Schreiner, T., Puzicha, C., Farquhar, J.

Workshop "Machine Learning Open-Source Software" at NIPS, December 2008 (talk)

ei

Web [BibTex]

Logistic Regression for Graph Classification

Shervashidze, N., Tsuda, K.

NIPS Workshop on "Structured Input - Structured Output" (NIPS SISO), December 2008 (talk)

Abstract
In this paper we deal with graph classification. We propose a new algorithm for performing sparse logistic regression on graphs, which is comparable in accuracy to other graph classification methods and additionally produces probabilistic output. Sparsity is required for interpretability, which is often necessary in domains such as bioinformatics or chemoinformatics.
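For readers who want to see the basic ingredient in code, the sketch below shows plain L1-regularized (sparse) logistic regression on fixed-length graph feature vectors using scikit-learn; the random count features stand in for whatever subgraph representation one might use and are not the representation from the talk.

```python
# Minimal sketch: sparse (L1-regularized) logistic regression on graph feature
# vectors. The random features stand in for subgraph/walk counts; they are NOT
# the representation used in the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.poisson(1.0, size=(200, 500)).astype(float)   # e.g. subgraph counts per graph
w_true = np.zeros(500); w_true[:10] = 1.0             # only a few informative features
y = (X @ w_true + rng.normal(0, 1, 200) > 10).astype(int)

clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
clf.fit(X, y)
print("non-zero coefficients:", np.count_nonzero(clf.coef_))
print("class probabilities of first graph:", clf.predict_proba(X[:1]))
```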

ei

Web [BibTex]

New Projected Quasi-Newton Methods with Applications

Sra, S.

Microsoft Research Tech-talk, December 2008 (talk)

Abstract
Box-constrained convex optimization problems are central to several applications in a variety of fields such as statistics, psychometrics, signal processing, medical imaging, and machine learning. Two fundamental examples are the non-negative least squares (NNLS) problem and the non-negative Kullback-Leibler (NNKL) divergence minimization problem. The non-negativity constraints are usually based on an underlying physical restriction: e.g., in applications in astronomy, tomography, statistical estimation, or image restoration, the underlying parameters represent physical quantities such as concentration, weight, intensity, or frequency counts and are therefore only interpretable with non-negative values. Several modern optimization methods can be inefficient for simple problems such as NNLS and NNKL because they are designed to handle far more general and complex problems. In this work we develop two simple quasi-Newton methods for solving box-constrained (differentiable) convex optimization problems that utilize the well-known BFGS and limited-memory BFGS updates. We position our method between projected gradient (Rosen, 1960) and projected Newton (Bertsekas, 1982) methods, and prove its convergence under a simple Armijo step-size rule. We illustrate our method with applications to image deblurring, Positron Emission Tomography (PET) image reconstruction, and Non-negative Matrix Approximation (NMA). On medium-sized data we observe performance competitive with established procedures, while for larger data the results are even better.
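As a rough illustration of the building blocks named above (projection onto the box and an Armijo step-size rule), here is a minimal projected-gradient solver for NNLS in NumPy; it deliberately omits the BFGS/L-BFGS updates that distinguish the talk's method, and all parameter values are arbitrary.

```python
# Minimal sketch of a projected-gradient method with an Armijo backtracking rule
# for the non-negative least squares (NNLS) problem min_{x>=0} 0.5*||Ax - b||^2.
import numpy as np

def nnls_projected_gradient(A, b, iters=200, beta=0.5, sigma=1e-4):
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)                   # gradient of the objective
        f0 = 0.5 * np.sum((A @ x - b) ** 2)
        t = 1.0
        while True:                             # Armijo backtracking on the projected step
            x_new = np.maximum(x - t * g, 0.0)  # projection onto the box x >= 0
            f_new = 0.5 * np.sum((A @ x_new - b) ** 2)
            if f_new <= f0 + sigma * g @ (x_new - x) or t < 1e-12:
                break
            t *= beta
        x = x_new
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 20))
b = A @ np.abs(rng.normal(size=20))             # a consistent non-negative solution exists
x = nnls_projected_gradient(A, b)
print("residual:", np.linalg.norm(A @ x - b))
```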

ei

PDF [BibTex]

Variational Bayesian Model Selection in Linear Gaussian State-Space based Models

Chiappa, S.

International Workshop on Flexible Modelling: Smoothing and Robustness (FMSR 2008), 2008, pages: 1, November 2008 (poster)

ei

Web [BibTex]

MR-Based PET Attenuation Correction: Initial Results for Whole Body

Hofmann, M., Steinke, F., Aschoff, P., Lichy, M., Brady, M., Schölkopf, B., Pichler, B.

Medical Imaging Conference, October 2008 (talk)

ei

[BibTex]

Nonparametric Independence Tests: Space Partitioning and Kernel Approaches

Gretton, A., Györfi, L.

19th International Conference on Algorithmic Learning Theory (ALT08), October 2008 (talk)

ei

PDF Web [BibTex]

Towards the neural basis of the flash-lag effect

Ecker, A., Berens, P., Hoenselaar, A., Subramaniyan, M., Tolias, A., Bethge, M.

International Workshop on Aspects of Adaptive Cortex Dynamics, 2008, pages: 1, September 2008 (poster)

ei

PDF [BibTex]

Reinforcement Learning for Motor Primitives

Kober, J.

Biologische Kybernetik, University of Stuttgart, Stuttgart, Germany, August 2008 (diplomathesis)

ei

PDF [BibTex]

Asymmetries of Time Series under Inverting their Direction

Peters, J.

Biologische Kybernetik, University of Heidelberg, August 2008 (diplomathesis)

ei

PDF [BibTex]

mGene: A Novel Discriminative Gene Finder

Schweikert, G., Zeller, G., Zien, A., Behr, J., Sonnenburg, S., Philips, P., Ong, C., Rätsch, G.

Worm Genomics and Systems Biology meeting, July 2008 (talk)

ei

[BibTex]

Learning an Interest Operator from Human Eye Movements

Kienzle, W.

Biologische Kybernetik, Eberhard-Karls-Universität Tübingen, Tübingen, Germany, July 2008 (phdthesis)

ei

[BibTex]

Policy Learning: A Unified Perspective With Applications In Robotics

Peters, J., Kober, J., Nguyen-Tuong, D.

8th European Workshop on Reinforcement Learning for Robotics (EWRL 2008), 8, pages: 10, July 2008 (poster)

Abstract
Policy learning approaches are among the best-suited methods for high-dimensional, continuous control systems such as anthropomorphic robot arms and humanoid robots. In this paper, we make two contributions: first, we present a unified perspective which allows us to derive several policy learning algorithms from a common point of view, i.e., policy gradient algorithms, natural-gradient algorithms, and EM-like policy learning. Second, we present several applications to both robot motor primitive learning and robot control in task space. Results from both simulation and several different real robots are shown.
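The sketch below illustrates just one member of the family mentioned in the abstract, a vanilla (REINFORCE-style) policy gradient update for a linear-Gaussian policy on a toy one-step task; the task, step sizes, and batch size are invented for illustration, and the natural-gradient and EM-like variants are not implemented.

```python
# Minimal sketch of a "vanilla" policy gradient (REINFORCE) update for a
# Gaussian policy on a toy 1-D reaching task. NOT the algorithms of the paper.
import numpy as np

rng = np.random.default_rng(0)
theta, sigma, target = 0.0, 0.5, 2.0           # policy mean, fixed exploration noise, goal

for iteration in range(200):
    grads, rewards = [], []
    for _ in range(20):                        # roll out a batch of one-step episodes
        a = theta + sigma * rng.normal()       # sample action from the Gaussian policy
        r = -(a - target) ** 2                 # reward: negative squared distance to target
        grads.append((a - theta) / sigma**2)   # d/dtheta log N(a | theta, sigma^2)
        rewards.append(r)
    b = np.mean(rewards)                       # baseline to reduce variance
    grad = np.mean([g * (r - b) for g, r in zip(grads, rewards)])
    theta += 0.05 * grad                       # gradient ascent on expected reward

print("learned mean action:", theta, "target:", target)
```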

ei

PDF [BibTex]

Discovering Common Sequence Variation in Arabidopsis thaliana

Rätsch, G., Clark, R., Schweikert, G., Toomajian, C., Ossowski, S., Zeller, G., Shinn, P., Warthman, N., Hu, T., Fu, G., Hinds, D., Cheng, H., Frazer, K., Huson, D., Schölkopf, B., Nordborg, M., Ecker, J., Weigel, D., Schneeberger, K., Bohlen, A.

16th Annual International Conference Intelligent Systems for Molecular Biology (ISMB), July 2008 (talk)

ei

Web [BibTex]

Coding Theory in Brain-Computer Interfaces

Martens, SMM.

Soria Summerschool on Computational Mathematics "Algebraic Coding Theory" (S3CM), July 2008 (talk)

ei

Web [BibTex]

Motor Skill Learning for Cognitive Robotics

Peters, J.

6th International Cognitive Robotics Workshop (CogRob), July 2008 (talk)

Abstract
Autonomous robots that can assist humans in situations of daily life have been a long-standing vision of robotics, artificial intelligence, and cognitive sciences. A first step towards this goal is to create robots that can learn tasks triggered by environmental context or higher-level instruction. However, learning techniques have yet to live up to this promise, as only few methods manage to scale to high-dimensional manipulators or humanoid robots. In this tutorial, we give a general overview of motor skill learning for cognitive robotics, using research at ATR, USC, CMU and Max-Planck to illustrate the problems in motor skill learning. To do so, we discuss task-appropriate representations and algorithms for learning robot motor skills. Among the topics are learning basic movements or motor primitives by imitation and reinforcement learning, learning rhythmic and discrete movements, fast regression methods for learning inverse dynamics, and setups for learning task-space policies. Examples on various robots, e.g., SARCOS DB, the SARCOS Master Arm, BDI Little Dog and a Barrett WAM, are shown and include Ball-in-a-Cup, T-Ball, Juggling, Devil-Sticking, Operational Space Control and many others.

ei

Web [BibTex]

Reinforcement Learning of Perceptual Coupling for Motor Primitives

Kober, J., Peters, J.

8th European Workshop on Reinforcement Learning for Robotics (EWRL 2008), 8, pages: 16, July 2008 (poster)

Abstract
Reinforcement learning is a natural choice for the learning of complex motor tasks by reward-related self-improvement. As the space of movements is high-dimensional and continuous, a policy parametrization is needed which can be used in this context. Traditional motor primitive approaches deal largely with open-loop policies which can only deal with small perturbations. In this paper, we present a new type of motor primitive policies which serve as closed-loop policies together with an appropriate learning algorithm. Our new motor primitives are an augmented version of the dynamic systems motor primitives that incorporates perceptual coupling to external variables. We show that these motor primitives can perform complex tasks such as the Ball-in-a-Cup or Kendama task even with large variances in the initial conditions, where a human would hardly be able to learn this task. We initialize the open-loop policies by imitation learning and the perceptual coupling with a handcrafted solution. We first improve the open-loop policies and subsequently the perceptual coupling using a novel reinforcement learning method which is particularly well suited for motor primitives.
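For orientation, the following sketch integrates a plain discrete dynamic-systems motor primitive (DMP) of the kind the paper builds on; the perceptual-coupling term and the reinforcement learning of its parameters, which are the paper's actual contribution, are omitted, and all gains and basis settings are generic textbook choices.

```python
# Minimal sketch of a discrete dynamic-systems motor primitive (DMP) rollout.
# The perceptual coupling and its learning, described in the paper, are omitted.
import numpy as np

def dmp_rollout(y0=0.0, goal=1.0, weights=None, n_basis=10, tau=1.0,
                alpha_z=25.0, beta_z=6.25, alpha_s=8.0, dt=0.001, T=1.0):
    if weights is None:
        weights = np.zeros(n_basis)                        # forcing term initially off
    centers = np.exp(-alpha_s * np.linspace(0, 1, n_basis))  # basis centers in phase space
    widths = n_basis ** 1.5 / centers                       # heuristic basis widths
    y, z, s = y0, 0.0, 1.0
    traj = []
    for _ in range(int(T / dt)):
        psi = np.exp(-widths * (s - centers) ** 2)
        f = (psi @ weights) / (psi.sum() + 1e-10) * s * (goal - y0)  # learnable forcing term
        z += dt / tau * (alpha_z * (beta_z * (goal - y) - z) + f)    # transformation system
        y += dt / tau * z
        s += dt / tau * (-alpha_s * s)                               # canonical system (phase)
        traj.append(y)
    return np.array(traj)

traj = dmp_rollout()
print("final position (should approach the goal 1.0):", traj[-1])
```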

ei

PDF [BibTex]

Painless Embeddings of Distributions: the Function Space View (Part 1)

Fukumizu, K., Gretton, A., Smola, A.

25th International Conference on Machine Learning (ICML), July 2008 (talk)

Abstract
This tutorial will give an introduction to the recent understanding and methodology of the kernel method: dealing with higher-order statistics by painlessly embedding random variables and probability distributions. In the early days of kernel machines research, the "kernel trick" was considered a useful way of constructing nonlinear algorithms from linear ones. More recently, however, it has become clear that a potentially more far-reaching use of kernels is as a linear way of dealing with higher-order statistics by embedding distributions in a suitable reproducing kernel Hilbert space (RKHS). Notably, unlike the straightforward expansion of higher-order moments or the conventional characteristic function approach, the use of kernels or RKHSs provides a painless, tractable way of embedding distributions. This line of reasoning leads naturally to the questions: What does it mean to embed a distribution in an RKHS? When is this embedding injective (and thus, when do different distributions have unique mappings)? What implications are there for learning algorithms that make use of these embeddings? This tutorial aims at answering these questions. There is a great variety of applications in machine learning and computer science which require distribution estimation and/or comparison.
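A minimal numerical illustration of the central object, assuming a Gaussian RBF kernel and toy one-dimensional samples: the empirical mean embedding of a sample and the RKHS distance between two embeddings (the MMD). Bandwidth and sample sizes are arbitrary.

```python
# Minimal sketch of a kernel mean embedding: a sample {x_i} is mapped to
# mu(t) = (1/m) * sum_i k(x_i, t), and two distributions are compared via the
# RKHS distance between their embeddings (the MMD).
import numpy as np

def rbf(X, Y, bw=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bw ** 2))

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(300, 1))     # sample from P
Y = rng.normal(0.5, 1.0, size=(300, 1))     # sample from Q (shifted mean)

# Squared RKHS distance between the empirical mean embeddings of P and Q
mmd2 = rbf(X, X).mean() + rbf(Y, Y).mean() - 2 * rbf(X, Y).mean()
print("biased MMD^2 estimate:", mmd2)

# The embedding itself can be evaluated at any point t:
t = np.array([[0.0]])
print("mu_P(0) =", rbf(X, t).mean(), " mu_Q(0) =", rbf(Y, t).mean())
```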

ei

PDF Web [BibTex]

Reinforcement Learning for Robotics

Peters, J.

8th European Workshop on Reinforcement Learning for Robotics (EWRL), July 2008 (talk)

ei

Web [BibTex]

Flexible Models for Population Spike Trains

Bethge, M., Macke, J., Berens, P., Ecker, A., Tolias, A.

AREADNE 2008: Research in Encoding and Decoding of Neural Ensembles, 2, pages: 52, June 2008 (poster)

ei

PDF [BibTex]

Pairwise Correlations and Multineuronal Firing Patterns in the Primary Visual Cortex of the Awake, Behaving Macaque

Berens, P., Ecker, A., Subramaniyan, M., Macke, J., Hauck, P., Bethge, M., Tolias, A.

AREADNE 2008: Research in Encoding and Decoding of Neural Ensembles, 2, pages: 48, June 2008 (poster)

ei

PDF [BibTex]

Visual saliency re-visited: Center-surround patterns emerge as optimal predictors for human fixation targets

Wichmann, F., Kienzle, W., Schölkopf, B., Franz, M.

Journal of Vision, 8(6):635, 8th Annual Meeting of the Vision Sciences Society (VSS), June 2008 (poster)

Abstract
Humans perceive the world by directing the center of gaze from one location to another via rapid eye movements, called saccades. In the period between saccades the direction of gaze is held fixed for a few hundred milliseconds (fixations). It is primarily during fixations that information enters the visual system. Remarkably, however, after only a few fixations we perceive a coherent, high-resolution scene, despite the visual acuity of the eye quickly decreasing away from the center of gaze: this suggests an effective strategy for selecting saccade targets. Top-down effects, such as the observer's task, thoughts, or intentions, have an effect on saccadic selection. Equally well known is that bottom-up effects (local image structure) influence saccade targeting regardless of top-down effects. However, the question of what the most salient visual features are is still under debate. Here we model the relationship between spatial intensity patterns in natural images and the response of the saccadic system using tools from machine learning. This allows us to identify the most salient image patterns that guide the bottom-up component of the saccadic selection system, which we refer to as perceptive fields. We show that center-surround patterns emerge as the optimal solution to the problem of predicting saccade targets. Using a novel nonlinear system identification technique, we reduce our learned classifier to a one-layer feed-forward network which is surprisingly simple compared to previously suggested models assuming more complex computations such as multi-scale processing, oriented filters and lateral inhibition. Nevertheless, our model is equally predictive and generalizes better to novel image sets. Furthermore, our findings are consistent with neurophysiological hardware in the superior colliculus. Bottom-up visual saliency may thus not be computed cortically as has been thought previously.
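As a hedged illustration of the kind of operator the study identifies, the snippet below applies a difference-of-Gaussians (center-surround) filter to a synthetic image and reports the most salient location; the scales and the test image are placeholders, not the learned perceptive fields.

```python
# Minimal sketch of a center-surround operator: a difference-of-Gaussians (DoG)
# filter whose local maxima could serve as candidate fixation targets.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
image = rng.normal(size=(128, 128))
image[60:70, 60:70] += 3.0                     # a bright patch to attract the operator

center = gaussian_filter(image, sigma=1.5)     # small-scale (center) response
surround = gaussian_filter(image, sigma=6.0)   # larger-scale (surround) response
saliency = center - surround                   # center-surround contrast map

peak = np.unravel_index(np.argmax(saliency), saliency.shape)
print("most salient location:", peak)
```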

ei

Web DOI [BibTex]

Analysis of Pattern Recognition Methods in Classifying BOLD Signals in Monkeys at 7-Tesla

Ku, S., Gretton, A., Macke, J., Tolias, A., Logothetis, N.

AREADNE 2008: Research in Encoding and Decoding of Neural Ensembles, 2, pages: 67, June 2008 (poster)

Abstract
Pattern recognition methods have shown that fMRI data can reveal significant information about brain activity. For example, in the debate over how object categories are represented in the brain, multivariate analysis has been used to provide evidence of distributed encoding schemes. Many follow-up studies have employed different methods to analyze human fMRI data with varying degrees of success. In this study we compare four popular pattern recognition methods: correlation analysis, support vector machines (SVM), linear discriminant analysis and Gaussian naïve Bayes (GNB), using data collected at high field (7T) at higher resolution than in typical fMRI studies. We investigate prediction performance on single trials and for averages across varying numbers of stimulus presentations. The performance of the various algorithms depends on the nature of the brain activity being categorized: for several tasks, many of the methods work well, whereas for others, no method performs above chance level. An important factor in overall classification performance is careful preprocessing of the data, including dimensionality reduction, voxel selection, and outlier elimination.
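The comparison can be sketched with off-the-shelf scikit-learn estimators, as below; the synthetic data stand in for voxel responses, and "correlation analysis" is approximated here by a nearest-centroid rule on standardized features, which may differ from the authors' exact baseline.

```python
# Minimal sketch of a four-way classifier comparison on synthetic "voxel" data.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import NearestCentroid
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 200))                       # 120 trials x 200 voxels
y = rng.integers(0, 2, size=120)
X[y == 1, :20] += 0.6                                 # weak class-dependent signal

models = {
    "correlation-style (nearest centroid)": make_pipeline(StandardScaler(), NearestCentroid()),
    "linear SVM": make_pipeline(StandardScaler(), LinearSVC()),
    "LDA": LinearDiscriminantAnalysis(),
    "Gaussian naive Bayes": GaussianNB(),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()   # 5-fold cross-validated accuracy
    print(f"{name}: {acc:.2f}")
```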

ei

[BibTex]

Thin-Plate Splines Between Riemannian Manifolds

Steinke, F., Hein, M., Schölkopf, B.

Workshop on Geometry and Statistics of Shapes, June 2008 (talk)

Abstract
With the help of differential geometry we describe a framework to define a thin-plate spline-like energy for maps between arbitrary Riemannian manifolds. The so-called Eells energy only depends on the intrinsic geometry of the input and output manifolds, not on their respective representations. The energy can then be used for regression between manifolds; we present results for cases where the outputs are rotations, sets of angles, or points on 3D surfaces. In the future we plan to also target regression where the output is an element of "shape space", understood as a Riemannian manifold. One could also further explore the meaning of the Eells energy when applied to diffeomorphisms between shapes, especially with regard to its potential use as a distance measure between shapes that does not depend on the embedding or the parametrisation of the shapes.
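For context, the classical flat-space special case that the Eells energy generalizes can be run in a few lines with SciPy's thin-plate spline interpolator; the manifold-valued outputs discussed in the talk (rotations, angles, surface points) are not handled by this sketch.

```python
# Minimal sketch of classical Euclidean thin-plate spline regression, the flat
# special case of the manifold-valued setting described in the talk.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 2))                     # input points in the plane
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=50)      # noisy scalar targets

tps = RBFInterpolator(X, y, kernel="thin_plate_spline", smoothing=1e-3)
X_test = np.array([[0.0, 0.0], [0.5, -0.5]])
print("predictions:", tps(X_test))
```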

ei

Web [BibTex]

Learning resolved velocity control

Peters, J.

2008 IEEE International Conference on Robotics and Automation (ICRA), May 2008 (talk)

ei

Web [BibTex]

Causal inference from statistical data

Sun, X.

Biologische Kybernetik, Technische Hochschule Karlsruhe, Karlsruhe, Germany, April 2008 (phdthesis)

ei

Web [BibTex]

Pairwise Correlations and Multineuronal Firing Patterns in Primary Visual Cortex

Berens, P.

Biologische Kybernetik, Eberhard Karls Universität Tübingen, Tübingen, Germany, April 2008 (diplomathesis)

ei

[BibTex]

Bayesian methods for protein structure determination

Habeck, M.

Machine Learning in Structural Bioinformatics, April 2008 (talk)

ei

Web [BibTex]

The role of stimulus correlations for population decoding in the retina

Schwartz, G., Macke, J., Berry, M.

Computational and Systems Neuroscience 2008 (COSYNE 2008), 5, pages: 172, March 2008 (poster)

ei

PDF Web [BibTex]

Development and Application of a Python Scripting Framework for BCI2000

Schreiner, T.

Biologische Kybernetik, Eberhard-Karls-Universität Tübingen, Tübingen, Germany, January 2008 (diplomathesis)

ei

[BibTex]

Efficient and Invariant Regularisation with Application to Computer Graphics

Walder, CJ.

Biologische Kybernetik, University of Queensland, Brisbane, Australia, January 2008 (phdthesis)

Abstract
This thesis develops the theory and practice of reproducing kernel methods. Many functional inverse problems which arise in, for example, machine learning and computer graphics have been treated with practical success using methods based on a reproducing kernel Hilbert space perspective. This perspective is often theoretically convenient, in that many functional analysis problems reduce to linear algebra problems in these spaces. Somewhat more complex is the case of conditionally positive definite kernels, and we provide an introduction to both cases, deriving in a particularly elementary manner some key results for the conditionally positive definite case. A common complaint of the practitioner is the long running time of these kernel-based algorithms. We provide novel ways of alleviating these problems by essentially using a non-standard function basis which yields computational advantages. That said, by doing so we must also forgo the aforementioned theoretical conveniences, and hence we need some additional analysis, which we provide in order to make the approach practicable. We demonstrate that the method leads to state-of-the-art performance on the problem of surface reconstruction from points. We also provide some analysis of kernels invariant to transformations such as translation and dilation, and show that this indicates the value of learning algorithms which use conditionally positive definite kernels. Correspondingly, we provide a few approaches for making such algorithms practicable. We do this either by modifying the kernel, or by directly solving problems with conditionally positive definite kernels, which had previously only been solved with positive definite kernels. We demonstrate the advantage of this approach, in particular by attaining state-of-the-art classification performance with only one free parameter.
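One concrete way to obtain the kind of computational advantage alluded to is sketched below, under the assumption of a compactly supported Wendland kernel so that the kernel matrix becomes sparse; this illustrates the sparsity idea only, not the specific basis or surface-reconstruction pipeline developed in the thesis.

```python
# Minimal sketch: kernel interpolation with a compactly supported (Wendland)
# kernel, so most kernel matrix entries are exactly zero and sparse solvers apply.
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def wendland(r, support=0.3):
    q = np.clip(r / support, 0.0, 1.0)
    return (1 - q) ** 4 * (4 * q + 1)          # Wendland C^2 kernel, zero for r > support

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(500, 2))
y = np.sin(4 * np.pi * X[:, 0]) * np.cos(2 * np.pi * X[:, 1])

r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
K = sparse.csr_matrix(wendland(r))             # sparse kernel matrix
alpha = spsolve(K + 1e-6 * sparse.eye(len(X)), y)
print("non-zeros in kernel matrix: %.1f%%" % (100 * K.nnz / r.size))
print("maximum training fit error:", np.abs(K @ alpha - y).max())
```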

ei

PDF [BibTex]

Ab-initio Elektronentheorie der magnetischen Anisotropie im System FePt mit der Clusterentwicklungsmethode

Subkow, S.

Universität Stuttgart, Stuttgart, 2008 (mastersthesis)

mms

[BibTex]


Magnetism of amorphous and highly anisotropic multilayer systems on flat substrates and nanospheres

Amaladass, E. P.

Universität Stuttgart, Stuttgart, 2008 (phdthesis)

mms

link (url) [BibTex]


Bose-Theorie der Dämpfung der Bewegung einer magnetischen Domänenwand

Hähnel, D.

Universität Stuttgart, Stuttgart, 2008 (mastersthesis)

mms

[BibTex]

GdFe-Multilagen zur Vergrößerung des magnetischen Vortexkerns

Sackmann, V.

Universität Stuttgart, Stuttgart, 2008 (mastersthesis)

mms

[BibTex]

Dissipative Magnetisierungsdynamik: Ein Zugang über die ab-initio Elektronentheorie

Steiauf, D.

Universität Stuttgart, Stuttgart, 2008 (phdthesis)

mms

[BibTex]

Röntgenzirkulardichroische Untersuchungen XMCD an FePt und Ferrit Nanopartikeln

Nolle, D.

Universität Stuttgart, Stuttgart, 2008 (mastersthesis)

mms

[BibTex]

Nanostructured biointerfaces for investigating cellular adhesion and differentiation

Gojak, C.

Universität Heidelberg, Heidelberg, 2008 (mastersthesis)

mms

[BibTex]


2006


Some observations on the pedestal effect or dipper function

Henning, B., Wichmann, F.

Journal of Vision, 6(13):50, 2006 Fall Vision Meeting of the Optical Society of America, December 2006 (poster)

Abstract
The pedestal effect is the large improvement in the detectability of a sinusoidal "signal" grating observed when the signal is added to a masking or "pedestal" grating of the same spatial frequency, orientation, and phase. We measured the pedestal effect in both broadband and notched noise, that is, noise from which a 1.5-octave band centred on the signal frequency had been removed. Although the pedestal effect persists in broadband noise, it almost disappears in the notched noise. Furthermore, the pedestal effect is substantial when either high- or low-pass masking noise is used. We conclude that the pedestal effect in the absence of notched noise results principally from the use of information derived from channels with peak sensitivities at spatial frequencies different from that of the signal and pedestal. The spatial-frequency components of the notched noise above and below the spatial frequency of the signal and pedestal prevent the use of information about changes in contrast carried in channels tuned to spatial frequencies that are very much different from that of the signal and pedestal. Thus the pedestal or dipper effect measured without notched noise is not a characteristic of individual spatial-frequency-tuned channels.

ei

Web DOI [BibTex]

A Kernel Method for the Two-Sample-Problem

Gretton, A., Borgwardt, K., Rasch, M., Schölkopf, B., Smola, A.

20th Annual Conference on Neural Information Processing Systems (NIPS), December 2006 (talk)

Abstract
We propose two statistical tests to determine if two samples are from different distributions. Our test statistic is in both cases the distance between the means of the two samples mapped into a reproducing kernel Hilbert space (RKHS). The first test is based on a large deviation bound for the test statistic, while the second is based on the asymptotic distribution of this statistic. We show that the test statistic can be computed in $O(m^2)$ time. We apply our approach to a variety of problems, including attribute matching for databases using the Hungarian marriage method, where our test performs strongly. We also demonstrate excellent performance when comparing distributions over graphs, for which no alternative tests currently exist.
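A minimal sketch of the test statistic in code, assuming a Gaussian kernel and a permutation estimate of the null distribution in place of the large-deviation and asymptotic thresholds analyzed in the paper; data and bandwidth are toy choices.

```python
# Minimal sketch of a kernel two-sample test: unbiased MMD^2 with a Gaussian
# kernel (O(m^2) per evaluation) plus a permutation null.
import numpy as np

def mmd2_unbiased(X, Y, bw=1.0):
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * bw ** 2))
    m, n = len(X), len(Y)
    Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
    return ((Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
            + (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
            - 2 * Kxy.mean())

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(100, 2))
Y = rng.normal(0.4, 1.0, size=(100, 2))
stat = mmd2_unbiased(X, Y)

Z = np.vstack([X, Y])
null = []
for _ in range(200):                      # permutation null: reshuffle the pooled sample
    idx = rng.permutation(len(Z))
    null.append(mmd2_unbiased(Z[idx[:100]], Z[idx[100:]]))
print("MMD^2 =", stat, " p-value =", np.mean(np.array(null) >= stat))
```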

ei

PDF [BibTex]

Ab-initio gene finding using machine learning

Schweikert, G., Zeller, G., Zien, A., Ong, C., de Bona, F., Sonnenburg, S., Phillips, P., Rätsch, G.

NIPS Workshop on New Problems and Methods in Computational Biology, December 2006 (talk)

ei

Web [BibTex]

Graph boosting for molecular QSAR analysis

Saigo, H., Kadowaki, T., Kudo, T., Tsuda, K.

NIPS Workshop on New Problems and Methods in Computational Biology, December 2006 (talk)

Abstract
We propose a new boosting method that systematically combines graph mining and mathematical programming-based machine learning. Informative and interpretable subgraph features are greedily found by a series of graph mining calls. Due to our mathematical programming formulation, subgraph features and pre-calculated real-valued features are seamlessly integrated. We tested our algorithm on a quantitative structure-activity relationship (QSAR) problem, which is basically a regression problem given a set of chemical compounds. In benchmark experiments, the prediction accuracy of our method compared favorably with the best results reported on each dataset.

ei

Web [BibTex]

Inferring Causal Directions by Evaluating the Complexity of Conditional Distributions

Sun, X., Janzing, D., Schölkopf, B.

NIPS Workshop on Causality and Feature Selection, December 2006 (talk)

Abstract
We propose a new approach to infer the causal structure that has generated the observed statistical dependences among n random variables. The idea is that the factorization of the joint measure of cause and effect into P(cause)P(effect|cause) typically leads to simpler conditionals than non-causal factorizations. To evaluate the complexity of the conditionals we have tried two methods. First, we have compared them to those which maximize the conditional entropy subject to the observed first and second moments, since we consider the latter as the simplest conditionals. Second, we have fitted the data with conditional probability measures being exponents of functions in an RKHS and defined the complexity by a Hilbert-space semi-norm. Such a complexity measure has several properties that are useful for our purpose. We describe some encouraging results with both methods applied to real-world data. Moreover, we have combined constraint-based approaches to causal discovery (i.e., methods using only information on conditional statistical dependences) with our method in order to distinguish between causal hypotheses which are equivalent with respect to the imposed independences. Furthermore, we compare the performance to Bayesian approaches to causal inference.

ei

Web [BibTex]


Learning Optimal EEG Features Across Time, Frequency and Space

Farquhar, J., Hill, J., Schölkopf, B.

NIPS Workshop on Current Trends in Brain-Computer Interfacing, December 2006 (talk)

ei

PDF Web [BibTex]

Semi-Supervised Learning

Zien, A.

Advanced Methods in Sequence Analysis Lectures, November 2006 (talk)

ei

Web [BibTex]

Optimizing Spatial Filters for BCI: Margin- and Evidence-Maximization Approaches

Farquhar, J., Hill, N., Schölkopf, B.

Challenging Brain-Computer Interfaces: MAIA Workshop 2006, pages: 1, November 2006 (poster)

Abstract
We present easy-to-use alternatives to the often-used two-stage Common Spatial Pattern + classifier approach for spatial filtering and classification of Event-Related Desynchronization signals in BCI. We report two algorithms that aim to optimize the spatial filters according to a criterion more directly related to the ability of the algorithms to generalize to unseen data. Both are based upon the idea of treating the spatial filter coefficients as hyperparameters of a kernel or covariance function. We then optimize these hyperparameters directly alongside the normal classifier parameters with respect to our chosen learning objective function. The two objectives considered are margin maximization as used in Support Vector Machines and the evidence maximization framework used in Gaussian Processes. Our experiments assessed generalization error as a function of the number of training points used, on 9 BCI competition data sets and 5 offline motor imagery data sets measured in Tübingen. Both our approaches show consistent improvements relative to the commonly used CSP+linear classifier combination. Strikingly, the improvement is most significant in the higher noise cases, when either few trials are used for training, or with the most poorly performing subjects. This is a reversal of the usual "rich get richer" effect in the development of CSP extensions, which tend to perform best when the signal is strong enough to accurately find their additional parameters. This makes our approach particularly suitable for clinical application where high levels of noise are to be expected.
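For reference, the commonly used baseline mentioned above (CSP spatial filters plus a linear classifier) can be sketched as follows on synthetic stand-in data; this is the baseline, not the margin- or evidence-maximization methods proposed in the poster, and the channel and trial counts are invented.

```python
# Minimal sketch of the CSP + linear classifier baseline: spatial filters from a
# generalized eigendecomposition of class covariances, log-variance features, LDA.
import numpy as np
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 100, 16, 250
X = rng.normal(size=(n_trials, n_channels, n_samples))  # fake EEG trials
y = rng.integers(0, 2, size=n_trials)
X[y == 1, 0] *= 2.0                                     # class 1 has more power in channel 0

C0 = np.mean([x @ x.T for x in X[y == 0]], axis=0)      # class-wise spatial covariances
C1 = np.mean([x @ x.T for x in X[y == 1]], axis=0)
evals, evecs = eigh(C0, C0 + C1)                        # generalized eigenproblem -> CSP filters
W = evecs[:, [0, 1, -2, -1]]                            # filters from both ends of the spectrum

feats = np.log(np.array([np.var(W.T @ x, axis=1) for x in X]))
print("training accuracy (no held-out split):",
      LinearDiscriminantAnalysis().fit(feats, y).score(feats, y))
```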

ei

PDF PDF [BibTex]

A Machine Learning Approach for Determining the PET Attenuation Map from Magnetic Resonance Images

Hofmann, M., Steinke, F., Judenhofer, M., Claussen, C., Schölkopf, B., Pichler, B.

IEEE Medical Imaging Conference, November 2006 (talk)

Abstract
A promising new combination in multimodality imaging is MR-PET, where the high soft-tissue contrast of Magnetic Resonance Imaging (MRI) and the functional information of Positron Emission Tomography (PET) are combined. Although many technical problems have recently been solved, it is still an open problem to determine the attenuation map from the available MR scan, as the MR intensities are not directly related to the attenuation values. One standard approach is an atlas registration where the atlas MR image is aligned with the patient MR, thus also yielding an attenuation image for the patient. We also propose another approach, which to our knowledge has not been tried before: using Support Vector Machines we predict the attenuation value directly from the local image information. We train this well-established machine learning algorithm using small image patches. Although both approaches sometimes yielded acceptable results, they also showed their specific shortcomings: the registration often fails with large deformations, whereas the prediction approach is problematic when the local image structure is not characteristic enough. However, the failures often do not coincide, and integration of both information sources is promising. We therefore developed a combination method extending Support Vector Machines to use not only local image structure but also atlas-registered coordinates. We demonstrate the strength of this combination approach on a number of examples.
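The local-prediction idea can be sketched as below: a support vector regressor mapping a small MR patch to the attenuation value at its center voxel. The 2-D images are synthetic stand-ins, and the atlas-registered coordinates used in the combination method are omitted.

```python
# Minimal sketch: SVR predicting an attenuation value from a small MR image patch.
# Both images below are synthetic placeholders, not real MR/CT data.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
mr = rng.normal(size=(64, 64))                             # fake MR intensities
attenuation = 0.5 * mr + 0.1 * rng.normal(size=(64, 64))   # fake attenuation map

def patches(img, target, size=5, n=500):
    h = size // 2
    Xs, ys = [], []
    for _ in range(n):
        i, j = rng.integers(h, 64 - h, size=2)
        Xs.append(img[i - h:i + h + 1, j - h:j + h + 1].ravel())
        ys.append(target[i, j])
    return np.array(Xs), np.array(ys)

X, y = patches(mr, attenuation)
model = SVR(kernel="rbf", C=1.0).fit(X[:400], y[:400])
print("held-out R^2:", model.score(X[400:], y[400:]))
```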

ei

[BibTex]

Semi-Supervised Support Vector Machines and Application to Spam Filtering

Zien, A.

ECML Discovery Challenge Workshop, September 2006 (talk)

Abstract
After introducing the semi-supervised support vector machine (also known as the TSVM, for "transductive SVM"), a few popular training strategies are briefly presented. Then the assumptions underlying semi-supervised learning are reviewed. Finally, two modern TSVM optimization techniques are applied to the spam filtering data sets of the workshop; it is shown that they can achieve excellent results if the problem of the data being non-i.i.d. can be handled properly.
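The snippet below is only a stand-in showing how unlabeled data can enter SVM training at all, via scikit-learn's self-training wrapper; it is not one of the TSVM optimization techniques discussed in the talk, and the 90% label-hiding rate and confidence threshold are arbitrary.

```python
# Minimal sketch of semi-supervised SVM training in spirit only: a self-training
# wrapper around an SVM that repeatedly labels confident unlabeled points.
# NOT the TSVM optimization discussed in the talk.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
rng = np.random.default_rng(0)
y_partial = y.copy()
y_partial[rng.random(500) > 0.1] = -1          # hide ~90% of the labels (-1 = unlabeled)

model = SelfTrainingClassifier(SVC(probability=True), threshold=0.9)
model.fit(X, y_partial)
print("accuracy on all points:", model.score(X, y))
```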

ei

PDF Web [BibTex]


Learning Eye Movements

Kienzle, W., Wichmann, F., Schölkopf, B., Franz, M.

Sensory Coding And The Natural Environment, 2006, pages: 1, September 2006 (poster)

Abstract
The human visual system samples images through saccadic eye movements which rapidly change the point of fixation. Although the selection of eye movement targets depends on numerous top-down mechanisms, a number of recent studies have shown that low-level image features such as local contrast or edges play an important role. These studies typically used predefined image features which were afterwards experimentally verified. Here, we follow a complementary approach: instead of testing a set of candidate image features, we infer these hypotheses from the data, using methods from statistical learning. To this end, we train a non-linear classifier on fixated vs. randomly selected image patches without making any physiological assumptions. The resulting classifier can be essentially characterized by a nonlinear combination of two center-surround receptive fields. We find that the prediction performance of this simple model on our eye movement data is indistinguishable from that of the physiologically motivated model of Itti & Koch (2000), which is far more complex. In particular, we obtain comparable performance without using any multi-scale representations, long-range interactions or oriented image features.
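A hedged sketch of the data-driven recipe described above: train a nonlinear classifier on patches labeled "fixated" versus "random". Since the recorded eye-movement data are not available here, the image and the fixation rule (locations biased toward high local contrast) are synthetic placeholders.

```python
# Minimal sketch: nonlinear classifier on "fixated" vs. "random" image patches.
# The image and the contrast-biased fixation rule are synthetic stand-ins.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
image = rng.normal(size=(256, 256))
image[100:140, 100:140] *= 3.0                 # a high-contrast region to attract "fixations"

def local_patch(i, j, h=6):
    return image[i - h:i + h + 1, j - h:j + h + 1].ravel()

coords = rng.integers(8, 248, size=(4000, 2))
contrast = np.array([local_patch(i, j).std() for i, j in coords])
fixated = coords[np.argsort(contrast)[-300:]]  # fake fixations: highest-contrast locations
random_pts = rng.integers(8, 248, size=(300, 2))

X = np.array([local_patch(i, j) for i, j in np.vstack([fixated, random_pts])])
y = np.array([1] * 300 + [0] * 300)
print("cross-validated accuracy:", cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())
```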

ei

Web [BibTex]
