

2002


Support Vector Machines and Kernel Methods: The New Generation of Learning Machines

Cristianini, N., Schölkopf, B.

AI Magazine, 23(3):31-41, 2002 (article)

[BibTex]



Application of Monte Carlo Methods to Psychometric Function Fitting

Wichmann, F.

Proceedings of the 33rd European Conference on Mathematical Psychology, pages: 44, 2002 (poster)

Abstract
The psychometric function relates an observer's performance to an independent variable, usually some physical quantity of a stimulus in a psychophysical task. Here I describe methods for (1) fitting psychometric functions, (2) assessing goodness-of-fit, and (3) providing confidence intervals for the function's parameters and other estimates derived from them. First I describe a constrained maximum-likelihood method for parameter estimation. Using Monte-Carlo simulations I demonstrate that it is important to have a fitting method that takes stimulus-independent errors (or "lapses") into account. Second, a number of goodness-of-fit tests are introduced. Because psychophysical data sets are usually rather small I advocate the use of Monte Carlo resampling techniques that do not rely on asymptotic theory for goodness-of-fit assessment. Third, a parametric bootstrap is employed to estimate the variability of fitted parameters and derived quantities such as thresholds and slopes. I describe how the bootstrap bridging assumption, on which the validity of the procedure depends, can be tested without incurring too high a cost in computation time. Finally I describe how the methods can be extended to test hypotheses concerning the form and shape of several psychometric functions. Software describing the methods is available (http://www.bootstrap-software.com/psignifit/), as well as articles describing the methods in detail (Wichmann & Hill, Perception & Psychophysics, 2001a,b).

[BibTex]

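To make the fitting step concrete, here is a minimal Python sketch (not the psignifit toolbox referenced above): a 2AFC Weibull psychometric function with a lapse-rate parameter is fitted by constrained maximum likelihood. The stimulus levels, trial counts, and the lapse-rate bound of 0.06 are illustrative choices, not values from the paper.

    # Minimal sketch (not psignifit): constrained ML fit of a 2AFC Weibull psychometric
    # function with a lapse-rate parameter, so stimulus-independent errors are accounted for.
    import numpy as np
    from scipy.optimize import minimize

    def psi(x, alpha, beta, lam, gamma=0.5):
        """2AFC Weibull: guess rate gamma, lapse rate lam, threshold alpha, slope beta."""
        F = 1.0 - np.exp(-(x / alpha) ** beta)
        return gamma + (1.0 - gamma - lam) * F

    def neg_log_likelihood(params, x, n_correct, n_trials):
        alpha, beta, lam = params
        p = np.clip(psi(x, alpha, beta, lam), 1e-9, 1 - 1e-9)
        return -np.sum(n_correct * np.log(p) + (n_trials - n_correct) * np.log(1 - p))

    # toy data: stimulus levels, trials per level, correct responses per level
    x = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
    n = np.array([40, 40, 40, 40, 40])
    k = np.array([21, 26, 33, 38, 39])

    # constrain the lapse rate to a small interval (here [0, 0.06]) so that
    # stimulus-independent errors do not bias the threshold and slope estimates
    res = minimize(neg_log_likelihood, x0=[2.0, 1.5, 0.02], args=(x, k, n),
                   bounds=[(1e-3, None), (0.1, 10.0), (0.0, 0.06)], method="L-BFGS-B")
    alpha_hat, beta_hat, lam_hat = res.x
    print("threshold %.3f, slope %.3f, lapse %.3f" % (alpha_hat, beta_hat, lam_hat))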

Stability and Generalization

Bousquet, O., Elisseeff, A.

Journal of Machine Learning Research, 2, pages: 499-526, 2002 (article)

Abstract
We define notions of stability for learning algorithms and show how to use these notions to derive generalization error bounds based on the empirical error and the leave-one-out error. The methods we use can be applied in the regression framework as well as in the classification one when the classifier is obtained by thresholding a real-valued function. We study the stability properties of large classes of learning algorithms such as regularization based algorithms. In particular we focus on Hilbert space regularization and Kullback-Leibler regularization. We demonstrate how to apply the results to SVM for regression and classification.

PDF PostScript [BibTex]

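As background for the entry above, a brief sketch of the central definition and the flavor of the resulting bound, in one standard formulation (the notation below is the usual one, not a quotation from the paper):

    A learning algorithm A has uniform stability \beta with respect to a loss \ell if for every
    training set S of size m and every i,
        \sup_z \, | \ell(A_S, z) - \ell(A_{S^{\setminus i}}, z) | \le \beta .
    For a loss bounded by M, such an algorithm satisfies, with probability at least 1 - \delta,
        R(A_S) \le R_{emp}(A_S) + 2\beta + (4 m \beta + M) \sqrt{ \ln(1/\delta) / (2m) } .
    For Hilbert-space (e.g. SVM) regularization with parameter \lambda and a kernel bounded by
    \kappa^2, \beta scales like O(\kappa^2 / (\lambda m)), which is what makes the bound non-trivial.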

Subspace information criterion for non-quadratic regularizers – model selection for sparse regressors

Tsuda, K., Sugiyama, M., Müller, K.

IEEE Trans Neural Networks, 13(1):70-80, 2002 (article)

PDF [BibTex]


Modeling splicing sites with pairwise correlations

Arita, M., Tsuda, K., Asai, K.

Bioinformatics, 18(Suppl 2):27-34, 2002 (article)

PDF [BibTex]


Observations on the Nyström Method for Gaussian Process Prediction

Williams, C., Rasmussen, C., Schwaighofer, A., Tresp, V.

Max Planck Institute for Biological Cybernetics, Tübingen, Germany, 2002 (techreport)

Abstract
A number of methods for speeding up Gaussian Process (GP) prediction have been proposed, including the Nyström method of Williams and Seeger (2001). In this paper we focus on two issues: (1) the relationship of the Nyström method to the Subset of Regressors method (Poggio and Girosi 1990; Luo and Wahba, 1997) and (2) understanding in what circumstances the Nyström approximation would be expected to provide a good approximation to exact GP regression.

PostScript [BibTex]

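As a concrete illustration of the approximation under discussion, a minimal Python sketch of the Nyström method for kernel matrices; the RBF kernel, sample size, and number of landmark points are arbitrary illustrative choices.

    # Minimal sketch: approximate an n x n kernel matrix from m random "landmark" columns,
    # K ~ K_nm K_mm^+ K_mn (the Nystroem approximation discussed in the report).
    import numpy as np

    def rbf_kernel(A, B, lengthscale=1.0):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / lengthscale ** 2)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))          # inputs
    m = 50                                 # number of landmark points
    idx = rng.choice(len(X), size=m, replace=False)

    K_nm = rbf_kernel(X, X[idx])           # n x m cross-kernel
    K_mm = rbf_kernel(X[idx], X[idx])      # m x m landmark kernel
    K_approx = K_nm @ np.linalg.pinv(K_mm) @ K_nm.T

    K_exact = rbf_kernel(X, X)
    print("relative Frobenius error:",
          np.linalg.norm(K_exact - K_approx) / np.linalg.norm(K_exact))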

Perfusion Quantification using Gaussian Process Deconvolution

Andersen, IK., Szymkowiak, A., Rasmussen, CE., Hanson, LG., Marstrand, JR., Larsson, HBW., Hansen, LK.

Magnetic Resonance in Medicine, (48):351-361, 2002 (article)

Abstract
The quantification of perfusion using dynamic susceptibility contrast MR imaging requires deconvolution to obtain the residual impulse-response function (IRF). Here, a method using a Gaussian process for deconvolution, GPD, is proposed. The fact that the IRF is smooth is incorporated as a constraint in the method. The GPD method, which automatically estimates the noise level in each voxel, has the advantage that model parameters are optimized automatically. The GPD is compared to singular value decomposition (SVD) using a common threshold for the singular values and to SVD using a threshold optimized according to the noise level in each voxel. The comparison is carried out using artificial data as well as using data from healthy volunteers. It is shown that GPD is comparable to SVD with a variable optimized threshold when determining the maximum of the IRF, which is directly related to the perfusion. GPD provides a better estimate of the entire IRF. As the signal-to-noise ratio increases or the time resolution of the measurements increases, GPD is shown to be superior to SVD. This is also found for large distribution volumes.

PDF PostScript [BibTex]

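A minimal linear-Gaussian sketch of the idea (not the published implementation): the tissue curve is modelled as the arterial input function convolved with the IRF, a smooth Gaussian-process prior is placed on the IRF, and its posterior mean follows in closed form. The toy AIF, covariance, and fixed noise level below are illustrative; the actual method estimates the noise level per voxel.

    # Sketch of deconvolution with a Gaussian-process prior: tissue curve y = A r + noise,
    # where A convolves the arterial input function (AIF) with the impulse-response function r.
    import numpy as np

    n, dt = 60, 1.0
    t = np.arange(n) * dt

    aif = (t / 4.0) * np.exp(-t / 4.0)                  # toy arterial input function
    A = np.array([[aif[i - j] * dt if i >= j else 0.0   # discrete convolution matrix
                   for j in range(n)] for i in range(n)])

    # squared-exponential prior covariance encodes smoothness of the IRF
    ell, amp = 5.0, 1.0
    K = amp * np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2 / ell ** 2)

    r_true = np.exp(-t / 10.0)                          # ground-truth IRF for the toy example
    rng = np.random.default_rng(1)
    sigma = 0.02                                        # fixed here; estimated per voxel in the paper
    y = A @ r_true + sigma * rng.normal(size=n)         # noisy tissue curve

    # GP posterior mean of r given y (standard linear-Gaussian algebra)
    S = A @ K @ A.T + sigma ** 2 * np.eye(n)
    r_hat = K @ A.T @ np.linalg.solve(S, y)
    print("max of estimated IRF (related to perfusion):", r_hat.max())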

Tracking a Small Set of Experts by Mixing Past Posteriors

Bousquet, O., Warmuth, M.

Journal of Machine Learning Research, 3, pages: 363-396, (Editors: Long, P.), 2002 (article)

Abstract
In this paper, we examine on-line learning problems in which the target concept is allowed to change over time. In each trial a master algorithm receives predictions from a large set of n experts. Its goal is to predict almost as well as the best sequence of such experts chosen off-line by partitioning the training sequence into k+1 sections and then choosing the best expert for each section. We build on methods developed by Herbster and Warmuth and consider an open problem posed by Freund where the experts in the best partition are from a small pool of size m. Since k >> m, the best expert shifts back and forth between the experts of the small pool. We propose algorithms that solve this open problem by mixing the past posteriors maintained by the master algorithm. We relate the number of bits needed for encoding the best partition to the loss bounds of the algorithms. Instead of paying log n for choosing the best expert in each section we first pay log (n choose m) bits in the bounds for identifying the pool of m experts and then log m bits per new section. In the bounds we also pay twice for encoding the boundaries of the sections.

PDF PostScript [BibTex]

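A hedged sketch of the mixing idea for readers who want to see an update rule: an exponential-weights master algorithm whose weight vector is, after each loss update, mixed with the uniform average of all past posteriors. This is one simple instance of the general mixing schemes studied in the paper; the losses, learning rate, and mixing coefficient below are made up for illustration.

    # Sketch: exponential weights with mixing of past posteriors, so the master can switch
    # quickly back to experts from a small pool that were good earlier in the sequence.
    import numpy as np

    rng = np.random.default_rng(0)
    n_experts, T = 10, 300
    eta, alpha = 2.0, 0.01

    # synthetic losses: the best expert shifts back and forth within the small pool {0, 1}
    losses = rng.uniform(0.4, 0.6, size=(T, n_experts))
    for t in range(T):
        good = 0 if (t // 50) % 2 == 0 else 1
        losses[t, good] = 0.05

    w = np.full(n_experts, 1.0 / n_experts)   # current weights of the master
    past = []                                  # past posteriors
    master_loss = 0.0
    for t in range(T):
        master_loss += w @ losses[t]
        post = w * np.exp(-eta * losses[t])    # loss (Bayes-style) update
        post /= post.sum()
        past.append(post)
        mix = np.mean(past, axis=0)            # uniform mixture of all past posteriors
        w = (1 - alpha) * post + alpha * mix   # mixing step

    print("master loss: %.1f, best single expert: %.1f"
          % (master_loss, losses.sum(axis=0).min()))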

A femoral arteriovenous shunt facilitates arterial whole blood sampling in animals

Weber, B., Burger, C., Biro, P., Buck, A.

Eur J Nucl Med Mol Imaging, 29, pages: 319-323, 2002 (article)

[BibTex]


Some Local Measures of Complexity of Convex Hulls and Generalization Bounds

Bousquet, O., Koltchinskii, V., Panchenko, D.

In Proceedings of the 15th annual conference on Computational Learning Theory, 2002 (inproceedings)

Abstract
We investigate measures of complexity of function classes based on continuity moduli of Gaussian and Rademacher processes. For Gaussian processes, we obtain bounds on the continuity modulus on the convex hull of a function class in terms of the same quantity for the class itself. We also obtain new bounds on generalization error in terms of localized Rademacher complexities. This allows us to prove new results about generalization performance for convex hulls in terms of characteristics of the base class. As a byproduct, we obtain a simple proof of some of the known bounds on the entropy of convex hulls.

PDF PostScript [BibTex]

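For readers unfamiliar with the quantities involved, a short reminder (standard definitions, not taken from the paper itself):

    The empirical Rademacher complexity of a class F on a sample x_1, ..., x_n is
        \hat{R}_n(F) = E_\sigma \Big[ \sup_{f \in F} \frac{1}{n} \sum_{i=1}^{n} \sigma_i f(x_i) \Big],
    with \sigma_i independent signs, P(\sigma_i = \pm 1) = 1/2. Since a linear functional attains its
    supremum over a convex hull on the underlying class, \hat{R}_n(\mathrm{conv}\, F) = \hat{R}_n(F);
    the interest of the paper lies in the finer, localized quantities (continuity moduli of the
    Gaussian and Rademacher processes on small balls), which do change when passing to the hull.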

Contrast discrimination with pulse-trains in pink noise

Henning, G., Bird, C., Wichmann, F.

Journal of the Optical Society of America A, 19(7), pages: 1259-1266, 2002 (article)

Abstract
Detection performance was measured with sinusoidal and pulse-train gratings. Although the 2.09-c/deg pulse-train, or line gratings, contained at least 8 harmonics all at equal contrast, they were no more detectable than their most detectable component. The addition of broadband pink noise designed to equalize the detectability of the components of the pulse train made the pulse train about a factor of four more detectable than any of its components. However, in contrast-discrimination experiments, with a pedestal or masking grating of the same form and phase as the signal and 15% contrast, the noise did not affect the discrimination performance of the pulse train relative to that obtained with its sinusoidal components. We discuss the implications of these observations for models of early vision in particular the implications for possible sources of internal noise.

PDF [BibTex]


A kernel approach for learning from almost orthogonal patterns

Schölkopf, B., Weston, J., Eskin, E., Leslie, C., Noble, W.

In Principles of Data Mining and Knowledge Discovery, Lecture Notes in Computer Science, 2430/2431, pages: 511-528, (Editors: T Elomaa and H Mannila and H Toivonen), Springer, Berlin, Germany, 13th European Conference on Machine Learning (ECML) and 6th European Conference on Principles and Practice of Knowledge Discovery in Databases (PKDD'2002), 2002 (inproceedings)

PostScript DOI [BibTex]


Optimal linear estimation of self-motion - a real-world test of a model of fly tangential neurons

Franz, MO.

SAB 02 Workshop, Robotics as theoretical biology, 7th meeting of the International Society for Simulation of Adaptive Behaviour (SAB), (Editors: Prescott, T.; Webb, B.), 2002 (poster)

Abstract
The tangential neurons in the fly brain are sensitive to the typical optic flow patterns generated during self-motion (see example in Fig.1). We examine whether a simplified linear model of these neurons can be used to estimate self-motion from the optic flow. We present a theory for the construction of an optimal linear estimator incorporating prior knowledge both about the distance distribution of the environment, and about the noise and self-motion statistics of the sensor. The optimal estimator is tested on a gantry carrying an omnidirectional vision sensor that can be moved along three translational and one rotational degree of freedom. The experiments indicate that the proposed approach yields accurate results for rotation estimates, independently of the current translation and scene layout. Translation estimates, however, turned out to be sensitive to simultaneous rotation and to the particular distance distribution of the scene. The gantry experiments confirm that the receptive field organization of the tangential neurons allows them, as an ensemble, to extract self-motion from the optic flow.

PDF [BibTex]

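To illustrate the kind of estimator being discussed, a hedged Python sketch: a linear mapping from optic flow to self-motion is obtained by regularized least squares on simulated flow/motion pairs, with the prior over scene distances and the sensor noise represented only by the simple simulation below. All numbers are illustrative, not taken from the paper or the gantry setup.

    # Sketch: learn an optimal *linear* estimator of (translation, rotation) from optic flow.
    import numpy as np

    rng = np.random.default_rng(0)
    n_dirs, n_train, noise = 200, 5000, 0.1

    # fixed sensor geometry: translational flow templates scale with inverse distance,
    # rotational templates do not (the asymmetry noted in the abstract)
    T_templ = rng.normal(size=(n_dirs, 3))
    R_templ = rng.normal(size=(n_dirs, 3))

    def flow(motion):
        d = rng.uniform(0.5, 3.0, size=n_dirs)            # scene distances drawn from a prior
        J = np.concatenate([T_templ / d[:, None], R_templ], axis=1)
        return J @ motion + noise * rng.normal(size=n_dirs)

    X = rng.normal(size=(n_train, 6))                     # (translation, rotation) ground truth
    F = np.stack([flow(x) for x in X])                    # corresponding flow measurements

    lam = 1e-3                                            # ridge term for numerical stability
    W = np.linalg.solve(F.T @ F + lam * np.eye(n_dirs), F.T @ X).T   # linear estimator

    x_test = np.array([0.2, 0.0, 0.1, 0.05, -0.02, 0.3])
    print("estimate:", np.round(W @ flow(x_test), 3))
    print("truth:   ", x_test)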

Choosing Multiple Parameters for Support Vector Machines

Chapelle, O., Vapnik, V., Bousquet, O., Mukherjee, S.

Machine Learning, 46(1):131-159, 2002 (article)

Abstract
The problem of automatically tuning multiple parameters for pattern recognition Support Vector Machines (SVMs) is considered. This is done by minimizing some estimates of the generalization error of SVMs using a gradient descent algorithm over the set of parameters. Usual methods for choosing parameters, based on exhaustive search, become intractable as soon as the number of parameters exceeds two. Some experimental results assess the feasibility of our approach for a large number of parameters (more than 100) and demonstrate an improvement of generalization performance.

PDF PostScript [BibTex]

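A hedged sketch of the general idea rather than the paper's method: the paper differentiates analytic estimates of the generalization error with respect to the hyperparameters, whereas the toy version below minimizes a held-out hinge-loss estimate with finite-difference gradients, using scikit-learn's SVC. The data set, step sizes, and initial values are arbitrary.

    # Sketch: tune log(C) and log(gamma) of an RBF-SVM by gradient descent on an error estimate.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    def validation_error(log_params, Xtr, ytr, Xva, yva):
        C, gamma = np.exp(log_params)
        clf = SVC(C=C, gamma=gamma).fit(Xtr, ytr)
        margins = yva * clf.decision_function(Xva)      # surrogate: mean hinge loss
        return np.mean(np.maximum(0.0, 1.0 - margins))

    X, y = make_classification(n_samples=400, n_features=20, random_state=0)
    y = 2 * y - 1                                       # labels in {-1, +1}
    Xtr, Xva, ytr, yva = train_test_split(X, y, test_size=0.5, random_state=0)

    theta = np.log([1.0, 0.05])                         # initial (C, gamma)
    eps, lr = 1e-2, 0.5
    for _ in range(30):                                 # gradient descent on the estimate
        grad = np.zeros_like(theta)
        for i in range(len(theta)):                     # central finite differences
            d = np.zeros_like(theta)
            d[i] = eps
            grad[i] = (validation_error(theta + d, Xtr, ytr, Xva, yva)
                       - validation_error(theta - d, Xtr, ytr, Xva, yva)) / (2 * eps)
        theta -= lr * grad
    print("tuned C, gamma:", np.exp(theta))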

Infinite Mixtures of Gaussian Process Experts

Rasmussen, CE., Ghahramani, Z.

In (Editors: Dietterich, Thomas G.; Becker, Suzanna; Ghahramani, Zoubin), 2002 (inproceedings)

Abstract
We present an extension to the Mixture of Experts (ME) model, where the individual experts are Gaussian Process (GP) regression models. Using an input-dependent adaptation of the Dirichlet Process, we implement a gating network for an infinite number of experts. Inference in this model may be done efficiently using a Markov chain relying on Gibbs sampling. The model allows the effective covariance function to vary with the inputs, and may handle large datasets -- thus potentially overcoming two of the biggest hurdles with GP models. Simulations show the viability of this approach.

PDF PostScript [BibTex]


Marginalized kernels for RNA sequence data analysis

Kin, T., Tsuda, K., Asai, K.

In Genome Informatics 2002, pages: 112-122, (Editors: Lathrop, R. H.; Nakai, K.; Miyano, S.; Takagi, T.; Kanehisa, M.), Genome Informatics, 2002, (Best Paper Award) (inproceedings)

Web [BibTex]


Luminance Artifacts on CRT Displays

Wichmann, F.

In IEEE Visualization, pages: 571-574, (Editors: Moorhead, R.; Gross, M.; Joy, K. I.), IEEE Visualization, 2002 (inproceedings)

Abstract
Most visualization panels today are still built around cathode-ray tubes (CRTs), certainly on personal desktops at work and at home. Whilst capable of producing pleasing images for common applications ranging from email writing to TV and DVD presentation, it is as well to note that there are a number of nonlinear transformations between input (voltage) and output (luminance) which distort the digital and/or analogue images sent to a CRT. Some of them are input-independent and hence easy to fix, e.g. gamma correction, but others, such as pixel interactions, depend on the content of the input stimulus and are thus harder to compensate for. CRT-induced image distortions cause problems not only in basic vision research but also for applications where image fidelity is critical, most notably in medicine (digitization of X-ray images for diagnostic purposes) and in forms of online commerce, such as the online sale of images, where the image must be reproduced on some output device which will not have the same transfer function as the customer's CRT. I will present measurements from a number of CRTs and illustrate how some of their shortcomings may be problematic for the aforementioned applications.

[BibTex]

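The abstract singles out gamma correction as the input-independent distortion that is easy to fix. As a small illustration (assuming a plain power-law response with exponent 2.2, which is an assumption, not a measurement from the paper), an inverse 8-bit lookup table can linearize the display; content-dependent effects such as pixel interactions are exactly what such a table cannot repair.

    # Minimal sketch of gamma correction: build an inverse lookup table for a power-law display.
    import numpy as np

    gamma = 2.2                                    # assumed exponent of the measured CRT response
    levels = np.arange(256)
    luminance = (levels / 255.0) ** gamma          # normalized response under the power-law model

    # inverse lookup table: desired linear luminance -> digital value to send to the CRT
    desired = levels / 255.0
    lut = np.round(255.0 * desired ** (1.0 / gamma)).astype(np.uint8)

    # applying the LUT and then the display nonlinearity yields (approximately) linear output
    linearized = (lut / 255.0) ** gamma
    print("max deviation from linear:", np.abs(linearized - desired).max())
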
1998


Book Review: An Introduction to Fuzzy Logic for Practical Applications

Peters, J.

Künstliche Intelligenz (KI), 98(4):60-60, November 1998 (article)

[BibTex]


Navigation mit Schnappschüssen

Franz, M., Schölkopf, B., Mallot, H., Bülthoff, H., Zell, A.

In Mustererkennung 1998, pages: 421-428, (Editors: P Levi and R-J Ahlers and F May and M Schanz), Springer, Berlin, Germany, 20th DAGM-Symposium, October 1998 (inproceedings)

Abstract
A biologically inspired algorithm is presented for relocating a place at which a 360-degree view of the surroundings was previously recorded. The direction to the goal is computed from the displacement of the image positions of the surrounding landmarks relative to the snapshot. The convergence properties of the algorithm are analyzed mathematically and tested on mobile robots.

PDF Web [BibTex]


Where did I take that snapshot? Scene-based homing by image matching

Franz, M., Schölkopf, B., Bülthoff, H.

Biological Cybernetics, 79(3):191-202, October 1998 (article)

Abstract
In homing tasks, the goal is often not marked by visible objects but must be inferred from the spatial relation to the visual cues in the surrounding scene. The exact computation of the goal direction would require knowledge about the distances to visible landmarks, information that is not directly available to passive vision systems. However, if prior assumptions about typical distance distributions are used, a snapshot taken at the goal suffices to compute the goal direction from the current view. We show that most existing approaches to scene-based homing implicitly assume an isotropic landmark distribution. As an alternative, we propose a homing scheme that uses parameterized displacement fields. These are obtained from an approximation that incorporates prior knowledge about perspective distortions of the visual environment. A mathematical analysis proves that both approximations do not prevent the schemes from approaching the goal with arbitrary accuracy, but lead to different errors in the computed goal direction. Mobile robot experiments are used to test the theoretical predictions and to demonstrate the practical feasibility of the new approach.

PDF PDF DOI [BibTex]

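To make the snapshot idea concrete, a hedged Python sketch of homing under an equal-distance (isotropic) assumption: the home vector is taken as the difference between the average landmark unit vector seen from the current position and the one stored at the goal. This is a simplified relative of the parameterized displacement fields proposed in the paper, with made-up landmark positions and step sizes.

    # Sketch: scene-based homing from a stored "snapshot" under an isotropic distance assumption.
    import numpy as np

    rng = np.random.default_rng(3)
    landmarks = rng.uniform(-10, 10, size=(12, 2))     # surrounding landmark positions
    home = np.zeros(2)

    def average_landmark_vector(pos):
        v = landmarks - pos                            # vectors from the agent to the landmarks
        return (v / np.linalg.norm(v, axis=1, keepdims=True)).mean(axis=0)

    snapshot_alv = average_landmark_vector(home)       # stored at the goal ("snapshot")

    pos = np.array([4.0, -3.0])                        # start somewhere else
    for _ in range(60):
        h = average_landmark_vector(pos) - snapshot_alv   # approximate home direction
        step = 4.0 * h                                    # proportional step toward home
        if np.linalg.norm(step) > 1.5:                    # cap the step length
            step *= 1.5 / np.linalg.norm(step)
        pos = pos + step
    print("final distance to home: %.3f" % np.linalg.norm(pos - home))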

On a Kernel-Based Method for Pattern Recognition, Regression, Approximation, and Operator Inversion

Smola, A., Schölkopf, B.

Algorithmica, 22(1-2):211-231, September 1998 (article)

Abstract
We present a kernel-based framework for pattern recognition, regression estimation, function approximation, and multiple operator inversion. Adopting a regularization-theoretic framework, the above are formulated as constrained optimization problems. Previous approaches such as ridge regression, support vector methods, and regularization networks are included as special cases. We show connections between the cost function and some properties up to now believed to apply to support vector machines only. For appropriately chosen cost functions, the optimal solution of all the problems described above can be found by solving a simple quadratic programming problem.

PDF DOI [BibTex]

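As a concrete instance of the quadratic programs referred to in the abstract, the familiar soft-margin support vector classification dual (one special case of the general cost-function framework, stated in standard notation):

    \max_{\alpha \in \mathbb{R}^m} \; \sum_{i=1}^{m} \alpha_i
        - \frac{1}{2} \sum_{i,j=1}^{m} \alpha_i \alpha_j y_i y_j k(x_i, x_j)
    subject to   0 \le \alpha_i \le C,  \qquad  \sum_{i=1}^{m} \alpha_i y_i = 0,
    with decision function  f(x) = \mathrm{sgn}\big( \sum_i \alpha_i y_i k(x_i, x) + b \big).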

The moon tilt illusion

Schölkopf, B.

Perception, 27(10):1229-1232, August 1998 (article)

Abstract
Besides the familiar moon illusion [e.g., Hershenson, 1989, The Moon Illusion (Hillsdale, NJ: Lawrence Erlbaum Associates)], wherein the moon appears bigger when it is close to the horizon, there is a less known illusion which causes the moon's illuminated side to appear turned away from the direction of the sun. An experiment documenting the effect is described, and a possible explanation is put forward.

Web DOI [BibTex]


Characterization of the oligomerization defects of two p53 mutants found in families with Li-Fraumeni and Li-Fraumeni-like syndrome.

Davison, T., Yin, P., Nie, E., Kay, C., CH, ..

Oncogene, 17(5):651-656, August 1998 (article)

Abstract
Recently two germline mutations in the oligomerization domain of p53 have been identified in patients with Li-Fraumeni and Li-Fraumeni-like Syndromes. We have used biophysical and biochemical methods to characterize these two mutants in order to better understand their functional defects and the role of the p53 oligomerization domain (residues 325-355) in oncogenesis. We find that residues 310-360 of the L344P mutant are monomeric, apparently unfolded and cannot interact with wild-type (WT) p53. The full length L344P protein is unable to bind sequence specifically to DNA and is therefore an inactive, but not a dominant negative mutant. R337C, on the other hand, can form dimers and tetramers, can hetero-oligomerize with WTp53 and can bind to a p53 consensus element. However, the thermal stability of R337C is much lower than that of WTp53 and at physiological temperatures more than half of this mutant is less than tetrameric. Thus, the R337C mutant retains some functional activity yet leads to a predisposition to cancer, suggesting that even partial inactivation of p53 oligomerization is sufficient for accelerated tumour progression.

Web [BibTex]


Nonlinear Component Analysis as a Kernel Eigenvalue Problem

Schölkopf, B., Smola, A., Müller, K.

Neural Computation, 10(5):1299-1319, July 1998 (article)

Abstract
A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map—for instance, the space of all possible five-pixel products in 16 × 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.

Web DOI [BibTex]

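A minimal Python sketch of the procedure described in the abstract: centre the kernel matrix in feature space, solve the eigenvalue problem, and project onto the leading nonlinear components. Kernel, sample, and number of components are illustrative choices.

    # Minimal kernel PCA sketch: eigendecompose the centred kernel matrix and project.
    import numpy as np

    def rbf_kernel(A, B, lengthscale=1.0):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / lengthscale ** 2)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))

    K = rbf_kernel(X, X)
    n = len(X)
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one       # centre in feature space

    evals, evecs = np.linalg.eigh(Kc)                # ascending eigenvalues
    evals, evecs = evals[::-1], evecs[:, ::-1]       # sort descending
    k = 3                                            # number of nonlinear components
    alphas = evecs[:, :k] / np.sqrt(np.maximum(evals[:k], 1e-12))  # normalise coefficients

    projections = Kc @ alphas                        # nonlinear principal components of X
    print(projections.shape)                         # (200, 3)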

SVMs — a practical consequence of learning theory

Schölkopf, B.

IEEE Intelligent Systems and their Applications, 13(4):18-21, July 1998 (article)

Abstract
My first exposure to Support Vector Machines came this spring when I heard Sue Dumais present impressive results on text categorization using this analysis technique. This issue's collection of essays should help familiarize our readers with this interesting new racehorse in the Machine Learning stable. Bernhard Scholkopf, in an introductory overview, points out that a particular advantage of SVMs over other learning algorithms is that it can be analyzed theoretically using concepts from computational learning theory, and at the same time can achieve good performance when applied to real problems. Examples of these real-world applications are provided by Sue Dumais, who describes the aforementioned text-categorization problem, yielding the best results to date on the Reuters collection, and Edgar Osuna, who presents strong results on application to face detection. Our fourth author, John Platt, gives us a practical guide and a new technique for implementing the algorithm efficiently.

PDF Web DOI [BibTex]


Support vector machines

Hearst, M., Dumais, S., Osuna, E., Platt, J., Schölkopf, B.

IEEE Intelligent Systems and their Applications, 13(4):18-28, July 1998 (article)

Abstract
My first exposure to Support Vector Machines came this spring when I heard Sue Dumais present impressive results on text categorization using this analysis technique. This issue's collection of essays should help familiarize our readers with this interesting new racehorse in the Machine Learning stable. Bernhard Scholkopf, in an introductory overview, points out that a particular advantage of SVMs over other learning algorithms is that it can be analyzed theoretically using concepts from computational learning theory, and at the same time can achieve good performance when applied to real problems. Examples of these real-world applications are provided by Sue Dumais, who describes the aforementioned text-categorization problem, yielding the best results to date on the Reuters collection, and Edgar Osuna, who presents strong results on application to face detection. Our fourth author, John Platt, gives us a practical guide and a new technique for implementing the algorithm efficiently.

PDF Web DOI [BibTex]


The connection between regularization operators and support vector kernels.

Smola, A., Schölkopf, B., Müller, K.

Neural Networks, 11(4):637-649, June 1998 (article)

Abstract
In this paper a correspondence is derived between regularization operators used in regularization networks and support vector kernels. We prove that the Green's functions associated with regularization operators are suitable support vector kernels with equivalent regularization properties. Moreover, the paper provides an analysis of currently used support vector kernels in view of regularization theory and corresponding operators associated with the classes of both polynomial kernels and translation invariant kernels. The latter are also analyzed on periodical domains. As a by-product we show that a large number of radial basis functions, namely conditionally positive definite functions, may be used as support vector kernels.

PDF DOI [BibTex]

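The correspondence stated in the abstract can be summarized compactly (standard notation, paraphrasing rather than quoting the paper):

    Minimizing a regularized risk
        R_{reg}[f] = R_{emp}[f] + \frac{\lambda}{2} \, \| P f \|^2
    with a regularization operator P corresponds to support vector learning with a kernel k that
    is a Green's function of P^{*}P, i.e. a kernel satisfying
        \langle P\, k(x, \cdot), \; P\, k(x', \cdot) \rangle = k(x, x').
    For translation-invariant kernels this becomes a condition on the Fourier transform of k;
    polynomial kernels are handled with their own class of operators.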

Prior knowledge in support vector kernels

Schölkopf, B., Simard, P., Smola, A., Vapnik, V.

In Advances in Neural Information Processing Systems 10, pages: 640-646 , (Editors: M Jordan and M Kearns and S Solla ), MIT Press, Cambridge, MA, USA, Eleventh Annual Conference on Neural Information Processing (NIPS), June 1998 (inproceedings)

PDF Web [BibTex]


From regularization operators to support vector kernels

Smola, A., Schölkopf, B.

In Advances in Neural Information Processing Systems 10, pages: 343-349, (Editors: M Jordan and M Kearns and S Solla), MIT Press, Cambridge, MA, USA, 11th Annual Conference on Neural Information Processing (NIPS), June 1998 (inproceedings)

PDF Web [BibTex]


Eine beweistheoretische Anwendung der

Harmeling, S.

Biologische Kybernetik, Westfälische Wilhelms-Universität Münster, Münster, May 1998 (diplomathesis)

PDF [BibTex]


Qualitative Modeling for Data Miner’s Requirements

Shin, H., Jhee, W.

In Proc. of the Korean Management Information Systems, pages: 65-73, Conference on the Korean Management Information Systems, April 1998 (inproceedings)

[BibTex]


Übersicht durch Übersehen

Schölkopf, B.

Frankfurter Allgemeine Zeitung, Wissenschaftsbeilage, March 1998 (misc)

[BibTex]


Learning view graphs for robot navigation

Franz, M., Schölkopf, B., Mallot, H., Bülthoff, H.

Autonomous Robots, 5(1):111-125, March 1998 (article)

Abstract
We present a purely vision-based scheme for learning a topological representation of an open environment. The system represents selected places by local views of the surrounding scene, and finds traversable paths between them. The set of recorded views and their connections are combined into a graph model of the environment. To navigate between views connected in the graph, we employ a homing strategy inspired by findings of insect ethology. In robot experiments, we demonstrate that complex visual exploration and navigation tasks can thus be performed without using metric information.

PDF PDF DOI [BibTex]


Masking by plaid patterns: effects of presentation time and mask contrast

Wichmann, F., Henning, G.

pages: 115, 1. Tübinger Wahrnehmungskonferenz (TWK 98), February 1998 (poster)

Abstract
Most current models of early spatial vision consist of sets of orientation- and spatial-frequency-selective filters, with or without limited non-linear interactions amongst different subsets of the filters. The performance of human observers and of such models of human spatial vision was compared in experiments using maskers with two spatial frequencies (plaid masks). The detectability of horizontally orientated sinusoidal signals at 3.02 c/deg was measured in standard 2AFC tasks in the presence of plaid patterns with two components at the same spatial frequency as the signal but at different orientations (+/- 15, 30, 45, and 75 deg from the signal) and with varying contrasts (1.0, 6.25 and 25.0% contrast). In addition, the temporal envelope of the stimulus presentation was either a rectangular pulse of 19.7 msec duration, or a temporal Hanning window of 1497 msec. Threshold elevation varied with plaid component orientation and peaked +/- 30 deg from the signal, where nearly a log unit of threshold elevation was observed for the 25.0% contrast plaid. For plaids with 1.0% contrast we observed significant facilitation even with plaids whose components were 75 deg from that of the signal. Elevation factors were somewhat lower for the short stimulus presentation time but were still significant (up to a factor of 5 or 6). Despite the simple nature of the stimuli employed in this study (a sinusoidal signal and plaid masks comprising only two sinusoids), none of the current models of early spatial vision can fully account for all the data gathered.

Web [BibTex]


Qualitative Modeling for Data Miner's Requirement

Shin, H.

Biologische Kybernetik, Hong-Ik University, Seoul, Korea, February 1998, Written in Korean (diplomathesis)

ZIP [BibTex]


No role for motion blur in either motion detection or motion based image segmentation

Wichmann, F., Henning, G.

Journal of the Optical Society of America A, 15 (2), pages: 297-306, 1998 (article)

Abstract
We determined the influence of high-spatial-frequency losses induced by motion on motion detection and on motion-based image segmentation. Motion detection and motion-based segmentation tasks were performed with either spectrally low-pass or spectrally broadband stimuli. Performance on these tasks was compared with a condition having no motion but in which form differences mimicked the perceptual loss of high spatial frequencies produced by motion. This allowed the relative salience of motion and motion-induced blur to be determined. Neither image segmentation nor motion detection was sensitive to the high-spatial-frequency content of the stimuli. Thus the change in perceptual form produced in moving stimuli is not normally used as a cue either for motion detection or for motion-based image segmentation in ordinary situations.

PDF [BibTex]


Fast approximation of support vector kernel expansions, and an interpretation of clustering as approximation in feature spaces.

Schölkopf, B., Knirsch, P., Smola, A., Burges, C.

In Mustererkennung 1998, pages: 125-132, Informatik aktuell, (Editors: P Levi and M Schanz and R-J Ahlers and F May), Springer, Berlin, Germany, 20th DAGM-Symposium, 1998 (inproceedings)

Abstract
Kernel-based learning methods provide their solutions as expansions in terms of a kernel. We consider the problem of reducing the computational complexity of evaluating these expansions by approximating them using fewer terms. As a by-product, we point out a connection between clustering and approximation in reproducing kernel Hilbert spaces generated by a particular class of kernels.

Web [BibTex]

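A hedged sketch of the reduction discussed in the abstract: a smaller set of expansion points is chosen by k-means (echoing the clustering connection), and new coefficients are obtained by least squares in the reproducing kernel Hilbert space. Kernel, sizes, and the random "original" expansion below are illustrative, not the paper's experiments.

    # Sketch: approximate f(x) = sum_i alpha_i k(x_i, x) with a shorter expansion over points Z,
    # minimising || sum_i alpha_i k(x_i, .) - sum_j beta_j k(z_j, .) ||^2 in the RKHS.
    import numpy as np
    from sklearn.cluster import KMeans

    def rbf_kernel(A, B, lengthscale=1.0):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / lengthscale ** 2)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 4))            # expansion points of the original solution
    alpha = rng.normal(size=300)             # original expansion coefficients

    m = 30                                   # size of the reduced expansion
    Z = KMeans(n_clusters=m, n_init=10, random_state=0).fit(X).cluster_centers_

    Kzz = rbf_kernel(Z, Z)
    Kzx = rbf_kernel(Z, X)
    beta = np.linalg.solve(Kzz + 1e-8 * np.eye(m), Kzx @ alpha)   # least squares in the RKHS

    T = rng.normal(size=(50, 4))             # compare the two expansions on new points
    f_full = rbf_kernel(T, X) @ alpha
    f_red = rbf_kernel(T, Z) @ beta
    print("mean absolute deviation:", np.abs(f_full - f_red).mean())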

Generalization bounds and learning rates for regularized principal manifolds

Smola, A., Williamson, R., Schölkopf, B.

NeuroCOLT, 1998, NeuroColt2-TR 1998-027 (techreport)

[BibTex]


PET with 18fluorodeoxyglucose and hexamethylpropylene amine oxime SPECT in late whiplash syndrome

Bicik, I., Radanov, B., Schaefer, N., Dvorak, J., Blum, B., Weber, B., Burger, C., von Schulthess, G., Buck, A.

Neurology, 51, pages: 345-350, 1998 (article)

[BibTex]


Changes of cerebral blood flow during short-term exposure to normobaric hypoxia

Buck, A., Schirlo, C., Jasinsky, V., Weber, B., Burger, C., von Schulthess, G., Koller, E., Pavlicek, V.

J Cereb Blood Flow Metab, 18, pages: 906-910, 1998 (article)

[BibTex]


Kernel PCA pattern reconstruction via approximate pre-images.

Schölkopf, B., Mika, S., Smola, A., Rätsch, G., Müller, K.

In 8th International Conference on Artificial Neural Networks, pages: 147-152, Perspectives in Neural Computing, (Editors: L Niklasson and M Boden and T Ziemke), Springer, Berlin, Germany, 8th International Conference on Artificial Neural Networks, 1998 (inproceedings)

[BibTex]


Generalization Bounds for Convex Combinations of Kernel Functions

Smola, A., Williamson, R., Schölkopf, B.

Royal Holloway College, 1998 (techreport)

[BibTex]


Generalization Performance of Regularization Networks and Support Vector Machines via Entropy Numbers of Compact Operators

Williamson, R., Smola, A., Schölkopf, B.

(19), NeuroCOLT, 1998, Accepted for publication in IEEE Transactions on Information Theory (techreport)

[BibTex]


A bootstrap method for testing hypotheses concerning psychometric functions

Hill, N., Wichmann, F.

1998 (poster)

Abstract
Whenever psychometric functions are used to evaluate human performance on some task, it is valuable to examine not only the threshold and slope values estimated from the original data, but also the expected variability in those measures. This allows psychometric functions obtained in two experimental conditions to be compared statistically. We present a method for estimating the variability of thresholds and slopes of psychometric functions. This involves a maximum-likelihood fit to the data using a three-parameter mathematical function, followed by Monte Carlo simulation using the first fit as a generating function for the simulations. The variability of the function's parameters can then be estimated (as shown by Maloney, 1990), as can the variability of the threshold value (Foster & Bischof, 1997). We will show how a simple development of this procedure can be used to test the significance of differences between (a) the thresholds, and (b) the slopes of two psychometric functions. Further, our method can be used to assess the assumptions underlying the original fit, by examining how goodness-of-fit differs in simulation from its original value. In this way data sets can be identified as being either too noisy to be generated by a binomial observer, or significantly "too good to be true." All software is written in MATLAB and is therefore compatible across platforms, with the option of accelerating performance using MATLAB's plug-in binaries, or "MEX" files.

[BibTex]


Quantization Functionals and Regularized Principal Manifolds

Smola, A., Mika, S., Schölkopf, B.

NeuroCOLT, 1998, NC2-TR-1998-028 (techreport)

[BibTex]


Support Vector Machines for Image Classification

Chapelle, O.

Biologische Kybernetik, Ecole Normale Superieure de Lyon, 1998 (diplomathesis)

GZIP [BibTex]


Support Vector methods in learning and feature extraction

Schölkopf, B., Smola, A., Müller, K., Burges, C., Vapnik, V.

Ninth Australian Conference on Neural Networks, pages: 72-78, (Editors: T. Downs, M. Frean and M. Gallagher), 1998 (talk)

[BibTex]


Convex Cost Functions for Support Vector Regression

Smola, A., Schölkopf, B., Müller, K.

In 8th International Conference on Artificial Neural Networks, pages: 99-104, Perspectives in Neural Computing, (Editors: L Niklasson and M Boden and T Ziemke), Springer, Berlin, Germany, 8th International Conference on Artificial Neural Networks, 1998 (inproceedings)

[BibTex]


Support-Vektor-Lernen

Schölkopf, B.

In Ausgezeichnete Informatikdissertationen 1997, pages: 135-150, (Editors: G Hotz and H Fiedler and P Gorny and W Grass and S Hölldobler and IO Kerner and R Reischuk), Teubner Verlag, Stuttgart, 1998 (inbook)

[BibTex]