

2007


A time/frequency decomposition of information transmission by LFPs and spikes in the primary visual cortex

Belitski, A., Gretton, A., Magri, C., Murayama, Y., Montemurro, M., Logothetis, N., Panzeri, S.

37th Annual Meeting of the Society for Neuroscience (Neuroscience 2007), 37, pages: 1, November 2007 (poster)

ei

Web [BibTex]

Mining expression-dependent modules in the human interaction network

Georgii, E., Dietmann, S., Uno, T., Pagel, P., Tsuda, K.

BMC Bioinformatics, 8(Suppl. 8):S4, November 2007 (poster)

ei

PDF DOI [BibTex]

A Hilbert Space Embedding for Distributions

Smola, A., Gretton, A., Song, L., Schölkopf, B.

Proceedings of the 10th International Conference on Discovery Science (DS 2007), 10, pages: 40-41, October 2007 (poster)

Abstract
While kernel methods are the basis of many popular techniques in supervised learning, they are less commonly used in testing, estimation, and analysis of probability distributions, where information-theoretic approaches rule the roost. However, it becomes difficult to estimate mutual information or entropy if the data are high-dimensional.
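
One concrete quantity arising from such Hilbert space embeddings is the maximum mean discrepancy (MMD), the distance between the mean embeddings of two samples. The sketch below is not code from the paper; the Gaussian kernel, its bandwidth, and the toy data are illustrative assumptions. It computes a biased empirical MMD^2 estimate.

import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    # Pairwise Gaussian kernel matrix k(a, b) = exp(-||a - b||^2 / (2 sigma^2)).
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * sigma**2))

def mmd2(X, Y, sigma=1.0):
    # Biased estimate of MMD^2 = E[k(x,x')] - 2 E[k(x,y)] + E[k(y,y')].
    return (gaussian_kernel(X, X, sigma).mean()
            - 2 * gaussian_kernel(X, Y, sigma).mean()
            + gaussian_kernel(Y, Y, sigma).mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(0.0, 1.0, size=(200, 10))   # sample from P
    Y = rng.normal(0.5, 1.0, size=(200, 10))   # sample from Q (shifted mean)
    print("MMD^2 estimate:", mmd2(X, Y))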

ei

PDF PDF DOI [BibTex]

Predicting Structured Data

Bakir, G., Hofmann, T., Schölkopf, B., Smola, A., Taskar, B., Vishwanathan, S.

pages: 360, Advances in neural information processing systems, MIT Press, Cambridge, MA, USA, September 2007 (book)

Abstract
Machine learning develops intelligent computer systems that are able to generalize from previously seen examples. A new domain of machine learning, in which the prediction must satisfy the additional constraints found in structured data, poses one of machine learning’s greatest challenges: learning functional dependencies between arbitrary input and output domains. This volume presents and analyzes the state of the art in machine learning algorithms and theory in this novel field. The contributors discuss applications as diverse as machine translation, document markup, computational biology, and information extraction, among others, providing a timely overview of an exciting field.

ei

Web [BibTex]

Studying the effects of noise correlations on population coding using a sampling method

Ecker, A., Berens, P., Bethge, M., Logothetis, N., Tolias, A.

Neural Coding, Computation and Dynamics (NCCD 07), 1, pages: 21, September 2007 (poster)

ei

PDF [BibTex]

Advances in Neural Information Processing Systems 19: Proceedings of the 2006 Conference

Schölkopf, B., Platt, J., Hofmann, T.

Proceedings of the Twentieth Annual Conference on Neural Information Processing Systems (NIPS 2006), pages: 1690, MIT Press, Cambridge, MA, USA, 20th Annual Conference on Neural Information Processing Systems (NIPS), September 2007 (proceedings)

Abstract
The annual Neural Information Processing Systems (NIPS) conference is the flagship meeting on neural computation and machine learning. It draws a diverse group of attendees--physicists, neuroscientists, mathematicians, statisticians, and computer scientists--interested in theoretical and applied aspects of modeling, simulating, and building neural-like or intelligent systems. The presentations are interdisciplinary, with contributions in algorithms, learning theory, cognitive science, neuroscience, brain imaging, vision, speech and signal processing, reinforcement learning, and applications. Only twenty-five percent of the papers submitted are accepted for presentation at NIPS, so the quality is exceptionally high. This volume contains the papers presented at the December 2006 meeting, held in Vancouver.

ei

Web [BibTex]

Near-Maximum Entropy Models for Binary Neural Representations of Natural Images

Berens, P., Bethge, M.

Neural Coding, Computation and Dynamics (NCCD 07), 1, pages: 19, September 2007 (poster)

Abstract
Maximum entropy analysis of binary variables provides an elegant way for studying the role of pairwise correlations in neural populations. Unfortunately, these approaches suffer from their poor scalability to high dimensions. In sensory coding, however, high-dimensional data is ubiquitous. Here, we introduce a new approach using a near-maximum entropy model that makes this type of analysis feasible for very high-dimensional data: the model parameters can be derived in closed form and sampling is easy. We demonstrate its usefulness by studying a simple neural representation model of natural images. For the first time, we are able to directly compare predictions from a pairwise maximum entropy model not only in small groups of neurons, but also in larger populations of more than a thousand units. Our results indicate that in such larger networks interactions exist that are not predicted by pairwise correlations, despite the fact that pairwise correlations explain the lower-dimensional marginal statistics extremely well up to the limit of dimensionality where estimation of the full joint distribution is feasible.
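
One model family with the stated properties (closed-form parameters, easy sampling) is the dichotomized Gaussian, in which binary responses are obtained by thresholding a latent multivariate Gaussian; whether this is exactly the model used in the poster is an assumption, and the parameters below are purely illustrative.

import numpy as np

def sample_dichotomized_gaussian(mu, cov, n_samples, rng):
    # Latent Gaussian samples; a unit fires (1) whenever its latent value > 0,
    # so with unit latent variances P(x_i = 1) = Phi(mu_i), and pairwise
    # correlations between units are inherited from cov.
    z = rng.multivariate_normal(mu, cov, size=n_samples)
    return (z > 0).astype(int)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 50                                            # population size (illustrative)
    mu = np.full(n, -1.0)                             # sets marginal firing probabilities
    cov = 0.2 * np.ones((n, n)) + 0.8 * np.eye(n)     # weak uniform latent correlation
    x = sample_dichotomized_gaussian(mu, cov, 10000, rng)
    print("mean firing prob:", x.mean())
    print("mean pairwise corr:", np.corrcoef(x.T)[np.triu_indices(n, 1)].mean())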

ei

PDF [BibTex]

Error Correcting Codes for the P300 Visual Speller

Biessmann, F.

Biologische Kybernetik, Eberhard-Karls-Universität Tübingen, Tübingen, Germany, July 2007 (diplomathesis)

Abstract
The aim of brain-computer interface (BCI) research is to establish a communication system based on intentional modulation of brain activity. This is accomplished by classifying patterns of brain activity, volitionally induced by the user. The BCI presented in this study is based on a classical paradigm as proposed by Farwell and Donchin (1988), the P300 visual speller. Recording electroencephalograms (EEG) from the scalp while presenting letters successively to the user, the speller can infer from the brain signal which letter the user was focusing on. Since EEG recordings are noisy, usually many repetitions are needed to detect the correct letter. The focus of this study was to improve the accuracy of the visual speller by applying some basic principles from information theory: the stimulus sequences of the speller were modified into error-correcting codes. Additionally, a language model was incorporated into the probabilistic letter decoder. Classification of single EEG epochs was less accurate using error-correcting codes. However, the novel codes could compensate for this, such that overall letter accuracies were as high as or even higher than for classical stimulus codes. In particular at high noise levels, error-correcting decoding achieved higher letter accuracies.
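
As a rough illustration of the decoding step described above (an assumed formulation, not the thesis code), each letter can be assigned a binary codeword indicating in which stimulus events it is highlighted; per-event classifier probabilities are then combined with a language-model prior to pick the most likely letter. The code, noise model, and flat prior below are placeholders.

import numpy as np

def decode_letter(codewords, p_target, letter_prior):
    # codewords:    (n_letters, n_events) 0/1 matrix, one codeword per letter
    # p_target:     (n_events,) classifier probability that each event contained
    #               the attended (target) stimulus
    # letter_prior: (n_letters,) language-model probabilities
    p = np.clip(p_target, 1e-6, 1 - 1e-6)
    # Log-likelihood of the observed classifier outputs under each codeword.
    loglik = codewords @ np.log(p) + (1 - codewords) @ np.log(1 - p)
    post = loglik + np.log(letter_prior)
    return int(np.argmax(post))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    codewords = rng.integers(0, 2, size=(26, 16))       # illustrative random code
    true_letter = 4
    noise = rng.normal(0, 0.3, 16)
    p_target = np.clip(codewords[true_letter] * 0.7 + 0.15 + noise, 0.01, 0.99)
    prior = np.full(26, 1 / 26)                          # flat language model
    print("decoded letter index:", decode_letter(codewords, p_target, prior))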

ei

PDF [BibTex]

Learning the Influence of Spatio-Temporal Variations in Local Image Structure on Visual Saliency

Kienzle, W., Wichmann, F., Schölkopf, B., Franz, M.

10th T{\"u}binger Wahrnehmungskonferenz (TWK 2007), 10, pages: 1, July 2007 (poster)

Abstract
Computational models for bottom-up visual attention traditionally consist of a bank of Gabor-like or Difference-of-Gaussians filters and a nonlinear combination scheme which combines the filter responses into a real-valued saliency measure [1]. Recently it was shown that a standard machine learning algorithm can be used to derive a saliency model from human eye movement data with a very small number of additional assumptions. The learned model is much simpler than previous models, but nevertheless has state-of-the-art prediction performance [2]. A central result from this study is that DoG-like center-surround filters emerge as the unique solution to optimizing the predictivity of the model. Here we extend the learning method to the temporal domain. While the previous model [2] predicts visual saliency based on local pixel intensities in a static image, our model also takes into account temporal intensity variations. We find that the learned model responds strongly to temporal intensity changes occurring 200-250 ms before a saccade is initiated. This delay coincides with typical saccadic latencies, indicating that the learning algorithm has extracted a meaningful statistic from the training data. In addition, we show that the model correctly predicts a significant proportion of human eye movements on previously unseen test data.
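
A minimal sketch of the general learning setup described above (an assumed workflow, not the authors' implementation): spatio-temporal intensity patches around fixated and control locations are labelled and fed to a kernel classifier whose graded output serves as a saliency score. All data below are random surrogates; patch size, time window, and classifier settings are assumptions.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Surrogate data: 500 patches of 13x13 pixels x 5 time steps, flattened.
# In the real setting these would be cut from movies around recorded
# saccade targets and around random control locations.
X = rng.normal(size=(500, 13 * 13 * 5))
y = rng.integers(0, 2, size=500)            # 1 = fixated, 0 = control

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)

# decision_function gives a graded saliency value per patch.
saliency = model.decision_function(X_te)
print("held-out accuracy:", model.score(X_te, y_te))
print("example saliency scores:", saliency[:5])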

ei

Web [BibTex]

Nonparametric Bayesian Discrete Latent Variable Models for Unsupervised Learning

Görür, D.

Biologische Kybernetik, Technische Universität Berlin, Berlin, Germany, April 2007, published online (phdthesis)

ei

PDF PDF [BibTex]

Better Codes for the P300 Visual Speller

Biessmann, F., Hill, N., Farquhar, J., Schölkopf, B.

G{\"o}ttingen Meeting of the German Neuroscience Society, 7, pages: 123, March 2007 (poster)

ei

PDF [BibTex]

Do We Know What the Early Visual System Computes?

Bethge, M., Kayser, C.

31st G{\"o}ttingen Neurobiology Conference, 31, pages: 352, March 2007 (poster)

Abstract
Decades of research have provided much data and many insights into the mechanisms of the early visual system. Currently, however, there is great controversy over whether these findings can provide us with a thorough functional understanding of what the early visual system does, or, formulated differently, of what it computes. At the Society for Neuroscience meeting 2005 in Washington, a symposium was held on the question "Do we know what the early visual system does?", which was accompanied by a widely regarded publication in the Journal of Neuroscience. Yet that discussion was rather specialized, as it predominantly addressed the question of how well neural responses in retina, LGN, and cortex can be predicted from noise stimuli, but did not emphasize the question of whether we understand what the function of these early visual areas is. Here we concentrate on this neuro-computational aspect of vision. Experts from neurobiology, psychophysics and computational neuroscience will present studies which approach this question from different viewpoints and promote a critical discussion of whether we actually understand what early areas contribute to the processing and perception of visual information.

ei

PDF [BibTex]

Implicit Wiener Series for Estimating Nonlinear Receptive Fields

Franz, MO., Macke, JH., Saleem, A., Schultz, SR.

31st G{\"o}ttingen Neurobiology Conference, 31, pages: 1199, March 2007 (poster)

ei

PDF [BibTex]

3D Reconstruction of Neural Circuits from Serial EM Images

Maack, N., Kapfer, C., Macke, J., Schölkopf, B., Denk, W., Borst, A.

31st G{\"o}ttingen Neurobiology Conference, 31, pages: 1195, March 2007 (poster)

ei

PDF [BibTex]

Identifying temporal population codes in the retina using canonical correlation analysis

Bethge, M., Macke, J., Gerwinn, S., Zeck, G.

31st G{\"o}ttingen Neurobiology Conference, 31, pages: 359, March 2007 (poster)

ei

PDF PDF [BibTex]

Bayesian Neural System identification: error bars, receptive fields and neural couplings

Gerwinn, S., Seeger, M., Zeck, G., Bethge, M.

31st G{\"o}ttingen Neurobiology Conference, 31, pages: 360, March 2007 (poster)

ei

PDF PDF [BibTex]

Applications of Kernel Machines to Structured Data

Eichhorn, J.

Biologische Kybernetik, Technische Universität Berlin, Berlin, Germany, March 2007, passed with "sehr gut", published online (phdthesis)

ei

PDF [BibTex]

A priori Knowledge from Non-Examples

Sinz, FH.

Biologische Kybernetik, Eberhard-Karls-Universität Tübingen, Tübingen, Germany, March 2007 (diplomathesis)

ei

PDF Web [BibTex]

About the Triangle Inequality in Perceptual Spaces

Jäkel, F., Schölkopf, B., Wichmann, F.

Proceedings of the Computational and Systems Neuroscience Meeting 2007 (COSYNE), 4, pages: 308, February 2007 (poster)

ei

PDF Web [BibTex]

Center-surround filters emerge from optimizing predictivity in a free-viewing task

Kienzle, W., Wichmann, F., Schölkopf, B., Franz, M.

Proceedings of the Computational and Systems Neuroscience Meeting 2007 (COSYNE), 4, pages: 207, February 2007 (poster)

ei

PDF Web [BibTex]

Nonlinear Receptive Field Analysis: Making Kernel Methods Interpretable

Kienzle, W., Macke, J., Wichmann, F., Schölkopf, B., Franz, M.

Computational and Systems Neuroscience Meeting 2007 (COSYNE 2007), 4, pages: 16, February 2007 (poster)

ei

PDF Web [BibTex]

Estimating Population Receptive Fields in Space and Time

Macke, J., Zeck, G., Bethge, M.

Computational and Systems Neuroscience Meeting 2007 (COSYNE 2007), 4, pages: 44, February 2007 (poster)

ei

PDF Web [BibTex]

Machine Learning for Mass Production and Industrial Engineering

Pfingsten, T.

Biologische Kybernetik, Eberhard-Karls-Universität Tübingen, Tübingen, Germany, February 2007 (phdthesis)

ei

PDF [BibTex]

Development of a Brain-Computer Interface Approach Based on Covert Attention to Tactile Stimuli

Raths, C.

University of Tübingen, Tübingen, Germany, January 2007 (diplomathesis)

ei

[BibTex]

A Machine Learning Approach for Estimating the Attenuation Map for a Combined PET/MR Scanner

Hofmann, M.

Biologische Kybernetik, Max Planck Institute for Biological Cybernetics, Tübingen, Germany, 2007 (diplomathesis)

ei

[BibTex]

On the theory of magnetization dynamics of non-collinear spin systems in the s-d model

De Angeli, L.

Universität Stuttgart, Stuttgart, 2007 (mastersthesis)

mms

[BibTex]

Zur ab-initio Elektronentheorie des Magnetismus bei endlichen Temperaturen

Dietermann, F.

Universität Stuttgart, Stuttgart, 2007 (mastersthesis)

mms

[BibTex]

Röntgenzirkulardichroische Untersuchungen an ferromagnetischen verdünnten Halbleitersystemen

Tietze, T.

Universität Stuttgart, Stuttgart, 2007 (mastersthesis)

mms

[BibTex]

Low-dimensional Fe on vicinal Ir(997): Growth and magnetic properties

Kawwam, M.

Universität Stuttgart, Stuttgart, 2007 (mastersthesis)

mms

[BibTex]

Micromagnetic simulations of switching processes and the role of thermal fluctuations

Macke, S.

Universität Stuttgart, Stuttgart, 2007 (mastersthesis)

mms

[BibTex]

Physisorption von Wasserstoff in neuen Materialien mit großer spezifischer Oberfläche

Schmitz, B.

Universität Bonn, Bonn, 2007 (mastersthesis)

mms

[BibTex]

Towards spin injection into silicon

Dash, S. P.

Universität Stuttgart, Stuttgart, 2007 (phdthesis)

mms

link (url) [BibTex]

Bestimmung der kritischen Schichtdicken ferromagnetischer Plättchen für Eindomänenverhalten

Soehnle, S.

Universität Stuttgart, Stuttgart, 2007 (mastersthesis)

mms

[BibTex]

Zeitaufgelöste Röntgenmikroskopie an magnetischen Mikrostrukturen

Puzic, A.

Universität Stuttgart, Stuttgart, 2007 (phdthesis)

mms

link (url) [BibTex]

Vortex dynamics studied by time-resolved X-ray microscopy

Chou, K. W.

Universität Stuttgart, Stuttgart, 2007 (phdthesis)

mms

link (url) [BibTex]

Resonante magnetische Reflektometrie an Ferromagnet/Paramagnet Heterostrukturen

Ferreras Paz, V.

Universität Stuttgart, Stuttgart, 2007 (mastersthesis)

mms

[BibTex]

Herstellung und Charakterisierung dünner Niob-Schichten auf verschiedenen Substraten

Mayer, M. W. R.

Universität Stuttgart, Stuttgart, 2007 (mastersthesis)

mms

[BibTex]

Formation of hard magnetic L10-FePt/FePd monolayers from elemental multilayers

Goo, N. H.

Universität Stuttgart, Stuttgart, 2007 (phdthesis)

mms

link (url) [BibTex]

Zur ab-initio Elektronentheorie stark nichtkollinearer Spinsysteme

Köberle, I.

Universität Stuttgart, Stuttgart, 2007 (mastersthesis)

mms

[BibTex]

Theorie der Kernspektroskopie mit zirkular polarisierter Gammastrahlung

Engelhart, W.

Universität Stuttgart, Stuttgart, 2007 (mastersthesis)

mms

[BibTex]

Untersuchung der Adsorption von Wasserstoff in porösen Materialien

Hönes, K.

Universität Stuttgart, Stuttgart, 2007 (mastersthesis)

mms

[BibTex]

Untersuchung der mechanischen Eigenschaften dünner Chromschichten

Jüllig, P.

Universität Stuttgart, Stuttgart, 2007 (mastersthesis)

mms

[BibTex]

2004


S-cones contribute to flicker brightness in human vision

Wehrhahn, C., Hill, NJ., Dillenburger, B.

34(174.12), 34th Annual Meeting of the Society for Neuroscience (Neuroscience), October 2004 (poster)

Abstract
In the retina of primates three cone types sensitive to short, middle and long wavelengths of light convert photons into electrical signals. Many investigators have presented evidence that, in color-normal observers, the signals of cones sensitive to short wavelengths of light (S-cones) do not contribute to the perception of brightness of a colored surface when this is alternated with an achromatic reference (flicker brightness). Other studies indicate that humans do use S-cone signals when performing this task. Common to all these studies is the small number of observers whose performance data are reported. Considerable variability in the occurrence of cone types across observers has been found, but, to our knowledge, no cone counts exist from larger populations of humans. We reinvestigated how much the S-cones contribute to flicker brightness. 76 color-normal observers were tested in a simple psychophysical procedure neutral to the cone type occurrence (Teufel & Wehrhahn (2000), JOSA A 17: 994 - 1006). The data show that, in the majority of our observers, S-cones provide input with a negative sign - relative to L- and M-cone contribution - in the task in question. There is indeed considerable between-subject variability, such that for 20 out of 76 observers the magnitude of this input does not differ significantly from 0. Finally, we argue that the sign of an observer's S-cone contribution to flicker brightness perception cannot be used to infer the relative sign of their contributions to the neuronal signals carrying the information leading to the perception of flicker brightness. We conclude that studies which use only a small number of observers may easily fail to find significant evidence for the small but consistent population tendency for the S-cones to contribute to flicker brightness. Our results confirm all earlier results and reconcile their contradictory interpretations.

ei

Web [BibTex]

Advanced Lectures on Machine Learning

Bousquet, O., von Luxburg, U., Rätsch, G.

ML Summer Schools 2003, LNAI 3176, pages: 240, Springer, Berlin, Germany, ML Summer Schools, September 2004 (proceedings)

Abstract
Machine Learning has become a key enabling technology for many engineering applications, investigating scientific questions and theoretical problems alike. To stimulate discussions and to disseminate new results, a summer school series was started in February 2002, the documentation of which is published as LNAI 2600. This book presents revised lectures of two subsequent summer schools held in 2003 in Canberra, Australia, and in Tübingen, Germany. The tutorial lectures included are devoted to statistical learning theory, unsupervised learning, Bayesian inference, and applications in pattern recognition; they provide in-depth overviews of exciting new developments and contain a large number of references. Graduate students, lecturers, researchers and professionals alike will find this book a useful resource in learning and teaching machine learning.

ei

Web [BibTex]

Pattern Recognition: 26th DAGM Symposium, LNCS, Vol. 3175

Rasmussen, C., Bülthoff, H., Giese, M., Schölkopf, B.

Proceedings of the 26th Pattern Recognition Symposium (DAGM'04), pages: 581, Springer, Berlin, Germany, 26th Pattern Recognition Symposium, August 2004 (proceedings)

ei

Web DOI [BibTex]

Kernel Methods in Computational Biology

Schölkopf, B., Tsuda, K., Vert, J.

pages: 410, Computational Molecular Biology, MIT Press, Cambridge, MA, USA, August 2004 (book)

Abstract
Modern machine learning techniques are proving to be extremely valuable for the analysis of data in computational biology problems. One branch of machine learning, kernel methods, lends itself particularly well to the difficult aspects of biological data, which include high dimensionality (as in microarray measurements), representation as discrete and structured data (as in DNA or amino acid sequences), and the need to combine heterogeneous sources of information. This book provides a detailed overview of current research in kernel methods and their applications to computational biology. Following three introductory chapters—an introduction to molecular and computational biology, a short review of kernel methods that focuses on intuitive concepts rather than technical details, and a detailed survey of recent applications of kernel methods in computational biology—the book is divided into three sections that reflect three general trends in current research. The first part presents different ideas for the design of kernel functions specifically adapted to various biological data; the second part covers different approaches to learning from heterogeneous data; and the third part offers examples of successful applications of support vector machine methods.
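
As a small illustration of the kind of kernel discussed in the book for discrete, structured biological data, the sketch below implements a simple k-mer spectrum kernel for DNA sequences (an illustrative example, not code from the book): two sequences are compared through the counts of the short substrings they share, which yields a valid kernel for use in an SVM.

from collections import Counter

def spectrum(seq, k=3):
    # Count all overlapping k-mers in the sequence.
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def spectrum_kernel(s1, s2, k=3):
    # Inner product of the two k-mer count vectors.
    c1, c2 = spectrum(s1, k), spectrum(s2, k)
    return sum(c1[kmer] * c2[kmer] for kmer in c1)

if __name__ == "__main__":
    a = "ACGTACGTGGTA"
    b = "ACGTTTACGTAC"
    print("k(a, b) =", spectrum_kernel(a, b))
    print("k(a, a) =", spectrum_kernel(a, a))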

ei

Web [BibTex]

Advances in Neural Information Processing Systems 16: Proceedings of the 2003 Conference

Thrun, S., Saul, L., Schölkopf, B.

Proceedings of the Seventeenth Annual Conference on Neural Information Processing Systems (NIPS 2003), pages: 1621, MIT Press, Cambridge, MA, USA, 17th Annual Conference on Neural Information Processing Systems (NIPS), June 2004 (proceedings)

Abstract
The annual Neural Information Processing (NIPS) conference is the flagship meeting on neural computation. It draws a diverse group of attendees—physicists, neuroscientists, mathematicians, statisticians, and computer scientists. The presentations are interdisciplinary, with contributions in algorithms, learning theory, cognitive science, neuroscience, brain imaging, vision, speech and signal processing, reinforcement learning and control, emerging technologies, and applications. Only thirty percent of the papers submitted are accepted for presentation at NIPS, so the quality is exceptionally high. This volume contains all the papers presented at the 2003 conference.

ei

Web [BibTex]

Human Classification Behaviour Revisited by Machine Learning

Graf, A., Wichmann, F., Bülthoff, H., Schölkopf, B.

7, pages: 134, (Editors: Bülthoff, H.H., H.A. Mallot, R. Ulrich and F.A. Wichmann), 7th Tübingen Perception Conference (TWK), February 2004 (poster)

Abstract
We attempt to understand visual classification in humans using both psychophysical and machine learning techniques. Frontal views of human faces were used for a gender classification task. Human subjects classified the faces, and their gender judgment, reaction time (RT) and confidence rating (CR) were recorded for each face. RTs are longer for incorrect answers than for correct ones, high CRs are correlated with low classification errors, and RTs decrease as the CRs increase. These results suggest that patterns difficult to classify need more computation by the brain than patterns easy to classify. Hyperplane learning algorithms such as Support Vector Machines (SVM), Relevance Vector Machines (RVM), Prototype learners (Prot) and K-means learners (Kmean) were used on the same classification task using the principal components of the texture and flow-field representation of the faces. The classification performance of the learning algorithms was estimated using the face database with the true gender of the faces as labels, and also with the gender estimated by the subjects. Kmean yields a classification performance close to humans, while SVM and RVM are much better. This surprising behaviour may be due to the fact that humans are trained on real faces during their lifetime while they were here tested on artificial ones, whereas the algorithms were trained and tested on the same set of stimuli. We then correlated the human responses to the distance of the stimuli to the separating hyperplane (SH) of the learning algorithms. On the whole, stimuli far from the SH are classified more accurately, faster and with higher confidence than those near to the SH if we pool data across all our subjects and stimuli. We also find three noteworthy results. First, SVMs and RVMs can learn to classify faces using the subjects' labels but perform much better when using the true labels. Second, correlating the average response of humans (classification error, RT or CR) with the distance to the SH on a face-by-face basis using Spearman's rank correlation coefficients shows that RVMs recreate human performance most closely in every respect. Third, the mean-of-class prototype, its popularity in neuroscience notwithstanding, is the least human-like classifier in all cases examined.
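
A minimal sketch of the distance-to-hyperplane analysis described above (an assumed workflow with surrogate data, not the authors' code): train a linear SVM on PCA features, take each stimulus's distance to the separating hyperplane, and rank-correlate it with a human response measure such as reaction time using Spearman's coefficient. The feature dimensions, number of components, and simulated reaction times are placeholders.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Surrogate stimuli: 200 "faces" as 400-dim feature vectors with binary gender
# labels; human reaction times are simulated here purely for illustration.
X = rng.normal(size=(200, 400))
y = rng.integers(0, 2, size=200)
rt = rng.gamma(shape=2.0, scale=300.0, size=200)     # placeholder reaction times

X_pca = PCA(n_components=20).fit_transform(X)
clf = SVC(kernel="linear").fit(X_pca, y)

# Unsigned distance to the separating hyperplane, per face.
dist = np.abs(clf.decision_function(X_pca))

rho, p = spearmanr(dist, rt)
print(f"Spearman rho between |distance| and RT: {rho:.3f} (p = {p:.3f})")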

ei

Web [BibTex]

m-Alternative-Forced-Choice: Improving the Efficiency of the Method of Constant Stimuli

Jäkel, F., Hill, J., Wichmann, F.

7, pages: 118, 7th Tübingen Perception Conference (TWK), February 2004 (poster)

Abstract
We explored several ways to improve the efficiency of measuring psychometric functions without resorting to adaptive procedures. a) The number m of alternatives in an m-alternative-forced-choice (m-AFC) task improves the efficiency of the method of constant stimuli. b) When alternatives are presented simultaneously at different positions on a screen rather than sequentially, time can be saved and the memory load for the subject can be reduced. c) A touch-screen can further help to make the experimental procedure more intuitive. We tested these ideas in the measurement of contrast sensitivity and compared them to results obtained by sequential presentation in two-interval-forced-choice (2-IFC). Qualitatively, all methods (m-AFC and 2-IFC) recovered the characteristic shape of the contrast sensitivity function in three subjects. The m-AFC paradigm only took about 60% of the time of the 2-IFC task. We tried m=2,4,8 and found 4-AFC to give the best model fits and 2-AFC to have the least bias.
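
The modelling point behind the m-AFC design is that the psychometric function's lower asymptote (guess rate) is 1/m. The sketch below is illustrative throughout: the Weibull form, the absence of a lapse term, and all parameter values are assumptions, not taken from the poster. It fits such a function to simulated constant-stimuli data by maximum likelihood.

import numpy as np
from scipy.optimize import minimize

def psychometric(c, alpha, beta, m):
    # Weibull psychometric function with guess rate 1/m and no lapse term.
    return 1.0 / m + (1.0 - 1.0 / m) * (1.0 - np.exp(-(c / alpha) ** beta))

def fit_mle(contrasts, n_correct, n_trials, m):
    def nll(params):
        alpha, beta = params
        p = np.clip(psychometric(contrasts, alpha, beta, m), 1e-6, 1 - 1e-6)
        return -np.sum(n_correct * np.log(p) + (n_trials - n_correct) * np.log(1 - p))
    return minimize(nll, x0=[0.05, 2.0], method="Nelder-Mead").x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    m = 4                                             # 4-AFC
    contrasts = np.array([0.01, 0.02, 0.04, 0.08, 0.16])
    n_trials = np.full(5, 40)                         # method of constant stimuli
    p_true = psychometric(contrasts, alpha=0.04, beta=2.5, m=m)
    n_correct = rng.binomial(n_trials, p_true)
    alpha_hat, beta_hat = fit_mle(contrasts, n_correct, n_trials, m)
    print(f"estimated threshold alpha = {alpha_hat:.3f}, slope beta = {beta_hat:.2f}")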

ei

Web [BibTex]

Efficient Approximations for Support Vector Classifiers

Kienzle, W., Franz, M.

7, pages: 68, 7th Tübingen Perception Conference (TWK), February 2004 (poster)

Abstract
In face detection, support vector machines (SVM) and neural networks (NN) have been shown to outperform most other classification methods. While both approaches are learning-based, there are distinct advantages and drawbacks to each method: NNs are difficult to design and train but can lead to very small and efficient classifiers. In comparison, SVM model selection and training is rather straightforward, and, more importantly, guaranteed to converge to a globally optimal (in the sense of training errors) solution. Unfortunately, SVM classifiers tend to have large representations which are inappropriate for time-critical image processing applications. In this work, we examine various existing and new methods for simplifying support vector decision rules. Our goal is to obtain efficient classifiers (as with NNs) while keeping the numerical and statistical advantages of SVMs. For a given SVM solution, we compute a cascade of approximations with increasing complexities. Each classifier is tuned so that the detection rate is near 100%. At run-time, the first (simplest) detector is evaluated on the whole image. Then, any subsequent classifier is applied only to those positions that have been classified as positive throughout all previous stages. The false positive rate at the end equals that of the last (i.e. most complex) detector. In contrast, since many image positions are discarded by lower-complexity classifiers, the average computation time per patch decreases significantly compared to the time needed for evaluating the highest-complexity classifier alone.
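
A minimal sketch of the cascade evaluation described above (assumed structure, not the authors' implementation): detectors of increasing complexity are applied in order, and each stage only re-examines the image positions that all previous stages accepted. The linear scoring functions and thresholds below are placeholders standing in for the simplified and full SVM stages.

import numpy as np

def cascade_detect(positions, detectors, thresholds):
    # positions:  (n, d) feature vectors, one per image position
    # detectors:  list of scoring functions, simplest first, most complex last
    # thresholds: per-stage acceptance thresholds (tuned for near-100% detection)
    candidates = np.arange(len(positions))
    for detect, thr in zip(detectors, thresholds):
        scores = detect(positions[candidates])
        candidates = candidates[scores > thr]    # discard clear negatives early
        if candidates.size == 0:
            break
    return candidates                            # indices surviving all stages

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(10000, 16))
    w_cheap, w_full = rng.normal(size=16), rng.normal(size=16)
    detectors = [lambda Z: Z @ w_cheap,           # crude linear stage
                 lambda Z: Z @ w_full]            # stand-in for the full SVM
    kept = cascade_detect(X, detectors, thresholds=[-1.0, 0.5])
    print("positions surviving the cascade:", kept.size)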

ei

Web [BibTex]
