

2016


Non-parametric Models for Structured Data and Applications to Human Bodies and Natural Scenes

Lehrmann, A.

ETH Zurich, July 2016 (phdthesis)

Abstract
The purpose of this thesis is the study of non-parametric models for structured data and their fields of application in computer vision. We aim at the development of context-sensitive architectures which are both expressive and efficient. Our focus is on directed graphical models, in particular Bayesian networks, where we combine the flexibility of non-parametric local distributions with the efficiency of a global topology with bounded treewidth. A bound on the treewidth is obtained either by constraining the maximum indegree of the underlying graph structure or by introducing determinism. The non-parametric distributions in the nodes of the graph are given by decision trees or kernel density estimators. The information flow implied by specific network topologies, especially the resultant (conditional) independencies, allows for a natural integration and control of contextual information. We distinguish between three different types of context: static, dynamic, and semantic. In four different approaches, we propose models that exhibit varying combinations of these contextual properties and allow modeling of structured data in space, time, and hierarchies derived thereof. The generative character of the presented models enables a direct synthesis of plausible hypotheses. Extensive experiments validate the developed models in two application scenarios which are of particular interest in computer vision: human bodies and natural scenes. In the practical sections of this work we discuss both areas from different angles and show applications of our models to human pose, motion, and segmentation as well as object categorization and localization. Here, we benefit from the availability of modern datasets of unprecedented size and diversity. Comparisons to traditional approaches and state-of-the-art research on the basis of well-established evaluation criteria allow an objective assessment of our contributions.

ps

pdf [BibTex]
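To make the abstract's notion of non-parametric local distributions concrete, the sketch below shows a Gaussian kernel density estimator used as a conditional distribution p(child | parent) in a single Bayesian-network node. This is an illustrative reconstruction, not the thesis code; the data, bandwidths, and function names are assumptions.

```python
# Minimal sketch (not the thesis code): a Gaussian kernel density estimator
# acting as a non-parametric local distribution p(child | parent) in one
# Bayesian-network node. Bandwidths and toy data are illustrative.
import numpy as np

def conditional_kde(child_samples, parent_samples, bw_child=0.3, bw_parent=0.3):
    """Return p(child | parent) estimated from paired samples (Nadaraya-Watson form)."""
    def pdf(child, parent):
        # Kernel weights of the query parent value against the training parents.
        w = np.exp(-0.5 * ((parent - parent_samples) / bw_parent) ** 2)
        # Gaussian kernels centred on the training child values.
        k = np.exp(-0.5 * ((child - child_samples) / bw_child) ** 2) / (bw_child * np.sqrt(2 * np.pi))
        return float(np.sum(w * k) / np.sum(w))
    return pdf

# Toy usage: child ~ sin(parent) + noise.
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 500)
y = np.sin(x) + 0.1 * rng.standard_normal(500)
p = conditional_kde(y, x)
print(p(np.sin(1.0), 1.0))   # high density near the regression curve
print(p(2.0, 1.0))           # low density far from it
```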


Statische und dynamische Magnetisierungseigenschaften nanoskaliger Überstrukturen [Static and Dynamic Magnetization Properties of Nanoscale Superstructures]

Gräfe, J.

Universität Stuttgart, Stuttgart (and Cuvillier Verlag, Göttingen), 2016 (phdthesis)

mms

[BibTex]



Gepinnte Bahnmomente in magnetischen Heterostrukturen [Pinned Orbital Moments in Magnetic Heterostructures]

Audehm, P.

Universität Stuttgart, Stuttgart (and Cuvillier Verlag, Göttingen), 2016 (phdthesis)

mms

[BibTex]



Austauschgekoppelte Moden in magnetischen Vortexstrukturen [Exchange-Coupled Modes in Magnetic Vortex Structures]

Dieterle, G.

Universität Stuttgart, Stuttgart, 2016 (phdthesis)

mms

[BibTex]



Density matrix calculations for the ultrafast demagnetization after femtosecond laser pulses

Weng, Weikai

Universität Stuttgart, Stuttgart, 2016 (mastersthesis)

mms

[BibTex]



Deep Learning for Diabetic Retinopathy Diagnostics

Balles, Lukas

Heidelberg University, 2016 (mastersthesis)

[BibTex]



Helium and Hydrogen Isotope Adsorption and Separation in Metal-Organic Frameworks

Zaiser, Ingrid

Universität Stuttgart, Stuttgart (and Cuvillier Verlag, Göttingen), 2016 (phdthesis)

mms

[BibTex]


2006


Semi-Supervised Learning

Chapelle, O., Schölkopf, B., Zien, A.

pages: 508, Adaptive computation and machine learning, MIT Press, Cambridge, MA, USA, September 2006 (book)

Abstract
In the field of machine learning, semi-supervised learning (SSL) occupies the middle ground between supervised learning (in which all training examples are labeled) and unsupervised learning (in which no labeled data are given). Interest in SSL has increased in recent years, particularly because of application domains in which unlabeled data are plentiful, such as images, text, and bioinformatics. This first comprehensive overview of SSL presents state-of-the-art algorithms, a taxonomy of the field, selected applications, benchmark experiments, and perspectives on ongoing and future research. Semi-Supervised Learning first presents the key assumptions and ideas underlying the field: smoothness, cluster or low-density separation, manifold structure, and transduction. The core of the book is the presentation of SSL methods, organized according to algorithmic strategies. After an examination of generative models, the book describes algorithms that implement the low-density separation assumption, graph-based methods, and algorithms that perform two-step learning. The book then discusses SSL applications and offers guidelines for SSL practitioners by analyzing the results of extensive benchmark experiments. Finally, the book looks at interesting directions for SSL research. The book closes with a discussion of the relationship between semi-supervised learning and transduction.

ei

Web [BibTex]
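As a rough illustration of the graph-based SSL methods the abstract refers to, here is a minimal label-propagation sketch in plain NumPy. It is not taken from the book; the kernel width, iteration count, and toy data are assumptions made for the example.

```python
# Minimal sketch of graph-based semi-supervised learning: labels are
# propagated from a few labelled points to unlabelled ones over an RBF
# similarity graph. Parameters below are illustrative, not from the book.
import numpy as np

def label_propagation(X, y, sigma=0.5, n_iter=100):
    """y holds class indices for labelled points and -1 for unlabelled ones."""
    # Dense RBF affinity matrix, row-normalised into a transition matrix.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    T = W / W.sum(axis=1, keepdims=True)

    n_classes = int(y.max()) + 1
    F = np.zeros((len(X), n_classes))
    labelled = y >= 0
    F[labelled, y[labelled]] = 1.0

    for _ in range(n_iter):
        F = T @ F                      # diffuse label mass along graph edges
        F[labelled] = 0.0              # clamp the known labels each iteration
        F[labelled, y[labelled]] = 1.0
    return F.argmax(axis=1)

# Two Gaussian clusters, one labelled point per cluster.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
y = -np.ones(100, dtype=int)
y[0], y[50] = 0, 1
print(label_propagation(X, y))  # remaining points inherit their cluster's label
```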



Kernel PCA for Image Compression

Huhle, B.

Biologische Kybernetik, Eberhard-Karls-Universität, Tübingen, Germany, April 2006 (diplomathesis)

ei

PDF [BibTex]



Gaussian Process Models for Robust Regression, Classification, and Reinforcement Learning

Kuss, M.

Biologische Kybernetik, Technische Universität Darmstadt, Darmstadt, Germany, March 2006, passed with distinction, published online (phdthesis)

ei

PDF [BibTex]



Gaussian Processes for Machine Learning

Rasmussen, CE., Williams, CKI.

pages: 248, Adaptive Computation and Machine Learning, MIT Press, Cambridge, MA, USA, January 2006 (book)

Abstract
Gaussian processes (GPs) provide a principled, practical, probabilistic approach to learning in kernel machines. GPs have received increased attention in the machine-learning community over the past decade, and this book provides a long-needed systematic and unified treatment of theoretical and practical aspects of GPs in machine learning. The treatment is comprehensive and self-contained, targeted at researchers and students in machine learning and applied statistics. The book deals with the supervised-learning problem for both regression and classification, and includes detailed algorithms. A wide variety of covariance (kernel) functions are presented and their properties discussed. Model selection is discussed both from a Bayesian and a classical perspective. Many connections to other well-known techniques from machine learning and statistics are discussed, including support-vector machines, neural networks, splines, regularization networks, relevance vector machines and others. Theoretical issues including learning curves and the PAC-Bayesian framework are treated, and several approximation methods for learning with large datasets are discussed. The book contains illustrative examples and exercises, and code and datasets are available on the Web. Appendixes provide mathematical background and a discussion of Gaussian Markov processes.

ei

Web [BibTex]
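For readers who want a concrete anchor, the following is a minimal sketch of GP regression with a squared-exponential kernel, in the spirit of the book's Cholesky-based prediction algorithm (Algorithm 2.1). The hyperparameters and toy data are chosen purely for illustration.

```python
# Minimal sketch of Gaussian-process regression with a squared-exponential
# kernel, following the Cholesky-based predictive equations. Hyperparameters
# are fixed by hand for illustration only.
import numpy as np

def sqexp(a, b, ell=1.0, sf=1.0):
    """Squared-exponential covariance k(a, b) for 1-D inputs."""
    return sf**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

def gp_predict(X, y, Xs, noise=0.1):
    K = sqexp(X, X) + noise**2 * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))   # K^{-1} y via triangular solves
    Ks = sqexp(X, Xs)
    mean = Ks.T @ alpha                                    # predictive mean
    v = np.linalg.solve(L, Ks)
    var = np.diag(sqexp(Xs, Xs)) - np.sum(v**2, axis=0)   # predictive variance (noise-free)
    return mean, var

# Toy data: noisy sine observations, predictions on a small grid.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, 20)
y = np.sin(X) + 0.1 * rng.standard_normal(20)
Xs = np.linspace(-3, 3, 5)
mean, var = gp_predict(X, y, Xs)
print(np.round(mean, 2), np.round(np.sqrt(var), 2))
```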



Elektronentheorie der magnetischen EXAFS [Electron Theory of Magnetic EXAFS]

Güßmann, M.

Universität Stuttgart, Stuttgart, 2006 (mastersthesis)

mms

[BibTex]



Elektronenspektroskopie an Übergangsmetallclustern [Electron Spectroscopy of Transition-Metal Clusters]

Heßler, M.

Bayerische Julius-Maximilians-Universität, Würzburg, 2006 (phdthesis)

mms

[BibTex]



Hydrogen storage by physisorption on porous materials

Panella, B.

Universität Stuttgart, Stuttgart, 2006 (phdthesis)

mms

link (url) [BibTex]



Theory of magnetic x-ray reflectometry on the Co2Pt7 multilayer system

Martosiswoyo, L.

Universität Stuttgart, Stuttgart, 2006 (mastersthesis)

mms

[BibTex]



Magnetischer zirkularer Röntgendichroismus an Übergangsmetalloxiden [Magnetic Circular X-Ray Dichroism in Transition-Metal Oxides]

Lafkioti, M.

Universität Stuttgart, Stuttgart, 2006 (mastersthesis)

mms

[BibTex]



Contributions to the theory of x-ray magnetic dichroism

Dörfler, F.

Universität Stuttgart, Stuttgart, 2006 (mastersthesis)

mms

[BibTex]


2000


Advances in Large Margin Classifiers

Smola, A., Bartlett, P., Schölkopf, B., Schuurmans, D.

pages: 422, Neural Information Processing, MIT Press, Cambridge, MA, USA, October 2000 (book)

Abstract
The concept of large margins is a unifying principle for the analysis of many different approaches to the classification of data from examples, including boosting, mathematical programming, neural networks, and support vector machines. The fact that it is the margin, or confidence level, of a classification--that is, a scale parameter--rather than a raw training error that matters has become a key tool for dealing with classifiers. This book shows how this idea applies to both the theoretical analysis and the design of algorithms. The book provides an overview of recent developments in large margin classifiers, examines connections with other methods (e.g., Bayesian inference), and identifies strengths and weaknesses of the method, as well as directions for future research. Among the contributors are Manfred Opper, Vladimir Vapnik, and Grace Wahba.

ei

Web [BibTex]
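The book's central quantity, the margin of a classification, can be illustrated in a few lines. The sketch below computes the geometric margin y_i (w·x_i + b)/||w|| of a fixed linear classifier on toy data; the weight vector and data are assumptions made for the example, not taken from the book.

```python
# Minimal illustration of the geometric margin of a linear classifier:
# the signed, label-weighted distance of each example to the hyperplane.
import numpy as np

def geometric_margins(X, y, w, b):
    """Signed distance of each example to the hyperplane w.x + b = 0,
    multiplied by its label: positive means correctly classified."""
    return y * (X @ w + b) / np.linalg.norm(w)

# Two linearly separable classes and a separating hyperplane.
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -1.0], [-3.0, -2.0]])
y = np.array([1, 1, -1, -1])
w, b = np.array([1.0, 1.0]), 0.0

m = geometric_margins(X, y, w, b)
print(m)        # per-example margins
print(m.min())  # the classifier's margin: its smallest (worst-case) value
```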



Diffusion von Wasserstoff in Lavesphasen / Diffusion von Wasserstoff in heterogenen Systemen [Diffusion of Hydrogen in Laves Phases / Diffusion of Hydrogen in Heterogeneous Systems]

Herrmann, A.

Universität Stuttgart, Stuttgart, 2000 (phdthesis)

mms

[BibTex]



Untersuchung von Magnetisierungsprozessen in dünnen Nd2Fe14B-Schichten [Investigation of Magnetization Processes in Thin Nd2Fe14B Films]

Melsheimer, A.

Universität Stuttgart, Stuttgart, 2000 (phdthesis)

mms

[BibTex]
