Kernel PCA and De-noising in feature spaces

Kernel PCA as a nonlinear feature extractor has proven powerful as a preprocessing step for classification algorithms. But it can also be considered a natural generalization of linear principal component analysis. This gives rise to the question of how to use nonlinear features for data compression, reconstruction, and de-noising, applications common in linear PCA. This is a nontrivial task, as the results provided by kernel PCA live in some high-dimensional feature space and need not have pre-images in input space. This work presents ideas for finding approximate pre-images, focusing on Gaussian kernels, and shows experimental results using these pre-images in data reconstruction and de-noising on toy examples as well as on real-world data.
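
As an illustration of the pipeline the abstract describes (project data onto the leading nonlinear components, then map back to input space via approximate pre-images), here is a minimal Python sketch using scikit-learn's KernelPCA with a Gaussian (RBF) kernel. Note that scikit-learn approximates pre-images by learning an inverse map with kernel ridge regression rather than necessarily following the scheme proposed in this paper; the toy data set, kernel width, and number of components are illustrative assumptions.

import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)

# Toy data: a half-circle in 2D, corrupted by Gaussian noise.
t = rng.uniform(0.0, np.pi, size=300)
clean = np.c_[np.cos(t), np.sin(t)]
noisy = clean + 0.1 * rng.normal(size=clean.shape)

# Gaussian (RBF) kernel PCA; the number of components, the kernel width
# gamma, and the ridge parameter alpha are illustrative choices.
kpca = KernelPCA(
    n_components=4,
    kernel="rbf",
    gamma=5.0,
    fit_inverse_transform=True,  # learn an approximate pre-image map
    alpha=1e-3,
)
kpca.fit(noisy)

# De-noise: project onto the leading nonlinear components, then map the
# projections back to approximate pre-images in input space.
denoised = kpca.inverse_transform(kpca.transform(noisy))

print("mean squared distance to the clean curve:")
print("  noisy   :", float(np.mean(np.sum((noisy - clean) ** 2, axis=1))))
print("  denoised:", float(np.mean(np.sum((denoised - clean) ** 2, axis=1))))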

Author(s): Mika, S. and Schölkopf, B. and Smola, AJ. and Müller, K-R. and Scholz, M. and Rätsch, G.
Book Title: Advances in Neural Information Processing Systems 11
Journal: Advances in Neural Information Processing Systems
Pages: 536-542
Year: 1999
Month: June
Editors: MS Kearns and SA Solla and DA Cohn
Publisher: MIT Press
Bibtex Type: Conference Paper (inproceedings)
Address: Cambridge, MA, USA
Event Name: 12th Annual Conference on Neural Information Processing Systems (NIPS 1998)
Event Place: Denver, CO, USA
ISBN: 0-262-11245-0
Organization: Max-Planck-Gesellschaft
School: Biologische Kybernetik

BibTeX

@inproceedings{806,
  title = {Kernel PCA and De-noising in feature spaces},
  journal = {Advances in Neural Information Processing Systems},
  booktitle = {Advances in Neural Information Processing Systems 11},
  abstract = {Kernel PCA as a nonlinear feature extractor has proven powerful as a preprocessing step for classification algorithms. But it can also be considered a natural generalization of linear principal component analysis.
  This gives rise to the question of how to use nonlinear features for data compression, reconstruction, and de-noising, applications common in linear PCA. This is a nontrivial task, as the results provided by kernel PCA live in some high-dimensional feature space and need not have
  pre-images in input space. This work presents ideas for finding approximate pre-images, focusing on Gaussian kernels, and shows experimental results using these pre-images in data reconstruction and de-noising on toy examples as well as on real-world data.},
  pages = {536-542},
  editors = {MS Kearns and SA Solla and DA Cohn},
  publisher = {MIT Press},
  organization = {Max-Planck-Gesellschaft},
  school = {Biologische Kybernetik},
  address = {Cambridge, MA, USA},
  month = jun,
  year = {1999},
  slug = {806},
  author = {Mika, S. and Sch{\"o}lkopf, B. and Smola, AJ. and M{\"u}ller, K-R. and Scholz, M. and R{\"a}tsch, G.},
  month_numeric = {6}
}