We generalise the Gaussian process (GP) framework for regression by learning a nonlinear transformation of the GP outputs. This allows for non-Gaussian processes and non-Gaussian noise. The learning algorithm chooses a nonlinear transformation such that transformed data is well-modelled by a GP. This can be seen as including a preprocessing transformation as an integral part of the probabilistic modelling problem, rather than as an ad-hoc step. We demonstrate on several real regression problems that learning the transformation can lead to significantly better performance than using a regular GP, or a GP with a fixed transformation.
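A minimal sketch of the idea in the abstract: warp the observed targets through a parametric monotonic function, evaluate a standard GP marginal likelihood on the warped values, and add the log-Jacobian of the warp so the density is over the original observations. The tanh-sum warp follows the parametric form used in the paper; the fixed RBF kernel, all parameter values, and the function names below are illustrative assumptions, not the authors' implementation (in the paper the warp and kernel parameters are learned jointly by maximizing this likelihood).

```python
import numpy as np

def warp(y, a, b, c, d):
    # Monotonic tanh-sum warp g(y) = a*y + sum_i b_i * tanh(c_i * (y + d_i)),
    # monotone increasing when a and all b_i, c_i are positive.
    return a * y + np.sum(b[:, None] * np.tanh(c[:, None] * (y[None, :] + d[:, None])), axis=0)

def warp_grad(y, a, b, c, d):
    # dg/dy, needed for the Jacobian term in the warped-GP log likelihood.
    t = np.tanh(c[:, None] * (y[None, :] + d[:, None]))
    return a + np.sum(b[:, None] * c[:, None] * (1.0 - t ** 2), axis=0)

def rbf_kernel(x1, x2, length_scale=1.0, signal_var=1.0):
    # Squared-exponential kernel on 1-D inputs (an illustrative choice).
    return signal_var * np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / length_scale ** 2)

def warped_gp_log_likelihood(X, y, a, b, c, d, noise_var=0.1):
    # GP log marginal likelihood of the warped targets z = g(y), plus the
    # change-of-variables term sum(log g'(y)) back to observation space.
    z = warp(y, a, b, c, d)
    K = rbf_kernel(X, X) + noise_var * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, z))
    gp_ll = (-0.5 * z @ alpha
             - np.sum(np.log(np.diag(L)))
             - 0.5 * len(X) * np.log(2.0 * np.pi))
    return gp_ll + np.sum(np.log(warp_grad(y, a, b, c, d)))

# toy demo on synthetic 1-D data with hand-picked (not learned) warp parameters
X = np.linspace(0.0, 1.0, 10)
rng = np.random.default_rng(0)
y = np.sin(3.0 * X) + 0.1 * rng.standard_normal(10)
a, b, c, d = 1.0, np.array([0.5]), np.array([1.0]), np.array([0.0])
print(warped_gp_log_likelihood(X, y, a, b, c, d))
```

With a regular GP the Jacobian term is absent and g is the identity; learning g lets the model absorb, say, a log-like preprocessing transformation into the likelihood itself.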
| Author(s): | Snelson, E. and Rasmussen, C. E. and Ghahramani, Z. |
| Book Title: | Advances in Neural Information Processing Systems 16 |
| Pages: | 337-344 |
| Year: | 2004 |
| Month: | June |
| Editors: | Thrun, S., Saul, L. K. and Schölkopf, B. |
| Publisher: | MIT Press |
| BibTeX Type: | Conference Paper (inproceedings) |
| Address: | Cambridge, MA, USA |
| Event Name: | Seventeenth Annual Conference on Neural Information Processing Systems (NIPS 2003) |
| Event Place: | Vancouver, BC, Canada |
| ISBN: | 0-262-20152-6 |
| Organization: | Max-Planck-Gesellschaft |
| School: | Biologische Kybernetik |
BibTeX
@inproceedings{2298,
title = {Warped Gaussian Processes},
booktitle = {Advances in Neural Information Processing Systems 16},
abstract = {We generalise the Gaussian process (GP) framework for regression by learning a nonlinear transformation of the GP outputs. This allows for non-Gaussian processes and non-Gaussian noise. The learning algorithm chooses a nonlinear transformation such that transformed data is
well-modelled by a GP. This can be seen as including a preprocessing transformation as an integral part of the probabilistic modelling problem, rather than as an ad-hoc step. We demonstrate on several real regression problems that learning the transformation can lead to significantly better performance than using a regular GP, or a GP with
a fixed transformation.},
pages = {337--344},
editor = {Thrun, S. and Saul, L. K. and Sch{\"o}lkopf, B.},
publisher = {MIT Press},
organization = {Max-Planck-Gesellschaft},
school = {Biologische Kybernetik},
address = {Cambridge, MA, USA},
month = jun,
year = {2004},
author = {Snelson, E. and Rasmussen, C. E. and Ghahramani, Z.},
}