Probabilistic Numerics: Members and Publications

Members

  • Affiliated Researcher, Probabilistic Numerics / Empirical Inference
  • Doctoral Researcher, Probabilistic Numerics

Publications

Probabilistic Numerics Conference Paper Dissecting Adam: The Sign, Magnitude and Variance of Stochastic Gradients Balles, L., Hennig, P. Proceedings of the 35th International Conference on Machine Learning (ICML), 80:404-413, Proceedings of Machine Learning Research, (Editors: Jennifer Dy and Andreas Krause), PMLR, ICML, July 2018 (Published)
The ADAM optimizer is exceedingly popular in the deep learning community. Often it works very well, sometimes it doesn't. Why? We interpret ADAM as a combination of two aspects: for each weight, the update direction is determined by the sign of stochastic gradients, whereas the update magnitude is determined by an estimate of their relative variance. We disentangle these two aspects and analyze them in isolation, gaining insight into the mechanisms underlying ADAM. This analysis also extends recent results on adverse effects of ADAM on generalization, isolating the sign aspect as the problematic one. Transferring the variance adaptation to SGD gives rise to a novel method, completing the practitioner's toolbox for problems where ADAM fails.
URL BibTeX
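For intuition, the decomposition described in the abstract can be sketched in a few lines of NumPy: keep running averages of the gradient and its square, take only the sign of the averaged gradient as the direction, and shrink the step where the relative gradient variance is large. The scaling factor and the function name below are illustrative choices, not the exact variance-adapted SGD method proposed in the paper.

```python
import numpy as np

def sign_variance_step(w, grad, state, lr=0.01, beta=0.9, eps=1e-8):
    """Illustrative update separating Adam's two ingredients: a sign-based
    direction and a variance-based magnitude (not the paper's exact method)."""
    m, v = state
    m = beta * m + (1 - beta) * grad        # running mean of gradients
    v = beta * v + (1 - beta) * grad ** 2   # running mean of squared gradients

    var = np.maximum(v - m ** 2, 0.0)       # gradient variance estimate
    rel_var = var / (m ** 2 + eps)          # relative variance per weight

    direction = np.sign(m)                  # sign aspect: direction only
    magnitude = 1.0 / (1.0 + rel_var)       # shrink steps for noisy weights

    return w - lr * magnitude * direction, (m, v)
```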

Probabilistic Numerics Article Probabilistic Line Searches for Stochastic Optimization Mahsereci, M., Hennig, P. Journal of Machine Learning Research, 18(119):1-59, November 2017 (Published) URL BibTeX

Probabilistic Numerics Perceiving Systems Conference Paper Coupling Adaptive Batch Sizes with Learning Rates Balles, L., Romero, J., Hennig, P. In Proceedings of the 33rd Conference on Uncertainty in Artificial Intelligence (UAI), ID 141, (Editors: Gal Elidan, Kristian Kersting, and Alexander T. Ihler), August 2017 (Published)
Mini-batch stochastic gradient descent and variants thereof have become standard for large-scale empirical risk minimization like the training of neural networks. These methods are usually used with a constant batch size chosen by simple empirical inspection. The batch size significantly influences the behavior of the stochastic optimization algorithm, though, since it determines the variance of the gradient estimates. This variance also changes over the optimization process; when using a constant batch size, stability and convergence is thus often enforced by means of a (manually tuned) decreasing learning rate schedule. We propose a practical method for dynamic batch size adaptation. It estimates the variance of the stochastic gradients and adapts the batch size to decrease the variance proportionally to the value of the objective function, removing the need for the aforementioned learning rate decrease. In contrast to recent related work, our algorithm couples the batch size to the learning rate, directly reflecting the known relationship between the two. On three image classification benchmarks, our batch size adaptation yields faster optimization convergence, while simultaneously simplifying learning rate tuning. A TensorFlow implementation is available.
Code URL BibTeX
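The coupling described in the abstract can be illustrated with a short sketch: estimate the gradient variance from individual per-example gradients, then choose the next batch size so that this variance is traded off against the current objective value and scaled by the learning rate. The constants, clipping bounds, and function name are assumptions for illustration, not the exact adaptation rule derived in the paper.

```python
import numpy as np

def adapted_batch_size(per_example_grads, loss_value, lr, b_min=16, b_max=4096):
    """Sketch of a variance-based batch-size choice: couple the batch size to
    the learning rate and the current objective value (illustrative constants).
    per_example_grads: array of shape (batch, n_params) of individual gradients."""
    # Sum of per-coordinate gradient variances, estimated from the current batch.
    var_sum = per_example_grads.var(axis=0, ddof=1).sum()

    # Larger steps or noisier gradients call for larger batches; as the
    # objective value shrinks, less gradient variance is tolerated.
    b = lr * var_sum / max(float(loss_value), 1e-12)
    return int(np.clip(round(b), b_min, b_max))
```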

Probabilistic Numerics Perceiving Systems Article Early Stopping Without a Validation Set Mahsereci, M., Balles, L., Lassner, C., Hennig, P. arXiv preprint arXiv:1703.09580, 2017
Early stopping is a widely used technique to prevent poor generalization performance when training an over-expressive model by means of gradient-based optimization. To find a good point to halt the optimizer, a common practice is to split the dataset into a training and a smaller validation set to obtain an ongoing estimate of the generalization performance. In this paper we propose a novel early stopping criterion which is based on fast-to-compute, local statistics of the computed gradients and entirely removes the need for a held-out validation set. Our experiments show that this is a viable approach in the setting of least-squares and logistic regression as well as neural networks.
URL BibTeX
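As a rough illustration of the idea, the sketch below computes a mini-batch gradient together with an estimate of its variance and stops once the squared gradient no longer exceeds what sampling noise alone would produce. The statistic and the threshold are simplified stand-ins, not the criterion derived in the paper.

```python
import numpy as np

def noise_based_stop(per_example_grads, eps=1e-12):
    """Illustrative stopping check from local gradient statistics only: halt
    once the mini-batch gradient is indistinguishable from sampling noise."""
    b = per_example_grads.shape[0]
    g = per_example_grads.mean(axis=0)                 # mini-batch gradient
    var = per_example_grads.var(axis=0, ddof=1) / b    # variance of that mean

    # Under a zero true gradient, E[g_d^2] matches var_d, so this ratio hovers
    # around 1 on pure noise and stays above 1 while there is still signal.
    snr = np.mean(g ** 2 / (var + eps))
    return snr <= 1.0
```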

Empirical Inference Probabilistic Numerics Conference Paper Probabilistic Line Searches for Stochastic Optimization Mahsereci, M., Hennig, P. In Advances in Neural Information Processing Systems 28, 181-189, (Editors: C. Cortes, N.D. Lawrence, D.D. Lee, M. Sugiyama and R. Garnett), Curran Associates, Inc., 29th Annual Conference on Neural Information Processing Systems (NIPS 2015), 2015 (Published)
In deterministic optimization, line searches are a standard tool ensuring stability and efficiency. Where only stochastic gradients are available, no direct equivalent has so far been formulated, because uncertain gradients do not allow for a strict sequence of decisions collapsing the search space. We construct a probabilistic line search by combining the structure of existing deterministic methods with notions from Bayesian optimization. Our method retains a Gaussian process surrogate of the univariate optimization objective, and uses a probabilistic belief over the Wolfe conditions to monitor the descent. The algorithm has very low computational cost, and no user-controlled parameters. Experiments show that it effectively removes the need to define a learning rate for stochastic gradient descent. [You can find the MATLAB research code under 'attachments' below. The zip file contains a minimal working example. The docstring in probLineSearch.m contains additional information. A more polished implementation in C++ will be published here at a later point. For comments and questions about the code, please write to mmahsereci@tue.mpg.de.]
Matlab research code URL BibTeX
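A minimal sketch of the acceptance step under Gaussian beliefs: given marginal means and variances for the objective value and its derivative at a candidate step size (e.g. from a GP surrogate), each Wolfe condition becomes a Gaussian probability. Treating the two conditions as independent and the particular threshold value are simplifications for illustration; the paper's criterion works with their joint Gaussian.

```python
from scipy.stats import norm

def prob_wolfe_accept(f0, df0, mu_f, sigma_f, mu_df, sigma_df,
                      t, c1=1e-4, c2=0.9, threshold=0.3):
    """Probability that the two Wolfe conditions hold under Gaussian beliefs
    f(t) ~ N(mu_f, sigma_f^2) and f'(t) ~ N(mu_df, sigma_df^2), treated as
    independent for simplicity (the paper uses their joint Gaussian)."""
    # Sufficient decrease: f(t) <= f(0) + c1 * t * f'(0).
    p_decrease = norm.cdf(f0 + c1 * t * df0, loc=mu_f, scale=sigma_f)
    # Curvature: f'(t) >= c2 * f'(0), with f'(0) < 0 along a descent direction.
    p_curvature = 1.0 - norm.cdf(c2 * df0, loc=mu_df, scale=sigma_df)

    p_wolfe = p_decrease * p_curvature
    return p_wolfe, p_wolfe > threshold
```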

Perceiving Systems Empirical Inference Probabilistic Numerics Article Quasi-Newton Methods: A New Direction Hennig, P., Kiefel, M. Journal of Machine Learning Research, 14(1):843-865, March 2013
Four decades after their invention, quasi-Newton methods are still state of the art in unconstrained numerical optimization. Although not usually interpreted thus, these are learning algorithms that fit a local quadratic approximation to the objective function. We show that many, including the most popular, quasi-Newton methods can be interpreted as approximations of Bayesian linear regression under varying prior assumptions. This new notion elucidates some shortcomings of classical algorithms, and lights the way to a novel nonparametric quasi-Newton method, which is able to make more efficient use of available information at computational cost similar to its predecessors.
website+code pdf URL BibTeX
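For reference, the classical update that the paper reinterprets, here BFGS in a few lines of NumPy; the quadratic-model reading in the comment follows the abstract, while the function name is only illustrative.

```python
import numpy as np

def bfgs_update(B, s, y):
    """Classical BFGS update of a Hessian estimate B from one observation pair
    s = x_new - x_old, y = grad_new - grad_old, i.e. one step of fitting a
    local quadratic model to the observed change in gradients."""
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)
```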

Empirical Inference Probabilistic Numerics Conference Paper Fast Probabilistic Optimization from Noisy Gradients Hennig, P. In Proceedings of The 30th International Conference on Machine Learning, JMLR W&CP 28(1), 62–70, (Editors: S Dasgupta and D McAllester), ICML, 2013 PDF BibTeX

Perceiving Systems Empirical Inference Probabilistic Numerics Conference Paper Quasi-Newton Methods: A New Direction Hennig, P., Kiefel, M. In Proceedings of the 29th International Conference on Machine Learning, 25-32, ICML ’12, (Editors: John Langford and Joelle Pineau), Omnipress, New York, NY, USA, ICML, July 2012
Four decades after their invention, quasi-Newton methods are still state of the art in unconstrained numerical optimization. Although not usually interpreted thus, these are learning algorithms that fit a local quadratic approximation to the objective function. We show that many, including the most popular, quasi-Newton methods can be interpreted as approximations of Bayesian linear regression under varying prior assumptions. This new notion elucidates some shortcomings of classical algorithms, and lights the way to a novel nonparametric quasi-Newton method, which is able to make more efficient use of available information at computational cost similar to its predecessors.
website+code pdf URL BibTeX