SGD-QN: Careful Quasi-Newton Stochastic Gradient Descent

Abstract: The SGD-QN algorithm is a stochastic gradient descent algorithm that makes careful use of second-order information and splits the parameter update into independently scheduled components. Thanks to this design, SGD-QN iterates nearly as fast as a first-order stochastic gradient descent but requires fewer iterations to achieve the same accuracy. This algorithm won the “Wild Track” of the first PASCAL Large Scale Learning Challenge.
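To make the design concrete, here is a minimal sketch of an SGD-QN-style update for a regularized linear SVM (hinge loss plus an L2 term with coefficient lam). It illustrates the two ideas named in the abstract, a diagonal second-order scaling B and a skip schedule that re-estimates B only occasionally; it is not the paper's released implementation, and the defaults for t0, skip, and the clipping threshold are placeholders.

  import numpy as np

  def sgdqn(X, y, lam=1e-4, t0=1e4, skip=16, epochs=5):
      # Train a linear SVM on labels y in {-1, +1} with an SGD-QN-style update.
      n, d = X.shape
      w = np.zeros(d)
      B = np.full(d, 1.0 / lam)   # diagonal scaling: the "second-order information"
      count, r, t = skip, 2, 0    # count: updates left before refreshing B
      for _ in range(epochs):
          for i in np.random.permutation(n):
              xi, yi = X[i], y[i]
              def grad(wv):
                  # (sub)gradient of lam/2 |w|^2 + max(0, 1 - y w.x) at wv
                  return lam * wv - (yi * xi if yi * (wv @ xi) < 1 else 0.0)
              g = grad(w)
              w_new = w - B * g / (t + t0)   # scaled first-order step
              count -= 1
              if count <= 0:
                  # Occasional secant-style refresh of B on the same example:
                  # the ratio dw/dg estimates the inverse diagonal curvature.
                  dw, dg = w_new - w, grad(w_new) - g
                  ratio = np.divide(dw, dg, out=B.copy(), where=np.abs(dg) > 1e-12)
                  np.clip(ratio, 0.0, 1.0 / lam, out=ratio)  # 1/lam bounds the inverse curvature
                  B += (2.0 / r) * (ratio - B)  # running average of the estimates
                  r += 1
                  count = skip
              w = w_new
              t += 1
      return w

Because B is refreshed only once every skip examples, almost every iteration costs the same as plain first-order SGD, which is the independent scheduling the abstract refers to.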

Note: The appendix contains a derivation of upper and lower bounds on the asymptotic convergence speed of stochastic gradient algorithms. This result is exact in the case of second-order stochastic gradient descent.
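For orientation, the exact rate in the second-order case has the familiar form below; this is a sketch in standard notation rather than a quotation of the appendix, writing C for the expected cost, w_* for its minimum, H for the Hessian of C at w_*, and G for the covariance of the stochastic gradients there:

  \[ \mathbb{E}\big[\, C(w_t) - C(w_*) \,\big] \;=\; \frac{\operatorname{tr}\!\left( G H^{-1} \right)}{2t} \;+\; o\!\left( \frac{1}{t} \right) \]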

Antoine Bordes, Léon Bottou and Patrick Gallinari: SGD-QN: Careful Quasi-Newton Stochastic Gradient Descent, Journal of Machine Learning Research, 10:1737–1754, July 2009.

JMLR link · jmlr-2009.djvu · jmlr-2009.pdf · jmlr-2009.ps.gz

@article{bordes-bottou-gallinari-2009,
  author = {Bordes, Antoine and Bottou, L\'{e}on and Gallinari, Patrick},
  title = {SGD-QN: Careful Quasi-Newton Stochastic Gradient Descent},
  journal = {Journal of Machine Learning Research},
  year = {2009},
  volume = {10},
  pages = {1737--1754},
  month = {July},
  url = {http://leon.bottou.org/papers/bordes-bottou-gallinari-2009},
}