Diagonal Rescaling For Neural Networks

Abstract: We define a second-order stochastic gradient training algorithm for neural networks whose block-diagonal structure effectively amounts to normalizing the unit activations. Investigating why this algorithm lacks robustness then reveals two interesting insights. The first suggests a new way to scale the step sizes, clarifying popular algorithms such as RMSProp as well as old neural network tricks such as fan-in step-size scaling. The second stresses the practical importance of dealing with fast changes in the curvature of the cost.
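The two step-size schemes named in the abstract both amount to a diagonal rescaling of the stochastic gradient. The NumPy sketch below illustrates them under stated assumptions; it is not the paper's block-diagonal algorithm, and the function names and parameter values are illustrative only.

  import numpy as np

  def fanin_scaled_step(grad, base_lr, fan_in):
      # Classical trick: divide each layer's step size by its fan-in,
      # i.e. the number of inputs feeding each unit.
      return -(base_lr / fan_in) * grad

  def rmsprop_step(grad, state, base_lr=1e-3, decay=0.9, eps=1e-8):
      # RMSProp: keep a running average of squared gradients and divide
      # each coordinate by its root mean square (a diagonal rescaling).
      state = decay * state + (1.0 - decay) * grad**2
      step = -base_lr * grad / (np.sqrt(state) + eps)
      return step, state

  # Toy usage on a layer with 4 inputs and 3 units.
  rng = np.random.default_rng(0)
  w = rng.normal(size=(4, 3))       # layer weights
  g = rng.normal(size=w.shape)      # a stochastic gradient estimate
  state = np.zeros_like(w)          # RMSProp second-moment accumulator

  w += fanin_scaled_step(g, base_lr=0.1, fan_in=w.shape[0])
  step, state = rmsprop_step(g, state)
  w += step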

Jean Lafond, Nicolas Vasilache and Léon Bottou: Diagonal Rescaling For Neural Networks, arXiv:1705.09319, 2017.

arXiv link   tr-diag-2017.djvu tr-diag-2017.pdf tr-diag-2017.ps.gz

@techreport{lafond-vasilache-bottou-2017,
  author = {Lafond, Jean and Vasilache, Nicolas and Bottou, L\'{e}on},
  title = {Diagonal Rescaling For Neural Networks},
  institution = {arXiv:1705.09319},
  year = {2017},
  url = {http://leon.bottou.org/papers/lafond-vasilache-bottou-2017},
}