First-Order Adversarial Vulnerability of Neural Networks and Input Dimension

Abstract: Over the past few years, neural networks have been shown to be vulnerable to adversarial images: targeted but imperceptible image perturbations lead to drastically different predictions. We show that adversarial vulnerability increases with the gradients of the training objective when viewed as a function of the inputs. Surprisingly, vulnerability does not depend on network topology: for many standard network architectures, we prove that at initialization, the $\ell_1$-norm of these gradients grows as the square root of the input dimension, leaving the networks increasingly vulnerable with growing image size. We empirically show that this dimension dependence persists after either usual or robust training, but gets attenuated with higher regularization.
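
The square-root scaling claimed in the abstract is easy to probe numerically. The sketch below is not the paper's code: it assumes PyTorch, uses an arbitrary fully connected architecture with random Gaussian inputs and random labels, and estimates the average $\ell_1$-norm of the loss gradient with respect to the input at initialization for several input dimensions d. Under the paper's result, this quantity should grow roughly like the square root of d.

import torch
import torch.nn as nn

def input_grad_l1(d, width=200, n_samples=64):
    # Fresh fully connected network at (default PyTorch) initialization.
    net = nn.Sequential(nn.Linear(d, width), nn.ReLU(),
                        nn.Linear(width, width), nn.ReLU(),
                        nn.Linear(width, 10))
    x = torch.randn(n_samples, d, requires_grad=True)  # random stand-ins for images
    y = torch.randint(0, 10, (n_samples,))             # random labels
    # 'sum' reduction so each example's input gradient equals the gradient
    # of that example's own loss.
    loss = nn.CrossEntropyLoss(reduction="sum")(net(x), y)
    loss.backward()
    # l1-norm of d(loss)/d(x) per example, averaged over the batch.
    return x.grad.abs().sum(dim=1).mean().item()

for d in [64, 256, 1024, 4096]:
    print(d, input_grad_l1(d))  # expect growth roughly like sqrt(d)

Quadrupling d should roughly double the reported norm. The paper's experiments use real image datasets and also cover convolutional architectures and robust training, which this toy script does not attempt to reproduce.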

Carl-Johann Simon-Gabriel, Yann Ollivier, Léon Bottou, Bernhard Schölkopf and David Lopez-Paz: First-Order Adversarial Vulnerability of Neural Networks and Input Dimension, Proceedings of the 36th International Conference on Machine Learning, 97:5809–5817, Edited by Kamalika Chaudhuri and Ruslan Salakhutdinov, Proceedings of Machine Learning Research, Long Beach, California, USA, 2019.

icml-2019a.djvu icml-2019a.pdf icml-2019a.ps.gz

@inproceedings{simongabriel-2019,
  title = {First-Order Adversarial Vulnerability of Neural Networks and Input Dimension},
  author = {Simon-Gabriel, Carl-Johann and Ollivier, Yann and Bottou, L{\'e}on and Sch{\"o}lkopf, Bernhard and Lopez-Paz, David},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages = {5809--5817},
  year = {2019},
  editor = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume = {97},
  series = {Proceedings of Machine Learning Research},
  address = {Long Beach, California, USA},
  url = {http://leon.bottou.org/papers/simongabriel-2019},
}