Cold Case: The Lost MNIST Digits

Abstract: Although the popular MNIST dataset [LeCun et al., 1994] is derived from the NIST database [Grother and Hanaoka, 1995], the precise processing steps for this derivation have been lost to time. We propose a reconstruction that is accurate enough to serve as a replacement for the MNIST dataset, with insignificant changes in accuracy. We trace each MNIST digit to its NIST source and its rich metadata such as writer identifier, partition identifier, etc. We also reconstruct the complete MNIST test set with 60,000 samples instead of the usual 10,000. Since the balance 50,000 were never distributed, they can be used to investigate the impact of twenty-five years of MNIST experiments on the reported testing performances. Our limited results unambiguously confirm the trends observed by Recht et al. [2018, 2019]: although the misclassification rates are slightly off, classifier ordering and model selection remain broadly reliable. We attribute this phenomenon to the pairing benefits of comparing classifiers on the same digits.
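
The reconstruction is distributed as the QMNIST dataset. As a minimal loading sketch in Python, assuming the QMNIST class that ships with recent torchvision releases (the what= selectors and the compat= metadata switch follow its documentation; verify the exact argument names against your installed version):

  from torchvision.datasets import QMNIST

  # Full reconstructed test set: 60,000 digits instead of the usual 10,000.
  test60k = QMNIST('_qmnist', what='test', download=True)

  # The 50,000 reconstructed test digits that were never distributed with
  # MNIST. With compat=False, each target is the full metadata vector
  # (class label, writer identifier, partition information, ...) rather
  # than a bare class label; see the QMNIST documentation for the exact
  # column layout.
  test50k = QMNIST('_qmnist', what='test50k', compat=False, download=True)

  img, meta = test50k[0]
  print(meta)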

Chhavi Yadav and Léon Bottou: Cold Case: The Lost MNIST Digits, Advances in Neural Information Processing Systems 32, 13443–13452, Edited by H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox and R. Garnett, Curran Associates, Inc., 2019.

qmnist-2019.djvu qmnist-2019.pdf qmnist-2019.ps.gz   https://arxiv.org/abs/1905.10498

@incollection{yadav-bottou-2019,
  author = {Yadav, Chhavi and Bottou, L{\'{e}}on},
  title = {Cold Case: The Lost {MNIST} Digits},
  booktitle = {Advances in Neural Information Processing Systems 32},
  editor = {Wallach, H. and Larochelle, H. and Beygelzimer, A. and Alch\'{e}-Buc, F. d\textquotesingle and Fox, E. and Garnett, R.},
  pages = {13443--13452},
  year = {2019},
  publisher = {Curran Associates, Inc.},
  url = {http://leon.bottou.org/papers/yadav-bottou-2019},
}

The final scatterplot

All these results essentially show that the “testing set rot” problem exists but is far less severe than feared. Although the repeated usage of the same testing set impacts absolute performance numbers, it also delivers pairing advantages that help model selection in the long run. In practice, this suggests that a shifting data distribution is far more dangerous than overusing an adequately distributed testing set.
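
The pairing effect is easy to make concrete: two classifiers that fail on the same hard digits have positively correlated per-digit mistake indicators, so the variance of their error-rate difference is smaller than an unpaired comparison of the two error rates would suggest. Below is a self-contained Python illustration with synthetic mistake indicators; the error rates are invented for the example and are not numbers from the paper:

  import numpy as np

  rng = np.random.default_rng(0)
  n = 50_000  # size of the never-distributed part of the reconstructed test set

  # Synthetic mistake indicators: both classifiers fail on a shared pool
  # of "hard" digits, plus independent model-specific mistakes.
  hard = rng.random(n) < 0.010
  a = (hard | (rng.random(n) < 0.005)).astype(float)
  b = (hard | (rng.random(n) < 0.004)).astype(float)

  # Unpaired view: treat the two error rates as independent estimates.
  se_unpaired = np.sqrt(a.var(ddof=1) / n + b.var(ddof=1) / n)

  # Paired view: compare the models digit by digit; the shared mistakes
  # cancel in the difference.
  diff = a - b
  se_paired = diff.std(ddof=1) / np.sqrt(n)

  print(f"error A {a.mean():.4f}  error B {b.mean():.4f}")
  print(f"difference {diff.mean():+.4f}  paired s.e. {se_paired:.5f}  unpaired s.e. {se_unpaired:.5f}")

Running this, the paired standard error comes out well below the unpaired one, which is the advantage described above: rankings of classifiers on a fixed test set stay stable even when the absolute error estimates drift.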
