Perceptrons is a book that grows more interesting when one reads it several times over a lifetime. My first reading took place in the late eighties, during the second age of neural networks. This was the book that had supposedly extinguished perceptron research with brilliant mathematics. Neural network research died again, only to reappear two decades later under the deep learning brand. I decided to read Perceptrons once more, not to understand the mathematics, but to understand what Minsky and Papert had understood about these extinctions. This gave me material for a talk at the 2015 AAAI Spring Symposium on Knowledge Representation and Reasoning. This is probably what led Marie Lufkin Lee, from MIT Press, to ask me to write a foreword for the planned reissue of Perceptrons following the passing of Marvin Minsky in 2016.
Writing the foreword was considerably harder than I expected. Progress came when I realized that whether Perceptrons killed research on perceptrons is the wrong question. It is more interesting to find out what Minsky and Papert understood that was missing from perceptron research in the sixties, was missing from neural network research in the eighties, and is probably missing from the more recent deep learning research. Two very smart and very experienced intellectuals had something to say that they believed with great intensity. In order to understand whether their insights still apply to our work, we must first spend the time to listen carefully.
MIT Press link · perceptrons-2017.djvu · perceptrons-2017.pdf · perceptrons-2017.ps.gz
@misc{bottou-foreword-2017,
  author = {Bottou, L{\'e}on},
  title = {Foreword},
  howpublished = {\emph{Perceptrons. Reissue of the 1988 expanded edition.} By Marvin L. Minsky and Seymour A. Papert. MIT Press. Cambridge, MA.},
  month = {September},
  year = {2017},
  url = {http://leon.bottou.org/papers/bottou-foreword-2017},
}