Perceptrons Revisited

This talk was given during the AAAI 2015 Spring Symposium on Knowledge Representation and Reasoning, whose theme was Integrating Symbolic and Neural Approaches. The idea of the talk is to re-read the classic 1969 book “Perceptrons” by Minsky and Papert in order to (1) isolate what they had to say about the extinction cycles that have affected cybernetics and machine learning in the past, and (2) extrapolate what these insights mean for the new wave of deep learning.

Summary

  1. What's in the book? (the theorems, their meaning, the style, etc.)
  2. Computer science versus cybernetics (the perceptron is not an algorithm, the perceptron is a machine; the sketch after this list shows the purely algorithmic reading for contrast)
  3. Building on the work of others (anticipating foundation models)
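For context, here is a minimal sketch of the classic perceptron learning rule, the purely algorithmic reading that point 2 contrasts with the machine view. The function and variable names are illustrative, not taken from the book.

```python
import numpy as np

def train_perceptron(X, y, epochs=10):
    """Classic perceptron learning rule on data X (n_samples x n_features)
    with labels y in {-1, +1}. Returns weights and bias."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            # Update only on misclassified (or boundary) points
            if yi * (np.dot(w, xi) + b) <= 0:
                w += yi * xi
                b += yi
    return w, b

# Example: a linearly separable toy problem (logical OR)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, 1, 1, 1])
w, b = train_perceptron(X, y)
print(w, b)  # a separating hyperplane for this data
```

On linearly separable data this update converges in finitely many steps; the book's theorems concern what such a device can and cannot represent in the first place.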