Monday, April 20, 2020

Lecture 7H: Associative Learning, Spiking Neural Networks, and Alternatives to Backpropagation (2020-04-20)

In this lecture, we start with a review of this unit on artificial neural networks, beginning with the original inspiration from Otto Schmitt's work on the squid giant axon (which inspired switches with hysteresis, like the Schmitt trigger). We revisit the basic model of an artificial neuron as a generalized linear model (GLM) for regression and classification and discuss how the receptive fields and input layers of radial basis function neural networks (RBFNNs) transform data so that a neuron that can only draw linear decision boundaries is able to perform nonlinear classification as well.

That lets us pivot into revisiting deep neural networks (such as the multi-layer perceptron, MLP) and architectures like the convolutional neural network (CNN) that also make use of receptive fields to simplify the downstream learning. We then remind ourselves of the recurrent neural network (RNN) and more exotic variants like reservoir machines (echo state networks, ESNs) that use randomly connected recurrent neural networks as filters, allowing downstream neurons to learn to classify patterns in time (not just space).

After being reminded of backpropagation for the training of all of these networks, we discuss how associative learning (specifically Hebbian learning and spike-timing-dependent plasticity, STDP) can be viewed as an alternative to backpropagation for training. This requires us to discuss how psychologists view associative learning as able to generate both classical and operant conditioning, which can then be used to implement supervised, unsupervised, and reinforcement learning. That sets up a new result from Google, AutoML-Zero, which uses genetic programming (GP) to evolve novel ML algorithms that themselves instantiate new neural networks. AutoML-Zero has "discovered" backpropagation on its own, but it has also found other ways to train weights. We end with a discussion of the hardware implementation by Boyn et al. (2017, Nature Communications) of a crossbar array of memristors that can do unsupervised pattern recognition through STDP (no backpropagation required). Minimal code sketches of several of these ideas (the GLM neuron, RBF features, a reservoir, Hebbian learning, and an STDP window) follow below.
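First, a minimal sketch of the "neuron as a GLM" view: a single artificial neuron with a logistic activation is exactly a logistic regression model (an affine combination of inputs passed through a sigmoid link). The names and weight values here are illustrative, not from the lecture.

```python
import numpy as np

def neuron(x, w, b):
    """One artificial neuron: affine combination + logistic link (a GLM)."""
    z = np.dot(w, x) + b             # linear predictor, as in a GLM
    return 1.0 / (1.0 + np.exp(-z))  # sigmoid -> class probability

# Example: hand-picked weights make this neuron compute logical AND,
# a linearly separable function.
w = np.array([10.0, 10.0])
b = -15.0
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, round(neuron(np.array(x, dtype=float), w, b), 3))
```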
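Next, the RBFNN idea: Gaussian receptive fields remap the inputs so that XOR, which no single linear neuron can separate in the raw input space, becomes linearly separable in the feature space. The centers, width, and threshold below are hand-picked for illustration, assuming the classic two-center construction.

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])  # XOR labels

centers = np.array([[0.0, 0.0], [1.0, 1.0]])  # one receptive field per center

def rbf_features(X, centers, gamma=2.0):
    """phi_j(x) = exp(-gamma * ||x - c_j||^2) for each center c_j."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

Phi = rbf_features(X, centers)
# Class-0 points sit on a center (one large response); class-1 points sit
# between centers (two small responses), so thresholding the summed
# responses -- a purely linear rule in feature space -- separates them.
scores = Phi.sum(axis=1)
print(np.round(Phi, 3))
print("predicted XOR:", (scores < 0.9).astype(int), "targets:", y)
```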
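The reservoir idea can be sketched the same way: a fixed, random recurrent network acts as a temporal filter, and only a linear readout is trained (here by ridge regression), so no backpropagation through time is needed. The reservoir size, scaling, and the delayed-recall task are illustrative assumptions, not details from the lecture.

```python
import numpy as np

rng = np.random.default_rng(1)
n_res, washout = 200, 100

# Input: white noise; target: the input delayed by 5 steps (a memory task).
u = rng.uniform(-0.5, 0.5, size=2000)
d = np.roll(u, 5)
d[:5] = 0.0

W_in = rng.uniform(-0.5, 0.5, size=n_res)        # fixed random input weights
W = rng.normal(size=(n_res, n_res))              # fixed random recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1 (echo state)

x = np.zeros(n_res)
states = np.zeros((len(u), n_res))
for t, ut in enumerate(u):
    x = np.tanh(W @ x + W_in * ut)  # reservoir update; never trained
    states[t] = x

# Train only the linear readout on post-washout states (ridge regression).
S, target = states[washout:], d[washout:]
W_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(n_res), S.T @ target)
print("readout MSE:", np.mean((S @ W_out - target) ** 2))
```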
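For associative learning as an alternative to backpropagation, here is a minimal sketch of Hebbian ("fire together, wire together") weight updates. Plain Hebb (dw = eta * x * y) grows without bound, so this uses Oja's normalized variant, under which the weight vector of a linear neuron converges to the first principal component of its inputs; the data and learning rate are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
# Zero-mean 2-D data with most of its variance along one direction.
X = rng.normal(size=(5000, 2)) @ np.array([[2.0, 1.8], [0.2, 0.3]])
X -= X.mean(axis=0)

w = rng.normal(size=2)
eta = 1e-3
for x in X:
    y = w @ x                   # postsynaptic activity of a linear neuron
    w += eta * y * (x - y * w)  # Oja's rule: Hebbian term + weight decay

# The learned direction approximates the data's first principal component,
# with no labels and no backpropagated error signal.
print("learned direction:", w / np.linalg.norm(w))
```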
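Finally, a sketch of a standard pair-based STDP window of the kind the memristor crossbar of Boyn et al. (2017) exploits: a synapse is strengthened when the presynaptic spike precedes the postsynaptic one (LTP) and weakened otherwise (LTD). The amplitudes and time constant are illustrative, not taken from the paper.

```python
import numpy as np

A_plus, A_minus = 0.05, 0.055  # learning-rate amplitudes (assumed values)
tau = 20.0                     # decay time constant in ms (assumed value)

def stdp_dw(dt):
    """Weight change for spike-timing difference dt = t_post - t_pre (ms)."""
    if dt > 0:    # pre fired before post: potentiate (LTP)
        return A_plus * np.exp(-dt / tau)
    elif dt < 0:  # post fired before pre: depress (LTD)
        return -A_minus * np.exp(dt / tau)
    return 0.0

for dt in (-40, -10, -1, 1, 10, 40):
    print(f"dt = {dt:+4d} ms -> dw = {stdp_dw(dt):+.4f}")
```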


Whiteboard notes for this lecture can be found at:
