Wednesday, April 1, 2020

Lecture 7C: Deep Neural Networks (backpropagation, CNNs, & insect brains) (2020-04-01)

In this lecture, we continue our introduction of supervised learning for feed-forward neural networks. We start with a re-introduction of convolutional neural networks (CNNs) and their loose inspiration from the vertebrate brain's visual cortex, casting them as a special case of multi-layer perceptrons (MLPs). We then continue our explanation of backpropagation as a method of using (stochastic) gradient descent (SGD) to train neural networks in supervised learning. From there, we pivot to alternative deep neural network topologies potentially inspired by the architecture of other natural systems, such as the insect brain. We introduce the major neuropils of the insect brain (the optic lobe, the antennal lobe and its glomeruli, the lateral horn, the mushroom bodies and their Kenyon cells, and the central complex) and the various interneurons that connect locally within them and project among them. This allows us to portray the apparently random projections from the antennal lobe's glomeruli to the Kenyon cells as a kind of hashing function. The lecture ends with a discussion of the utility of thinking of conventional ANNs as nature-inspired.
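
As a concrete illustration of the training procedure discussed in the lecture, here is a minimal NumPy sketch of backpropagation with stochastic gradient descent on a tiny one-hidden-layer MLP. The network size, learning rate, loss, and XOR task are illustrative assumptions, not material from the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: learn XOR with a 2-4-1 MLP (sizes chosen for illustration).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Parameters of the two layers.
W1 = rng.normal(scale=0.5, size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0

for step in range(5000):
    i = rng.integers(len(X))           # stochastic: one sample per update
    x, t = X[i:i+1], y[i:i+1]

    # Forward pass.
    h = sigmoid(x @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: chain rule, layer by layer
    # (squared-error loss; sigmoid derivative is s * (1 - s)).
    dp  = (p - t) * p * (1 - p)
    dW2 = h.T @ dp
    db2 = dp.sum(axis=0)
    dh  = dp @ W2.T * h * (1 - h)
    dW1 = x.T @ dh
    db1 = dh.sum(axis=0)

    # SGD update: step against the gradient.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).round(2))
```

The same two-pass structure (forward to compute activations, backward to propagate error derivatives) scales unchanged to deeper networks and to CNNs, whose convolutional layers are MLP layers with shared, spatially local weights.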
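The glomeruli-to-Kenyon-cell "hashing" analogy can likewise be sketched in a few lines: a sparse, apparently random projection expands the glomerular activity vector into the much larger Kenyon-cell layer, and a winner-take-all step keeps only the most active cells, so similar odors receive overlapping sparse tags. The layer sizes below are rough Drosophila figures; the connection density and top-k fraction are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

N_GLOMERULI = 50    # antennal-lobe output channels (roughly 50 in the fly)
N_KENYON = 2000     # Kenyon cells in the mushroom body (roughly 2000)
K_ACTIVE = 100      # winner-take-all keeps ~5% of Kenyon cells active

# Sparse random binary projection: each Kenyon cell samples a
# handful of glomeruli (10% density assumed here).
projection = rng.random((N_GLOMERULI, N_KENYON)) < 0.1

def fly_hash(odor):
    """Map a glomerular activity vector to a sparse binary tag."""
    activity = odor @ projection                   # expand to Kenyon layer
    tag = np.zeros(N_KENYON, dtype=bool)
    tag[np.argsort(activity)[-K_ACTIVE:]] = True   # keep top-k responders
    return tag

# Similar odors get overlapping tags; unrelated odors mostly do not.
odor_a = rng.random(N_GLOMERULI)
odor_b = odor_a + 0.1 * rng.random(N_GLOMERULI)    # noisy variant of a
odor_c = rng.random(N_GLOMERULI)                   # unrelated odor
ta, tb, tc = fly_hash(odor_a), fly_hash(odor_b), fly_hash(odor_c)
print("overlap(a, b):", (ta & tb).sum())   # high
print("overlap(a, c):", (ta & tc).sum())   # near chance (~K^2/N = 5)
```

Because nearby inputs land on overlapping tags, this behaves like a locality-sensitive hash: the random expansion plus sparsification preserves similarity rather than destroying it.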

Whiteboard notes for this lecture can be found at:
