Monday, March 30, 2020

Lecture 7B: Introduction to Neural Networks: RBF, MLP, & Backpropagation (2020-03-30)

In this lecture, we continue our introduction to artificial neural networks (ANNs). We start with a review of how the model of a single neuron can be viewed as a generalized linear model (GLM), and how a single-layer (single-neuron) perceptron (SLP) can serve as a binary classifier for linearly separable data sets. We then discuss how radial basis function (RBF) neural networks (RBFNNs) can be viewed as nonlinear transformations of the data that make it linearly separable in the new feature space. These RBF networks combine multiple neurons, which gives us an opportunity to introduce the multi-layer perceptron (MLP) and closely related variants (like the convolutional neural network, CNN). We discuss the Universal Approximation Theorem (UAT) and what it implies about why we bother with deep neural networks. We then begin to introduce backpropagation as a way of training MLPs.
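To make the perceptron part concrete, here is a minimal sketch (mine, not from the lecture notes) of the classic perceptron learning rule on a linearly separable toy problem; the data, learning rate, and epoch count are all illustrative choices:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy linearly separable data: class is the sign of x1 + x2 - 1.
    X = rng.uniform(-1, 2, size=(100, 2))
    y = np.where(X.sum(axis=1) > 1.0, 1, -1)

    w = np.zeros(2)   # weights
    b = 0.0           # bias
    eta = 0.1         # learning rate

    for _ in range(50):                     # epochs
        for xi, yi in zip(X, y):
            if yi * (w @ xi + b) <= 0:      # misclassified -> update
                w += eta * yi * xi
                b += eta * yi

    preds = np.sign(X @ w + b)
    print("training accuracy:", (preds == y).mean())

On linearly separable data like this, the perceptron convergence theorem guarantees the updates eventually stop.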
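The RBF idea is easiest to see on the standard XOR example: the four XOR points are not linearly separable in the input plane, but two Gaussian RBF features centered at (1,1) and (0,0) map them into a space where a single line separates the classes. A small sketch, with centers and unit widths chosen for illustration:

    import numpy as np

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 1, 1, 0])  # XOR labels

    c1, c2 = np.array([1.0, 1.0]), np.array([0.0, 0.0])

    def phi(x):
        # Two Gaussian RBF features with unit width.
        return np.array([np.exp(-np.sum((x - c1) ** 2)),
                         np.exp(-np.sum((x - c2) ** 2))])

    Phi = np.array([phi(x) for x in X])
    for x, f, label in zip(X, Phi, y):
        print(x, "->", np.round(f, 3), "label", label)

    # In (phi1, phi2) space the class-0 points land near (0.14, 1) and
    # (1, 0.14), while both class-1 points land near (0.37, 0.37), so
    # the line phi1 + phi2 = 1 separates the two classes.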
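Finally, a minimal sketch of backpropagation for a small sigmoid MLP trained on XOR with squared-error loss and batch gradient descent; the layer sizes, learning rate, and epoch count are illustrative rather than anything from the lecture:

    import numpy as np

    rng = np.random.default_rng(1)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)   # hidden layer
    W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)   # output layer
    eta = 1.0

    for epoch in range(5000):
        # Forward pass.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # Backward pass: propagate the error derivative layer by layer.
        d_out = (out - y) * out * (1 - out)      # dL/dz at the output
        d_h = (d_out @ W2.T) * h * (1 - h)       # dL/dz at the hidden layer
        W2 -= eta * h.T @ d_out; b2 -= eta * d_out.sum(axis=0)
        W1 -= eta * X.T @ d_h;   b1 -= eta * d_h.sum(axis=0)

    print(np.round(out.ravel(), 3))  # should approach [0, 1, 1, 0]

The backward pass is just the chain rule applied layer by layer: the output error is scaled by each unit's local sigmoid derivative, then pushed back through the weights to assign blame to the hidden units.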

Whiteboard notes for this lecture can be found at:
