Thursday, April 10, 2025

Lecture 7B (2025-04-10): Feeding Forward from Neurons to Networks

In this lecture, we present the foundations of supervised learning in feedforward neural networks, starting with the original inspiration from basic models of the neuron. We start with a review of the activation of a basic neuron and then map that to the Single Layer Perceptron (SLP), which we describe as a tool for binary classification of linearly separable data. Then, to extend these capabilities to data sets that are not linearly separable, we introduce Radial Basis Function Neural Networks (RBFNNs), whose Receptive Field Units (RFUs) act as a hidden layer that allows the RBFNN to do much more than the SLP. Thus, the RBFNN is our first example of a single-hidden-layer neural network. This gives us an opportunity to discuss Universal Approximation Theorems (UATs), which help to explain why the RBFNN is so much more capable than the SLP. Despite its strengths, the RBFNN is not convenient to train. So, guided by the UAT, we introduce the Multi-Layer Perceptron (MLP), a generalized version of the SLP that includes a hidden layer whose nonlinear activation functions allow for universal approximation. We discuss how backpropagation can be used to train MLPs efficiently so long as the activation functions are differentiable. We close with an introduction to Convolutional Neural Networks (CNNs), which can be viewed as MLPs that implement receptive fields much like those of the RBFNN but in a way that is more flexible and trainable. Next time, we will start discussing recurrent neural networks, their connection to biology, and how to train them.
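
As a concrete illustration of the SLP (not taken from the lecture notes themselves; the toy data, learning rate, and epoch count below are made-up choices), here is a minimal NumPy sketch of the perceptron learning rule on a linearly separable binary classification problem:

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=100):
    """Perceptron learning rule for labels y in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            # Step activation on the weighted sum; update weights only on mistakes
            if yi * (xi @ w + b) <= 0:
                w += lr * yi * xi
                b += lr * yi
    return w, b

# Toy linearly separable data: label +1 roughly when x1 + x2 > 1, else -1
X = np.array([[0.0, 0.0], [1.0, 1.0], [0.2, 0.1], [0.9, 0.8]])
y = np.array([-1, 1, -1, 1])
w, b = train_perceptron(X, y)
print(np.sign(X @ w + b))  # reproduces y once a separating hyperplane is found
```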
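
The RBFNN can then handle problems like XOR that defeat the SLP. Here is a minimal sketch, assuming Gaussian receptive field units centered on the training points and a linear output layer fit by least squares (the centers and width are illustrative choices, not the lecture's):

```python
import numpy as np

def rbf_features(X, centers, width=1.0):
    # Each column is one receptive field unit's Gaussian response to the inputs
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width ** 2))

# XOR: not linearly separable, so an SLP fails, but an RBFNN handles it
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

centers = X.copy()                            # one RFU centered on each training point
Phi = rbf_features(X, centers, width=0.5)     # hidden-layer activations
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # linear output weights by least squares
print(np.round(Phi @ w, 2))                   # approximately [0, 1, 1, 0]
```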
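
For reference, one commonly cited form of the universal approximation result (Cybenko, 1989, for sigmoidal activation functions; the lecture may present a different variant) states that a single hidden layer suffices to approximate any continuous function on a compact domain:

```latex
\[
  \forall f \in C([0,1]^n),\ \forall \epsilon > 0,\ \exists N,\ \{v_i, w_i, b_i\}
  \ \text{such that}\quad
  \sup_{x \in [0,1]^n}
  \left| \sum_{i=1}^{N} v_i \, \sigma\!\left(w_i^\top x + b_i\right) - f(x) \right| < \epsilon .
\]
```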
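
And a minimal sketch of training a one-hidden-layer MLP by backpropagation, using full-batch gradient descent on mean squared error (the layer size, learning rate, and iteration count are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer (tanh activation)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # linear output layer
lr = 0.2

for _ in range(10000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    yhat = h @ W2 + b2
    # Backward pass: gradients of mean squared error via the chain rule
    d_yhat = 2 * (yhat - y) / len(X)
    dW2 = h.T @ d_yhat;          db2 = d_yhat.sum(axis=0)
    d_h = d_yhat @ W2.T * (1 - h ** 2)   # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ d_h;             db1 = d_h.sum(axis=0)
    # Gradient descent update
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

print(np.round(yhat.ravel(), 2))   # should approach [0, 1, 1, 0]
```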
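
Finally, the shared, local receptive field at the heart of the CNN can be illustrated with a plain 2-D convolution (the image and kernel below are arbitrary toy values):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """'Valid' 2-D convolution: slide one set of shared weights over the image."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # The same weights are reused at every location: a shared receptive field
            out[i, j] = (image[i:i + kh, j:j + kw] * kernel).sum()
    return out

image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.array([[1.0, -1.0]])   # simple difference filter for horizontal intensity changes
print(conv2d_valid(image, kernel))
```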

Whiteboard notes for this lecture are available at:
https://www.dropbox.com/scl/fi/t23j4gupde7zyue83guz3/IEE598-Lecture7B-2025-04-10-Feeding_Forward_from_Neurons_to_Networks-Notes.pdf?rlkey=vpyd1htoswq54reb892clgch4&dl=0


