In this lecture, we continue to discuss the basic artificial neuron as a generalized linear model for statistical inference. We start with the problem of binary classification with a (single-layer) perceptron (SLP), which can use a threshold activation function to accurately predict membership in one of two classes so long as those classes are linearly separable. We introduce the geometric interpretation of the classification process as thresholding the level of agreement between the neural-network weight vector and the feature vector. From there, we consider radial basis function neural networks (RBFNNs), which transform the feature space to allow for more sophisticated inferences (e.g., classification for problems that are not linearly separable, function approximation, time-series prediction, etc.). The RBFNN is our first example of a single-hidden-layer neural network, which is our entry point to the multi-layer perceptrons (MLPs) that we will discuss next time.
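To make the geometric picture concrete, below is a minimal Python sketch (not taken from the lecture notes; the function names, the choice of Gaussian basis functions, and the XOR example are illustrative assumptions). It shows the perceptron decision as thresholding the dot product between the weight vector and the feature vector, and how an RBF feature map can render the XOR problem, which is not linearly separable in the original space, separable in the transformed space.

```python
import numpy as np

# Perceptron decision rule (illustrative sketch): classify x by thresholding
# the "agreement" (dot product) between the weight vector w and the feature
# vector x, shifted by a bias b.
def perceptron_predict(w, b, x):
    return 1 if np.dot(w, x) + b >= 0 else 0

# Hypothetical Gaussian RBF feature map: one bump per chosen prototype center.
def rbf_features(x, centers, gamma=1.0):
    return np.array([np.exp(-gamma * np.sum((x - c) ** 2)) for c in centers])

# XOR example: four corners of the unit square with labels 0, 1, 1, 0.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
centers = np.array([[0, 0], [1, 1]])  # prototypes for the hidden layer
Phi = np.array([rbf_features(x, centers) for x in X])
print(Phi)  # the two class-1 points (0,1) and (1,0) map to the same hidden
            # activation, so a linear threshold can now separate the classes
```

In the transformed space, a single perceptron-style output unit applied to the RBF activations suffices, which is the sense in which the RBFNN is a single-hidden-layer network.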
Whiteboard lecture notes can be found at: https://www.dropbox.com/s/dkbm3uw290gol4o/IEE598-Lecture7B-2022-03-29-Introduction_to_Neural_Networks-RBF_MLP_Backpropagation.pdf?dl=0