In this lecture, we continue our discussion of unsupervised learning methods with artificial neural networks (ANNs), with reminders of clustering, anomaly detection, and generative adversarial networks (GANs), all of which represent a form of computational creativity. We then pivot to review the basics of principal component analysis (PCA) as an example method for multidimensional scaling (MDS). That review lets us introduce the autoencoder ("self encoder") feed-forward network, an example of using methods from supervised learning to construct an unsupervised learning process that implements nonlinear multidimensional scaling. We close the lecture with an introduction to Hebbian/associative learning (and its motivations from spike-timing-dependent plasticity, STDP) and show how Hebbian update rules can be applied to conventional artificial neural networks for pattern learning.
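To make the PCA-as-MDS idea concrete, here is a minimal sketch (not from the lecture itself) that diagonalizes the covariance matrix of some hypothetical toy data and projects onto the top component; all variable names and the synthetic data are illustrative assumptions:

```python
import numpy as np

# Hypothetical toy data: 200 points lying near a line in 3-D, so a single
# principal component should capture most of the variance.
rng = np.random.default_rng(0)
t = rng.normal(size=(200, 1))
X = t @ np.array([[2.0, -1.0, 0.5]]) + 0.05 * rng.normal(size=(200, 3))

# Center the data, then diagonalize the sample covariance matrix.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (len(Xc) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending eigenvalues
order = np.argsort(eigvals)[::-1]        # reorder to descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Project onto the top k components: this is the (linear) dimensionality
# reduction that the autoencoder later generalizes nonlinearly.
k = 1
Z = Xc @ eigvecs[:, :k]

explained = eigvals[:k].sum() / eigvals.sum()
print(f"variance explained by top {k} component: {explained:.3f}")
```

A linear autoencoder with a k-unit bottleneck trained on reconstruction error recovers essentially this same subspace; adding nonlinear hidden layers is what turns the scheme into nonlinear MDS.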
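As a sketch of how a Hebbian update rule can drive unsupervised pattern learning in a conventional linear unit, the following uses Oja's rule (a normalized variant of the plain Hebbian rule, chosen here as an illustrative assumption rather than taken from the lecture): the weight vector of a single neuron converges toward the top principal direction of its input stream.

```python
import numpy as np

# Hypothetical input stream with covariance C; the top eigenvector of C
# is proportional to [1, 1].
rng = np.random.default_rng(1)
C = np.array([[3.0, 2.0], [2.0, 3.0]])
L = np.linalg.cholesky(C)                # used to sample x with covariance C

w = rng.normal(size=2)                   # random initial synaptic weights
eta = 0.01                               # learning rate
for _ in range(5000):
    x = L @ rng.normal(size=2)           # present one input pattern
    y = w @ x                            # linear neuron output
    # Oja's rule: Hebbian term (eta * y * x) plus a decay term (-eta * y^2 * w)
    # that keeps the weight vector from growing without bound.
    w += eta * y * (x - y * w)

top = np.array([1.0, 1.0]) / np.sqrt(2)  # known top eigenvector of C
print(abs(w @ top))                      # close to 1 once w aligns with it
```

The decay term matters: the plain rule Δw = η·y·x alone would make ‖w‖ diverge, which is one standard motivation for normalized Hebbian variants.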
Whiteboard notes for this lecture can be found at: