Thursday, April 7, 2022

Lecture 7E (2022-04-07): RNNs and Their Training, LSTM, and Reservoir Machines

This lecture continues our introduction to Recurrent Neural Networks (RNNs), starting with a quick refresher on time-delay neural networks (TDNNs). From TDNNs, we move to basic RNNs and then to backpropagation through time (BPTT), which can (in principle) be used to train RNNs on supervised learning tasks. We then discuss how Long Short-Term Memory (LSTM) is a regularized RNN structure that is easier to train and has been very successful in many domains, such as Natural Language Processing (NLP). We then pivot to another regularized RNN, the Echo State Machine/Reservoir Machine. Reservoir computing builds a randomized RNN as a kind of encoder that converts temporal signals into spatiotemporal representations, which can then be treated as input features for a simple feed-forward, single-layer neural network decoder. A small sketch of this encoder/decoder structure follows below. We close our discussion of supervised learning with an overview of training methodology (train, validate, and test) and then open a discussion of reinforcement and unsupervised learning that we will continue next time.
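
To make the encoder/decoder picture concrete, here is a minimal sketch (in Python/NumPy) of the reservoir-computing idea: a fixed random recurrent network is driven by an input signal, its states are collected as spatiotemporal features, and only a simple linear readout is trained by ridge regression. The toy task, reservoir size, spectral radius, washout length, and regularization strength are illustrative assumptions, not details from the lecture.

# Minimal echo state / reservoir computing sketch (illustrative values, not from the lecture)
import numpy as np

rng = np.random.default_rng(0)

# Toy task: predict the next value of a noisy sine wave
T = 1000
t = np.arange(T)
u = np.sin(0.1 * t) + 0.05 * rng.standard_normal(T)   # input signal
y = np.roll(u, -1)                                     # target: one-step-ahead value

# Fixed, random reservoir (the "encoder"); these weights are never trained
n_res = 200
W_in = rng.uniform(-0.5, 0.5, size=(n_res, 1))
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))        # scale spectral radius below 1

# Drive the reservoir and collect its states (the spatiotemporal feature representation)
x = np.zeros(n_res)
states = np.zeros((T, n_res))
for k in range(T):
    x = np.tanh(W_in @ np.array([u[k]]) + W @ x)
    states[k] = x

# Train only the linear readout (the "decoder") with ridge regression,
# discarding an initial washout period
washout = 100
X = states[washout:-1]
Y = y[washout:-1]
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)

# One-step-ahead predictions from the trained readout
pred = X @ W_out
print("training MSE:", np.mean((pred - Y) ** 2))

Only W_out is fit; the recurrent weights stay random, which is what makes this "regularized" RNN so much cheaper to train than full BPTT.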

Whiteboard notes for this lecture can be found at: https://www.dropbox.com/s/pxh77wrxv97um2r/IEE598-Lecture7E-2022-04-07-RNNs_and_their_training-Reservoir_machines-Reinforcement_learning.pdf?dl=0


