This lecture starts with a basic psychological and neurophysiological introduction to learning -- non-associative and associative. Most of the focus is on associative learning, broken into classical conditioning (Pavlov) and operant conditioning (Skinner). Psychology–machine-learning analogies are drawn between classical conditioning and unsupervised learning, as well as between operant conditioning and supervised and reinforcement learning. In principle, with the right hardware, a mechanism for associative learning can underlie all of the other learning frameworks within machine learning. With that in mind, spike-timing-dependent plasticity (STDP) is introduced as a neuronal mechanism for associative learning. In particular, we introduce Hebbian learning ("fire together, wire together") in the spiking sense and then conceptualize it in the neuronal-weights case (going from temporal coding to spatial/value coding). We then discuss a simple unsupervised pattern recognition/classification example using Hebbian updating on the weights. We then pivot to introducing true spiking neural networks (SNNs) and modern neuromorphic hardware platforms that implement them, such as SpiNNaker, IBM TrueNorth, and Intel Loihi. Next time, we will end our unit on Artificial Neural Networks/Spiking Neural Networks with a discussion of an example of an analog SNN (built with memristors) that does unsupervised pattern recognition, as well as some more advanced directions with SNNs.
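To make the two key mechanisms concrete, here is a minimal sketch (not from the lecture; the pattern, learning rate, and STDP time constants are illustrative assumptions). The first function shows the classic STDP window -- potentiation when the presynaptic spike precedes the postsynaptic spike, depression otherwise. The second part shows rate-based Hebbian updating with weight normalization on a toy input pattern, the kind of rule behind the unsupervised pattern-recognition example:

```python
import numpy as np

# --- STDP window (illustrative parameters, not the lecture's values) ---
def stdp_dw(dt_ms, a_plus=0.1, a_minus=0.12, tau_ms=20.0):
    """Weight change for spike-time difference dt_ms = t_post - t_pre.
    Pre-before-post (dt_ms > 0) potentiates; post-before-pre depresses."""
    if dt_ms > 0:
        return a_plus * np.exp(-dt_ms / tau_ms)
    return -a_minus * np.exp(dt_ms / tau_ms)

# --- Rate-based Hebbian learning on a toy pattern (hypothetical data) ---
rng = np.random.default_rng(0)
pattern = np.array([1.0, 1.0, 0.0, 0.0, 1.0, 0.0])   # repeated stimulus
w = np.ones_like(pattern) / np.sqrt(pattern.size)     # small positive init
eta = 0.1                                             # learning rate

for _ in range(200):
    x = pattern + 0.05 * rng.normal(size=pattern.size)  # noisy presentation
    y = w @ x                    # postsynaptic activity ("fire together")
    w += eta * y * x             # Hebbian update: dw proportional to pre*post
    w /= np.linalg.norm(w)       # normalization prevents runaway weight growth

# After training, w aligns with the repeated pattern (high cosine similarity)
cosine = (w @ pattern) / np.linalg.norm(pattern)
```

Without the normalization step, a pure Hebbian rule grows weights without bound; normalizing (or using a rule like Oja's) keeps the weight vector on the unit sphere, where it converges toward the dominant direction of the input statistics -- here, the repeated pattern.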
Whiteboard notes for this lecture can be found at: https://www.dropbox.com/s/kefruibf8vqe2u0/IEE598-Lecture7G-2022-04-14-Decentralized_Associative_Hebbian_Learning_and_Intro_to_SNN_and_Neuromorphic_Computing.pdf?dl=0