Tuesday, April 14, 2026

Lecture 7A (2026-04-09): Neural Foundations of Learning

In this lecture, we prepare to discuss artificial and spiking neural networks -- information processing mechanisms inspired by the central nervous system and by models of learning in psychology. We open with a discussion of the relationship between learning, memory, and neuroplasticity and then introduce a canonical model of a neuron that is the basis of the mechanisms thought to underlie neuroplasticity. We discuss the different ways in which neuroplasticity supports working, short-term, and long-term memory. We introduce Hebbian learning (and briefly mention spike-timing-dependent plasticity, STDP) as a foundational learning paradigm that, when combined with neuromodulation and specialized circuits, can implement all forms of learning described in the lecture. Those forms of learning include non-associative learning (habituation and sensitization), associative learning (classical and operant conditioning), and latent learning. We map each of these to machine learning paradigms, including unsupervised learning, self-supervised learning/pre-training, reinforcement learning, and supervised learning. In the next lecture, we will model the canonical neuron directly with a single-layer perceptron and start to build statistical models based on this artificial neuron model. Interactive demonstrations mentioned in this video:

Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/x4t0y6q9rblrn78o8ns2r/IEE598-Lecture7A-2026-04-09-Neural_Foundations_of_Learning-Notes.pdf?rlkey=im6unlrptbfppqeds2y9gpga7&dl=0
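As a rough illustration of the Hebbian rule discussed above ("cells that fire together wire together"), here is a minimal sketch in Python for a single linear rate unit. The learning rate, decay term, and random inputs are illustrative assumptions, not values or details from the lecture.

import numpy as np

# Minimal Hebbian-learning sketch: the weight on each input grows in
# proportion to the product of presynaptic activity (x) and postsynaptic
# activity (y). A small decay term keeps the weights from growing without
# bound (Oja's rule is a common, more principled stabilizer).

rng = np.random.default_rng(0)

n_inputs = 4
w = np.zeros(n_inputs)   # synaptic weights onto one postsynaptic unit
eta = 0.01               # learning rate (assumed for illustration)
decay = 0.001            # weight decay (assumed for illustration)

for step in range(1000):
    x = rng.random(n_inputs)        # presynaptic activity (random stand-in for real input)
    y = float(w @ x)                # postsynaptic activity of a simple linear unit
    w += eta * y * x - decay * w    # Hebbian update: delta-w proportional to pre * post

print(w)

Because the update depends only on locally available pre- and post-synaptic activity, no error signal or labels are needed, which is why Hebbian-style plasticity is a natural starting point for the unsupervised end of the machine learning paradigms mapped in the lecture.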
