In this lecture, we prepare to discuss artificial and spiking neural networks -- bio-inspired information-processing mechanisms modeled on the central nervous system and on theories of learning from psychology. We open with a discussion of the relationship between learning, memory, and neuroplasticity and then introduce a canonical model of a neuron that is the basis of the mechanisms thought to underlie neuroplasticity. We discuss the different ways in which neuroplasticity supports working, short-term, and long-term memory. We introduce Hebbian learning (and briefly mention spike-timing-dependent plasticity, STDP) as a foundational learning paradigm that, when combined with neuromodulation and specialized circuits, can implement all forms of learning described in the lecture. Those forms of learning include non-associative learning (habituation and sensitization), associative learning (classical and operant conditioning), and latent learning. We map each of those to machine learning paradigms including unsupervised learning, self-supervised learning/pre-training, reinforcement learning, and supervised learning. In the next lecture, we will directly model the canonical neuron with a single-layer perceptron and start to build statistical models based on this artificial neuron model.

Interactive demonstrations mentioned in this video:
- SLP: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/single_layer_perceptron/slp_explainer.html
- Hebbian Learning: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/hebbian_learning/hebbian_competitive_clustering.html
- Memristor-based STDP Learning: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/memristors/memristor_stdp_array.html
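Plain Hebbian learning strengthens a synaptic weight in proportion to the co-activity of its input and the neuron's output ("cells that fire together wire together"). A minimal sketch of this idea for a single linear neuron is below; the learning rate, input distribution, and per-step weight normalization are illustrative assumptions, not details from the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)

eta = 0.01                          # learning rate (assumed value)
w = rng.normal(scale=0.1, size=3)   # small random initial weights

# Toy inputs: independent Gaussians stretched so the first
# axis carries the most variance (assumed example data)
X = rng.normal(size=(500, 3)) * np.array([2.0, 1.0, 0.5])

for x in X:
    y = w @ x                       # neuron output (linear activation)
    w += eta * y * x                # Hebbian update: delta_w = eta * y * x
    w /= np.linalg.norm(w)          # normalize so weights stay bounded

# With this normalization, w should align with the direction of
# greatest input variance (here, the first axis)
print(np.round(np.abs(w), 2))
```

The normalization step matters: the raw Hebbian rule only ever grows weights, so some stabilizing mechanism (here, rescaling to unit length) is needed to keep them bounded, which is also why variants like Oja's rule exist.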