Thursday, April 30, 2026

Lecture 8A (2026-04-30): Complex Systems Models of Computation – Cellular Automata and Neighbors

In this (bonus) lecture, we discuss distributed, spatially explicit models of computation that come from the complex systems community. We start with a brief introduction to interacting particle systems (IPS), with a specific focus on the voter model. The voter model is simultaneously a model of neutral evolution (genetic drift leading to fixation) and a basic model of consensus/agreement in opinion dynamics. We discuss the voter model in 1, 2, and 3+ dimensions. To analyze these cases, we introduce a dual of the voter model that focuses on "contact tracing" of opinion provenance, which leads to a time-reversed set of coalescing Markov chains. From this perspective, studying the probability of consensus is equivalent to studying the probability that these coalescing random walks meet, which Polya's recurrence theorem answers: random walks in 1D and 2D are recurrent, so those voter models are guaranteed to come to consensus, whereas the same cannot be said of 3D or higher. After this result, we pivot to introducing cellular automata, and specifically 1D elementary cellular automata (ECA). We discuss how ECA's are named and how they operate, highlight several key ECA rules and their properties, and close by using lessons learned from ECA's to connect back to the niching methods for GA's that we introduced in our first unit. Interactive demonstrations referenced in this lecture can be found at:

Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/q9z3tqmv1wq57ki3kg1q5/IEE598-Lecture8A-2026-04-30-Complex_Systems_Models_of_Computation-Cellular_Automata_and_Neighbors-Notes.pdf?rlkey=x4zwop6e7swkdugkejyx7s1st&dl=0
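
As a rough illustration of how an ECA's rule number encodes its update table, here is a minimal Python sketch (not code from the lecture; the rule choice, grid width, and number of steps are arbitrary assumptions) that runs Rule 110 from a single live cell and prints the space-time diagram:

import numpy as np

def eca_step(state, rule=110):
    """One synchronous update of an elementary CA with wraparound boundaries."""
    # Bit k of the 8-bit rule number is the next state for neighborhood value
    # k = 4*left + 2*center + 1*right -- this is exactly how ECA's are "named".
    rule_bits = np.array([(rule >> k) & 1 for k in range(8)], dtype=np.uint8)
    left, right = np.roll(state, 1), np.roll(state, -1)
    return rule_bits[4 * left + 2 * state + 1 * right]

def run_eca(width=64, steps=32, rule=110):
    """Start from a single live cell and print the space-time diagram."""
    state = np.zeros(width, dtype=np.uint8)
    state[width // 2] = 1
    for _ in range(steps):
        print("".join("#" if c else "." for c in state))
        state = eca_step(state, rule)

if __name__ == "__main__":
    run_eca()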



Lecture 7G (2026-04-30): Spiking Neural Networks and Neuromorphic Computing

In this lecture, we introduce spiking neural networks and neuromorphic computing, starting with a refresher of the biological neuron and an introduction to Carver Mead, one of the founders of modern neuromorphic computing. We discuss the Leaky Integrate and Fire (LIF) model for a spiking neuron and spike-timing-dependent plasticity (STDP) for (unsupervised) learning in these neurons (temporary/working memory). We focus on rate coding and show examples of rate-coded signals as inputs to and outputs from LIF neurons. We introduce SNN implementations from SpiNNaker to IBM TrueNorth to Intel Loihi, as well as a crossbar-array memristor example published in 2017 that shows unsupervised STDP learning. We then pivot to show that Hebbian updating in traditional ANN's can also perform this task (albeit possibly not as efficiently as an SNN implementation). We close with some comments about the possible future of SNN's. Interactive widgets referenced in this lecture can be found at:

Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/7fpnrrriu4ez0sbfneyhm/IEE598-Lecture7G-2026-04-30-Spiking_Neural_networks_and_Neuromorphic_Computing-Notes.pdf?rlkey=9mdotvp12ka5g9j4dzoi9qloi&dl=0
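
For those who want to experiment, the following minimal Python sketch (not the lecture's code; every parameter value is an illustrative assumption) simulates a single LIF neuron driven by a rate-coded, Poisson-like input spike train and reports the resulting output firing rate for several input rates:

import numpy as np

rng = np.random.default_rng(0)

# All values below are illustrative assumptions, not parameters from the lecture.
dt = 1e-3          # time step [s]
T = 2.0            # simulated time per input rate [s]
tau_m = 20e-3      # membrane time constant [s]
v_rest, v_thresh = 0.0, 1.0   # resting potential and spike threshold (normalized)
w = 0.3            # jump in membrane potential per input spike

def lif_output_rate(in_rate):
    """Drive one LIF neuron with a Bernoulli (Poisson-like) spike train."""
    v, n_out = v_rest, 0
    for _ in range(int(T / dt)):
        spike_in = rng.random() < in_rate * dt            # rate-coded input
        v += dt / tau_m * (v_rest - v) + w * spike_in     # leaky integration
        if v >= v_thresh:                                 # fire and reset
            n_out += 1
            v = v_rest
    return n_out / T

# Rate coding: a higher input rate maps to a higher output rate (after threshold).
for r_in in (50.0, 100.0, 200.0, 400.0):
    print(f"input {r_in:5.0f} Hz -> output {lif_output_rate(r_in):5.1f} Hz")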





Tuesday, April 28, 2026

Lecture 7F (2026-04-28): Predictive Coding, Latent Learning, and Self-Supervised Learning

In this lecture, we pivot from our discussion of the autoencoder as an example of unsupervised learning to an introduction to predictive coding, latent learning, and ultimately self-supervised learning (as in pre-trained transformers such as BERT and GPT). A key historical example described is the case of Tolman's rats and their "latent learning" of a "cognitive map" that allowed them to more quickly learn the location of a reward when it was presented in a later trial. We connect this with the modern pre-training of large language models (LLM's), which gives them the ability to make later inferences, without retraining, that benefit from the long-range relationships they learned (by way of complex attention heads) during pre-training. We close with some remarks about large multimodal models and their connection with embedding spaces like CLIP's (which we introduced earlier as we transitioned from the opening example of the autoencoder). Interactive widgets mentioned/used in this lecture can be found at:

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/pihkdryix5e4ynqy5zotx/IEE598-Lecture7F-2026-04-28-Predictive_Coding_Latent_Learning_and_Self_Supervised_Learning-Notes.pdf?rlkey=b0ejd4usvqn4fpievga9sbl4m&dl=0
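
As a toy illustration of the self-supervised idea (the "labels" come from the data itself rather than from human annotation), the Python sketch below "pre-trains" nothing more than a bigram count table on a tiny unlabeled corpus and then uses it to predict next tokens. The corpus and the stand-in "model" are, of course, illustrative assumptions far simpler than a transformer:

from collections import Counter, defaultdict

# A tiny unlabeled "corpus"; the supervision signal (the next token) comes
# from the data itself, with no human-provided labels.
corpus = ("the rat learned the maze . the rat found the food . "
          "the maze led to the food").split()

# "Pre-training": gather next-token statistics from the raw text.
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def predict_next(token):
    """Return the most frequent next token seen during 'pre-training'."""
    if token not in next_counts:
        return None
    return next_counts[token].most_common(1)[0][0]

# "Inference" on prompts that never required labeled data.
print(predict_next("rat"))     # 'learned' or 'found', whichever came first among ties
print(predict_next("found"))   # 'the'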



Thursday, April 23, 2026

Lecture 7E (2026-04-23): Natural Learning Experiences – Reinforcement and Unsupervised Learning

In this lecture, we introduce Temporal Difference (TD) Q-learning and Deep Q Networks, starting with an analogy to how ants encode estimates of reward for state–action pairs in pheromone trails in the environment (another way to store a "Q" table in a network). We then pivot to discussing unsupervised learning – including both clustering and multi-dimensional scaling. After discussing PCA and t-SNE (briefly), we turn to describing the deep autoencoder and show an example of its use in an MNIST-like clustering task.

Interactive demonstrations mentioned in this lecture include:
* Marginal Value Theorem Explorer (to better understand discount rate): https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/optimal_foraging_theory/mvt_explorer.html
* Autoencoder Explorer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/unsupervised_learning/autoencoder_explorer.html

Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/assv5cheln8xqp2tvzj1k/IEE598-Lecture7E-2026-04-23-Natural_Learning_Experiences-Reinforcement_and_Unsupervised_Learning-Notes.pdf?rlkey=a3iyshlufzgkyfxl7gby85pe2&dl=0

An unabridged version of the whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/lgibsff4lhh0ezb1lnm41/IEE598-Lecture7E-2026-04-23-Natural_Learning_Experiences-Reinforcement_and_Unsupervised_Learning-Notes-Full.pdf?rlkey=jayejujed8ervq8zsrddi1vcr&dl=0
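
To make the TD Q-learning update concrete, here is a minimal Python sketch (the toy chain environment and the parameter values are assumptions for illustration, not anything from the lecture) of tabular Q-learning with an epsilon-greedy policy:

import random

# Toy 5-state chain: states 0..4, actions step left (-1) or right (+1).
# Reaching state 4 ends the episode with reward 1; everything else is reward 0.
n_states, actions = 5, [-1, +1]
alpha, gamma, eps = 0.5, 0.9, 0.2     # learning rate, discount factor, exploration rate
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}

def env_step(s, a):
    s2 = min(max(s + a, 0), n_states - 1)
    reward = 1.0 if s2 == n_states - 1 else 0.0
    return s2, reward, s2 == n_states - 1

random.seed(0)
for _ in range(500):                                   # training episodes
    s = random.randrange(n_states - 1)                 # "exploring starts"
    for _ in range(100):                               # cap episode length
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: Q[(s, act)])
        s2, r, done = env_step(s, a)
        # TD(0) Q-learning update toward the bootstrapped target
        target = r if done else r + gamma * max(Q[(s2, b)] for b in actions)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2
        if done:
            break

# Greedy policy after learning: expect "move right" (+1) from every non-terminal state.
print({s: max(actions, key=lambda act: Q[(s, act)]) for s in range(n_states - 1)})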



Tuesday, April 21, 2026

Lecture 7D (2026-04-21): RNN's – Backpropagation Through Time (BPTT), Long Short Term Memory (LSTM), and Reservoir Computing/Echo State Networks (ESNs)

In this lecture, we continue our discussion of Recurrent Neural Networks (RNN's) as generalized forms of the Time Delay Neural Network (TDNN) that can do time-series classification (and prediction) using an inductive bias that can pull in information from a wide range of times (well beyond the size of the network itself, due to the use of output feedback to maintain state). We discuss how these networks can be trained with Backpropagation Through Time (BPTT) and some limitations of this approach. This motivates the more constrained Long Short Term Memory (LSTM) and Gated Recurrent Unit (GRU) architectures, which mitigate some issues with training general RNN's. We then pivot to a different approach entirely -- using recurrent neural networks as untrained "reservoirs" whose outputs are dynamical encoders that spread temporal patterns out into spatial ones that can be learned with a single-layer perceptron. We demonstrate this using an Echo State Network (ESN) and walk through how even small networks can provide significant separability for time series. We also discuss how these approaches can be used to predict chaotic time series, with applications in finance as well as digital twins (e.g., for manufacturing systems).

Interactive demonstrations connected to this lecture can be found at:

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/jdwe24zsmevhmaxl7x954/IEE598-Lecture7D-2026-04-21-RNNs-BPTT_LSTM_and_Reservoir_Computing-Notes.pdf?rlkey=22dv6950zcsjl98e0o96de11q&dl=0
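
The reservoir-computing idea can be sketched in a few lines of Python. The snippet below is illustrative only (the reservoir size, spectral radius, ridge penalty, and the toy one-step-ahead prediction task are all assumptions): it drives a fixed random reservoir with a quasi-periodic signal and trains only a linear ridge-regression readout.

import numpy as np

rng = np.random.default_rng(1)

# Toy task (an assumption for illustration): one-step-ahead prediction of a
# simple quasi-periodic signal.
t = np.arange(2000)
u = np.sin(0.2 * t) + 0.5 * np.sin(0.311 * t)

n_res = 200                                   # reservoir size (assumed)
w_in = rng.uniform(-0.5, 0.5, n_res)          # fixed, untrained input weights
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))     # spectral radius below 1 ("echo state" property)

# Drive the reservoir and collect its states (the spatial spread of the temporal signal)
x = np.zeros(n_res)
states = []
for u_t in u[:-1]:
    x = np.tanh(W @ x + w_in * u_t)
    states.append(x.copy())
X = np.array(states)                          # shape (T-1, n_res)
y = u[1:]                                     # next-step targets

# Train only the linear readout, here by ridge regression, after a washout period
washout, lam = 100, 1e-6
Xw, yw = X[washout:], y[washout:]
w_out = np.linalg.solve(Xw.T @ Xw + lam * np.eye(n_res), Xw.T @ yw)

pred = X @ w_out
rmse = np.sqrt(np.mean((pred[washout:] - y[washout:]) ** 2))
print(f"one-step-ahead training RMSE after washout: {rmse:.4f}")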



Thursday, April 16, 2026

Lecture 7C (2026-04-16): Recurrent Networks and Temporal Supervision

In this lecture, we finish up our coverage of supervised learning of feedforward multi-layer perceptrons with a discussion of how the Convolutional Neural Network imposes an inductive bias that simplifies training and pays off for images but may not work so well for text strings. We then shift our focus to recurrent networks with temporal supervision, which may help to provide a solution when highly local inductive biases aren't effective (as is the case for text and time-series analysis). We discuss several coincidence detectors from neuroscience in the context of hearing and vision, and we use them to motivate Time Delay Neural Networks (TDNNs) as our bridge to Recurrent Neural Networks (RNNs). This allows for analogies to be made to Finite Impulse Response (FIR) and Infinite Impulse Response (IIR) filters. We close by transitioning from a basic output-feedback configuration to a generic RNN with hidden states but effectively no "layers." We will pick up next time with backpropagation-through-time (BPTT), Long Short Term Memory (LSTM), reservoir computing (Echo State Networks, ESN's), and an introduction to reinforcement learning. Interactive demonstration widgets related to this lecture can be found at:

Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/pi8vxjrn6gbftpdab977w/IEE598-Lecture7C-2026-04-16-Recurrent_Networks_and_Temporal_Supervision-Notes.pdf?rlkey=1xltyg1ttcpyqhvtdquczxjr5&dl=0
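
To illustrate the FIR/IIR analogy, the short Python sketch below (the tap weights, feedback weight, and impulse-input signal are arbitrary assumptions, not lecture material) compares the impulse response of a tapped delay line (the TDNN-style finite memory) with that of a single output-feedback unit (recurrent, fading but unbounded memory):

import numpy as np

u = np.zeros(20)
u[0] = 1.0                                   # unit impulse input

# FIR / tapped-delay-line view (the TDNN idea): the output depends on a
# finite window of past inputs only. Tap weights are arbitrary assumptions.
taps = np.array([0.5, 0.3, 0.2])             # weights on u[t], u[t-1], u[t-2]
fir = np.zeros(len(u))
for t in range(len(u)):
    window = [u[t - k] if t - k >= 0 else 0.0 for k in range(len(taps))]
    fir[t] = taps @ np.array(window)

# IIR / output-feedback view (the recurrent idea): feedback gives a fading
# but unbounded memory. Feedback and input weights are arbitrary assumptions.
a, b = 0.8, 1.0
iir = np.zeros(len(u))
for t in range(len(u)):
    iir[t] = a * (iir[t - 1] if t >= 1 else 0.0) + b * u[t]

print("FIR impulse response:", np.round(fir, 3))   # nonzero for only len(taps) steps
print("IIR impulse response:", np.round(iir, 3))   # decays geometrically, never exactly zero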



Tuesday, April 14, 2026

Lecture 7B (2026-04-14): Feeding Forward from Neurons to Networks (SLP, RBFNN, MLP, and CNN)

In this lecture, we move from the learning foundations of the last lecture into models of neurons that can be combined to form machine learning tools. We start with the single-layer perceptron (SLP), explain where the term "weights" comes from, and describe how an SLP can linearly separate a space. We then introduce a hidden layer of receptive field units (RFU's) and discuss how Radial Basis Function Neural Networks (RBFNN's) use Gaussian or logistic RBF's as nonlinear projections into a high-dimensional space that Cover's theorem suggests should be more likely to be linearly separable. After demonstrating how RBFNN's work, we introduce Cybenko's Universal Approximation Theorem (UAT) and use it to motivate looking for other (and deeper) latent structures. That leads us to the Multi-Layer Perceptron (MLP), backpropagation, and the Convolutional Neural Network.

Interactive widgets referenced in this lecture include:

Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/t2aoepucn0swlkvisococ/IEE598-Lecture7B-2026-04-14-Feeding_Forward_from_Neurons_to_Networks-Notes.pdf?rlkey=s5pr1zdrnup2ca1nthf7zxp3n&dl=0
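
As a small illustration of the Cover's-theorem intuition, the Python sketch below (the RBF centers, width, and training schedule are assumptions chosen for illustration) projects the XOR problem through two Gaussian receptive-field units and then trains a single-layer perceptron on those features; the perceptron succeeds even though raw XOR is not linearly separable:

import numpy as np

# XOR in its raw 2-D form is not linearly separable.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])

def rbf_features(X, centers, sigma=0.5):
    """Gaussian receptive-field activations: one feature per center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

centers = np.array([[0.0, 0.0], [1.0, 1.0]])   # assumed RFU centers
Phi = rbf_features(X, centers)                 # 4 x 2 hidden-layer outputs
Phi_b = np.hstack([Phi, np.ones((4, 1))])      # append a bias term

# Train a single-layer perceptron on the RBF features with the classic
# perceptron learning rule.
w = np.zeros(3)
for _ in range(50):
    for phi, target in zip(Phi_b, y):
        pred = 1 if phi @ w > 0 else 0
        w += (target - pred) * phi

print("predictions:", [1 if phi @ w > 0 else 0 for phi in Phi_b])  # [0, 1, 1, 0]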


