Thursday, April 28, 2022

Lecture 8C (2022-04-28): Elementary Cellular Automata, Stochastic Cellular Automata, and Cellular Evolutionary Algorithms

In this lecture, we review Elementary Cellular Automata (ECAs), some classic rules, and how to interpret them. We then transition to a brief discussion of hardware-based cellular automata for systems like the Ising model, which enable hardware annealing machines that perform simulated annealing (SA) optimization very quickly. That lets us transition to stochastic CAs (SCAs), and we conclude with an introduction to cellular evolutionary algorithms (cEAs), which place different selective pressures on a population based on the spatial distribution of its individuals (with CA-like neighborhoods defining the subpopulations on which a GA acts, as if the GA were the CA update rule). That concludes the class for Spring 2022.

Whiteboard notes for this lecture can be found at: https://www.dropbox.com/s/0k3lcpm50hpwygt/IEE598-Lecture8C-2022-04-28-Elementary_Cellular_Automata-Stochastic_Cellular_Automata-and-Cellular_Evo_Alg.pdf?dl=0
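
As a concrete illustration of that last idea, below is a minimal Python sketch of a cellular evolutionary algorithm on a toy OneMax problem. The grid size, von Neumann neighborhood, mutation rate, and local-elitism replacement rule are all assumptions made for the demonstration, not the specific cEA presented in lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (all assumptions): each grid cell holds one bit-string individual,
# fitness is the number of ones (OneMax), and selection and crossover act only
# within a von Neumann neighborhood -- as if the GA were the CA update rule.
GRID, BITS, STEPS = 10, 20, 50
pop = rng.integers(0, 2, size=(GRID, GRID, BITS))

def fitness(ind):
    return int(ind.sum())

def neighbors(i, j):
    # von Neumann neighborhood with wrap-around (torus)
    return [((i - 1) % GRID, j), ((i + 1) % GRID, j),
            (i, (j - 1) % GRID), (i, (j + 1) % GRID)]

for _ in range(STEPS):
    new_pop = pop.copy()
    for i in range(GRID):
        for j in range(GRID):
            # mate cell (i, j) with its fittest neighbor
            mate = max(neighbors(i, j), key=lambda ij: fitness(pop[ij]))
            # uniform crossover followed by bit-flip mutation
            mask = rng.integers(0, 2, size=BITS).astype(bool)
            child = np.where(mask, pop[i, j], pop[mate])
            child = child ^ (rng.random(BITS) < 0.01)
            # replace only if the child is at least as fit (local elitism)
            if fitness(child) >= fitness(pop[i, j]):
                new_pop[i, j] = child
    pop = new_pop

print("best fitness:", max(fitness(pop[i, j])
                           for i in range(GRID) for j in range(GRID)))
```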



Tuesday, April 26, 2022

Lecture 8B (2022-04-26): Elementary Cellular Automata, Computation, and Experimentation

In this lecture, we move on from generic Interacting Particle Systems (IPS) to the specific case of (deterministic) Cellular Automata (CA). We spend a lot of time on Elementary CAs (ECAs) and the complex behaviors they can generate, as cataloged in Wolfram's _A New Kind of Science_ (NKS). We discuss density classification, pattern recognition, and the mapping from CAs to (recurrent) neural networks (RNNs/ANNs). By showing that certain ECA rules ("Rule 110" in particular) are Turing complete, we effectively show that Recurrent Neural Networks/Time-Delay Neural Networks (TDNNs) are themselves Turing complete (i.e., they can execute any computable function). We also discuss a few basic 2D CAs. Throughout the lecture, NetLogo (Web) is used to demonstrate the CAs, which allows us to introduce agent-based modeling a bit (including a Particle Swarm Optimization (PSO) empirical example).

Whiteboard notes for this lecture can be found at: https://www.dropbox.com/s/gx54i8wt6gn2uow/IEE598-Lecture8B-2022-04-26-Elementary_Cellular_Automata-Computation-and-Experimentation.pdf?dl=0
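
For reference, here is a minimal Python sketch of how an elementary CA rule number (Rule 110, for example) is interpreted as a lookup table over the eight possible (left, center, right) neighborhoods. The ring size, seed configuration, and number of steps are arbitrary choices for the demo.

```python
import numpy as np

def eca_step(cells, rule=110):
    """One synchronous update of a 1-D elementary CA on a ring.

    The Wolfram rule number encodes the output bit for each of the eight
    (left, center, right) neighborhoods: bit k of the rule number is the
    new state of a cell whose neighborhood, read as a binary number, is k.
    """
    rule_bits = np.array([(rule >> k) & 1 for k in range(8)], dtype=np.uint8)
    left, right = np.roll(cells, 1), np.roll(cells, -1)
    idx = 4 * left + 2 * cells + right     # neighborhood as a number 0..7
    return rule_bits[idx]

# Demo (arbitrary choices): a single seed cell, 20 rows of Rule 110
cells = np.zeros(64, dtype=np.uint8)
cells[32] = 1
for _ in range(20):
    print("".join("#" if c else "." for c in cells))
    cells = eca_step(cells, rule=110)
```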



Sunday, April 24, 2022

Lecture 8A (2022-04-21): Complex Systems Approaches to Computation – Interacting Particle Systems and the Voter Model

In this lecture, we introduce a short unit on complex systems approaches that intersect with computation and algorithms. We start with Interacting Particle Systems (IPS), a class of mathematical models describing systems of agents that interact based on location or contact with each other. Some of these are "Self-Organizing Particle Systems" (SOPS) that describe hypothetical agents designed to build or maintain structures in engineered systems; others are more generic IPS models meant to explain phenomena in population dynamics or community ecology. We focus on the "Voter Model," which can be viewed as a model of consensus among agents that randomly copy their neighbors' opinions or, equivalently, of fixation in evolutionary systems that are not under selection. We analyze the Voter Model as a non-ergodic Markov chain with absorbing states and then show how a dual process, which represents a sort of "contact tracing" of consensus backward in time, is ergodic and can thus be analyzed with a suite of mathematical tools. One of those tools helps us prove the possibly counterintuitive result that the voter model on a lattice of two or fewer dimensions reaches consensus/fixation with probability one, but has non-zero probability of never reaching consensus/fixation in three or more dimensions. Thus, the dimensionality of the (translation-invariant) neighborhoods matters. We then prepare for our next lecture, where we'll introduce Cellular Automata (CA), a deterministic interacting particle system that can nevertheless produce fascinating patterns that help demonstrate how computation can be embodied in space.

Whiteboard notes for this lecture can be found at: https://www.dropbox.com/s/5bt6uinpkidw2p4/IEE598-Lecture8A-2022-04-21-Complex_Systems_Approaches_to_Computation-Interacting_Particle_Systems_and_the_Voter_Model.pdf?dl=0
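
Below is a minimal simulation sketch of the voter model on a one-dimensional ring, just to make the update rule concrete. Note that the dimensionality result discussed above concerns infinite lattices; on any finite graph, like this ring, the chain is absorbed at consensus with probability one. The ring size and step budget are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Voter model on a 1-D ring (sizes and step budget are arbitrary):
# at each step, a uniformly random voter copies the opinion of a uniformly
# random neighbor. On a finite graph, this absorbing Markov chain reaches
# one of the two consensus states with probability one.
N, MAX_STEPS = 50, 200_000
opinions = rng.integers(0, 2, size=N)

for step in range(MAX_STEPS):
    if opinions.min() == opinions.max():        # consensus/fixation reached
        print(f"consensus on opinion {opinions[0]} after {step} steps")
        break
    i = int(rng.integers(N))
    j = (i + rng.choice([-1, 1])) % N           # pick one ring neighbor
    opinions[i] = opinions[j]
else:
    print("no consensus within the step budget")
```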



Lecture 8A-Intro (2022-04-21): Clarifications about Final Exam and Final Project

This 20-minute segment provides an overview of the format and expectations for the final exam for the Spring 2022 section of IEE/CSE 598 (Bio-Inspired AI and Optimization). It also covers questions about the final project.



Tuesday, April 19, 2022

Lecture 7H (2022-04-19): From Spiking Neural Networks to Continual Learning and Beyond

In this lecture, we continue our discussion of neuromorphic engineering, with a focus on spiking neural network (SNN) architectures. We review the basic dynamics of an "action potential" and mention a few ODE models (Hodgkin–Huxley, integrate-and-fire, etc.) of such dynamics. Modern SNN platforms, such as SpiNNaker and the more recent IBM TrueNorth and Intel Loihi hardware ("chip") solutions, implement hardware and software emulations of these dynamics to simulate large networks of spiking neurons (as opposed to the mathematical abstractions used in more traditional ANNs). We also discuss "neuromemristive" platforms that use hysteretic "memristors" as very simple artificial spiking neurons, and we mention an example from Boyn et al. (2017, Nature Communications) of a crossbar architecture of such memristors that can accomplish unsupervised learning for pattern recognition/classification. We then move on to discussing how backpropagation (gradient descent) can now be used for supervised learning on spiking neural networks, which simplifies training significantly. That brings us to state-of-the-art and beyond-state-of-the-art nature-inspired directions, such as using neural network "sleep" to improve continual learning and introducing "neuromodulation" to add further richness to artificial neural networks.

Whiteboard notes for this lecture can be found at: https://www.dropbox.com/s/31ti6sni3zpkw64/IEE598-Lecture7H-2022-04-19-From_Spiking_Neural_Networks_to_Continual_Learning_and_Beyond.pdf?dl=0
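
To make the ODE abstraction concrete, here is a minimal leaky integrate-and-fire (LIF) neuron simulated with a forward-Euler update. The membrane time constant, threshold, and input current are illustrative assumptions, not parameters of any of the hardware platforms mentioned above.

```python
# Leaky integrate-and-fire neuron with a forward-Euler update.
# All constants are illustrative assumptions.
dt, T = 1e-3, 0.5                      # time step and total duration [s]
tau, v_rest, v_thresh, v_reset = 20e-3, 0.0, 1.0, 0.0
input_current = 1.2                    # constant supra-threshold drive (assumed)

v = v_rest
spike_times = []
for k in range(int(T / dt)):
    t = k * dt
    # Euler step of  tau * dv/dt = -(v - v_rest) + I(t)
    v += (dt / tau) * (-(v - v_rest) + input_current)
    if v >= v_thresh:                  # threshold crossing -> emit a spike
        spike_times.append(t)
        v = v_reset                    # reset the membrane potential

print(f"{len(spike_times)} spikes; first spike at t = {spike_times[0]:.3f} s")
```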



Thursday, April 14, 2022

Lecture 7G (2022-04-14): Decentralized Associative/Hebbian Learning and Intro. to Spiking Neural Networks and Neuromorphic Computing

This lecture starts with a basic psychological and neurophysiological introduction to learning -- non-associative and associative. Most of the focus is on associative learning, broken into classical conditioning (Pavlov) and operant conditioning (Skinner). Analogies are drawn between classical conditioning and unsupervised learning, as well as between operant conditioning and supervised and reinforcement learning. In principle, with the right hardware, a mechanism for associative learning can underlie all of the other learning frameworks within machine learning. With that in mind, spike-timing-dependent plasticity (STDP) is introduced as a neuronal mechanism for associative learning. In particular, we introduce Hebbian learning ("fire together, wire together") in the spiking sense and then conceptualize it in the neuronal-weights case (going from temporal coding to spatial/value coding). We then discuss a simple unsupervised pattern recognition/classification example using Hebbian updating of the weights. We then pivot to introducing true spiking neural networks (SNNs) and modern platforms, such as SpiNNaker, IBM TrueNorth, and Intel Loihi. Next time, we will end our unit on Artificial Neural Networks/Spiking Neural Networks with a discussion of an example of an analog SNN (built with memristors) that performs unsupervised pattern recognition, as well as some more advanced directions for SNNs.

Whiteboard notes for this lecture can be found at: https://www.dropbox.com/s/kefruibf8vqe2u0/IEE598-Lecture7G-2022-04-14-Decentralized_Associative_Hebbian_Learning_and_Intro_to_SNN_and_Neuromorphic_Computing.pdf?dl=0
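
As a small illustration of Hebbian weight updating in the rate-based (spatial/value coding) sense, here is a sketch using Oja's normalized variant of the Hebbian rule on two made-up input patterns; it is a stand-in for the lecture's example, not a reproduction of it. The patterns, presentation probabilities, and learning rate are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Rate-based Hebbian learning ("fire together, wire together") with Oja's
# normalization so the weights stay bounded. The two input "patterns," their
# presentation probabilities, and the learning rate are made up for illustration.
patterns = np.array([[1.0, 1.0, 0.0, 0.0],
                     [0.0, 0.0, 1.0, 1.0]])
w = rng.normal(scale=0.1, size=4)
eta = 0.05

for _ in range(1000):
    # present the first pattern more often, so it dominates the correlations
    x = patterns[0] if rng.random() < 0.8 else patterns[1]
    y = w @ x                          # postsynaptic activity
    w += eta * y * (x - y * w)         # Hebbian growth term minus decay (Oja)

print("learned weights:", np.round(w, 2))
print("responses to each pattern:", np.round(patterns @ w, 2))
# The weights drift toward the more frequently presented pattern, i.e., the
# dominant direction of the input correlations.
```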



Tuesday, April 12, 2022

Lecture 7F (2022-04-12): ANN Reinforcement Learning, Unsupervised Learning, Multidimensional Scaling, & Hebbian/Associative Learning

In this lecture, we introduce applications of ANNs outside of conventional supervised learning. In particular, we briefly discuss reinforcement learning (RL and "deep Q-learning") and then introduce unsupervised learning. As examples of unsupervised learning, we discuss clustering, autoencoders, and multidimensional scaling (MDS). We end with a brief introduction to Hebbian/associative learning, which will be picked up next time as we start talking about spiking neural networks (and spike-timing-dependent plasticity, STDP).

Whiteboard notes for this lecture can be found at: https://www.dropbox.com/s/a4l4k7rs589jcjn/IEE598-Lecture7F-2022-04-12-ANN_Reinforcement_Learning-Unsupervised_Learning-Multidimensional_Scaling-and-Hebbian_Associative_Learning.pdf?dl=0
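
For the multidimensional scaling piece, here is a minimal sketch of classical MDS: double-center the squared pairwise distances and embed the points using the top eigenvectors of the resulting Gram matrix. The sample points are random and purely for demonstration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Classical MDS: recover low-dimensional coordinates from pairwise distances.
# The sample points below are random assumptions purely for demonstration.
X = rng.normal(size=(6, 5))                                   # 6 points in 5-D
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)    # pairwise distances

n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n           # centering matrix
B = -0.5 * J @ (D ** 2) @ J                   # double-centered Gram matrix
vals, vecs = np.linalg.eigh(B)                # eigen-decomposition (ascending)
order = np.argsort(vals)[::-1][:2]            # indices of the two largest eigenvalues
coords = vecs[:, order] * np.sqrt(vals[order])

# coords embeds the 6 points in 2-D while approximately preserving D
print(np.round(coords, 2))
```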



Thursday, April 7, 2022

Lecture 7E (2022-04-07): RNNs and Their Training, LSTM, and Reservoir Machines

This lecture continues our introduction to Recurrent Neural Networks (RNNs), starting with a quick refresher on time-delay neural networks (TDNNs). From TDNNs, we discuss basic RNNs and then backpropagation through time (BPTT), which can (in principle) be used to train RNNs on supervised learning tasks. We then discuss how Long Short-Term Memory (LSTM) is a regularized RNN structure that is easier to train and has been very successful in many domains, such as Natural Language Processing (NLP). We then pivot to another regularized RNN, the Echo State Machine/Reservoir Machine. Reservoir computing builds a randomized RNN as a kind of encoder that converts temporal signals into spatiotemporal representations, which can then be treated as input features to a simple feed-forward, single-layer neural network decoder. We then close our discussion of supervised learning with a discussion of training methodology (train, validate, and test) and open a discussion of reinforcement and unsupervised learning that we will continue next time.

Whiteboard notes for this lecture can be found at: https://www.dropbox.com/s/pxh77wrxv97um2r/IEE598-Lecture7E-2022-04-07-RNNs_and_their_training-Reservoir_machines-Reinforcement_learning.pdf?dl=0
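
Below is a minimal echo state (reservoir) sketch in the spirit of that discussion: a fixed random recurrent reservoir expands the input signal, and only a linear readout is trained, here by ridge regression. The reservoir size, spectral-radius scaling, and the toy one-step-ahead prediction task are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Echo state sketch: fixed random reservoir, trained linear readout only.
# Toy task (assumed): predict sin(t + 0.1) from sin(t).
N_RES, WASHOUT, RIDGE = 200, 100, 1e-6
t = np.arange(0, 3000) * 0.1
u, y_target = np.sin(t), np.sin(t + 0.1)

W_in = rng.uniform(-0.5, 0.5, size=N_RES)
W = rng.normal(size=(N_RES, N_RES))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()   # scale spectral radius below 1

states = np.zeros((len(u), N_RES))
x = np.zeros(N_RES)
for k, u_k in enumerate(u):
    x = np.tanh(W @ x + W_in * u_k)             # reservoir (encoder) update
    states[k] = x

# Ridge-regression readout (decoder), fit after discarding a washout period
S, y = states[WASHOUT:], y_target[WASHOUT:]
W_out = np.linalg.solve(S.T @ S + RIDGE * np.eye(N_RES), S.T @ y)
pred = states @ W_out
print("RMSE after washout:", np.sqrt(np.mean((pred[WASHOUT:] - y) ** 2)))
```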



Tuesday, April 5, 2022

Lecture 7D (2022-04-05): CNNs, Insect Brains, More Complex Neural Networks (TDNNs and RNNs)

In this lecture, we complete our discussion of how backpropagation allows gradient descent to train deep neural networks (feed-forward, multi-layer perceptrons in general). We then pivot to more regularized feed-forward neural networks, like convolutional neural networks (CNNs), which combine convolutional layers with pooling layers and thereby simplify training while producing a low-dimensional feature set (relative to the dimensions of the input). After a brief discussion of the feed-forward architecture of the insect/pancrustacean brain, we shift to time-delay neural networks (TDNNs) as an entry point into recurrent neural networks (RNNs) and reservoir machines, which we will pick up next time.
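
To make the convolution-plus-pooling idea concrete, here is a minimal Python sketch of a single convolutional "layer": a 2-D cross-correlation with a small kernel, a ReLU, and 2x2 max pooling. The toy image and the vertical-edge-detection kernel are illustrative assumptions.

```python
import numpy as np

# One convolutional "layer" in miniature: convolution (really cross-correlation,
# as in most CNN libraries), then ReLU, then 2x2 max pooling.
def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

image = np.zeros((8, 8))
image[:, 4:] = 1.0                           # left half dark, right half bright
kernel = np.array([[-1.0, 0.0, 1.0]] * 3)    # vertical-edge detector (assumed)
features = max_pool(np.maximum(conv2d(image, kernel), 0))   # conv -> ReLU -> pool
print(features)                              # strongest response at the edge
```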


