Monday, April 28, 2025

Lecture 8A (2025-04-29): Complex Systems Models of Computation – Cellular Automata and Neighbors

This lecture introduces approaches for understanding (and building) computational systems that emerge out of Complex Adaptive Systems (CAS). It first motivates the idea that many interconnected parts, each relatively easy to understand in isolation, can come together in a system whose network of interactions leads to emergent global phenomena that cannot be predicted from the properties or behaviors of any individual component. We then focus on the role of space in the functions and properties that emerge at a global level. We do this through the example of the Interacting Particle System (IPS) known as the "voter model", which can be viewed as a model of neutral evolution in spatially structured populations. We show that the dual process for the voter model is a time-reversed set of coalescing random walkers, and consensus in the model corresponds to the walkers being guaranteed to coalesce into a single walker in the past of the dual process. This lets us apply Pólya's recurrence theorem to show that consensus is guaranteed (with probability 1) on 1- and 2-dimensional lattices but not on lattices of 3 dimensions or higher. This implies that neutral evolution (for example) in a 3D spatial structure may not always lead to fixation on one genotype. We then pivot to introducing Elementary Cellular Automata (ECA) and describe a few rules that demonstrate how they work. We close the regular lecture by connecting CA's back to neural networks (the previous unit) and evolutionary algorithms (the first unit), thus introducing the Cellular Evolutionary Algorithm (cEA). We then extend the lecture a little longer than usual in order to demonstrate several ECA's in NetLogo, including how to combine two ECA rules to generate a reliable density classifier.
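
As a rough Python sketch of how an ECA update works, and of the two-rule density-classifier idea (the in-class demo used NetLogo; the specific pairing of the "traffic" rule 184 followed by the majority rule 232, and the step counts below, follow a well-known construction and are an assumption about what was shown):

```python
import random

def eca_step(cells, rule):
    """One synchronous update of an elementary CA (periodic boundaries).
    The new state of cell i is the bit of `rule` indexed by the
    3-bit neighborhood (left, center, right)."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

def classify_density(cells):
    """Two-phase density classifier: rule 184 ("traffic") spreads 1s out,
    then rule 232 (local majority) votes. Step counts are approximate;
    the exact counts depend on the lattice size."""
    n = len(cells)
    for _ in range(n // 2):
        cells = eca_step(cells, 184)
    for _ in range(n // 2):
        cells = eca_step(cells, 232)
    return cells  # (nearly) all 1s if initial density > 1/2, all 0s if < 1/2

cells = [1 if random.random() < 0.4 else 0 for _ in range(99)]
print(sum(classify_density(cells)))  # almost surely 0 for density below 1/2
```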

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/gfna3a2aj49fq2a4sovst/IEE598-Lecture8A-2025-04-29-Complex_Systems_Models_of_Computation-Cellular_Automata_and_Neighbors-Notes.pdf?rlkey=mo5jag4axljxrbpnq6wkk9egh&dl=0



Thursday, April 24, 2025

Lecture 7F (2025-04-24): Spiking Neural Networks and Neuromorphic Computation

This lecture explores how real and artificial brains learn using spikes. We begin by reviewing the structure and behavior of spiking neurons, focusing on the Leaky Integrate-and-Fire (LIF) model and the efficiency of sparse, event-driven temporal coding. We then introduce Spike-Timing-Dependent Plasticity (STDP), a biologically inspired learning rule that adjusts synaptic strength based on the relative timing of spikes. From there, we survey major neuromorphic hardware platforms—SpiNNaker, TrueNorth, and Loihi—highlighting their architectural differences and support for learning. We then examine memristor-based crossbar arrays as an analog substrate for STDP, including a case study from Boyn et al. (2017). Finally, we return to Hebbian learning as a conceptual foundation ("fire together, wire together") and explore how simple local decentralized unsupervised Hebbian-like learning rules for conventional ANNs can also produce meaningful clustering behavior. We close with a discussion of future directions, including neuromodulation, synaptic adaptability, and recent research on using sleep-inspired replay to prevent catastrophic forgetting in spiking neural networks.
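
A minimal Python sketch of the two core mechanisms discussed here, assuming a discrete-time Euler update and a pair-based STDP rule (parameter values are illustrative, not taken from any specific platform):

```python
import math

def lif_step(v, i_in, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0):
    """One Euler step of a leaky integrate-and-fire neuron: the membrane
    potential leaks toward rest while integrating input current; crossing
    threshold emits a spike and resets."""
    v += dt * ((v_rest - v) / tau + i_in)
    if v >= v_thresh:
        return v_rest, True
    return v, False

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP: pre-before-post potentiates, post-before-pre
    depresses, with exponentially decaying sensitivity to the time lag."""
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * math.exp(-dt / tau_plus)
    return -a_minus * math.exp(dt / tau_minus)

v, spiked = 0.0, False
for t in range(100):
    v, s = lif_step(v, 0.06)   # constant suprathreshold drive
    spiked = spiked or s
print(spiked)  # True: the neuron eventually fires
```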

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/8mqjreoitin3qadzk9ofm/IEE598-Lecture7F-2025-04-24-Spiking_Neural_Networks_and_Neuromorphic_Computation-Notes.pdf?rlkey=l83a286aig0fpibafuvofr0hc&dl=0



Tuesday, April 22, 2025

Lecture 7E (2025-04-22): Learning without a Teacher – Unsupervised and Self-Supervised Learning

This lecture covers unsupervised and self-supervised learning, focusing on how both brains and machines discover structure without external labels or rewards (akin to non-associative learning). It begins with examples of unsupervised learning, including clustering, principal component analysis, and autoencoders, and then explores how biological systems like the olfactory pathway in insects organize complex sensory input into compressed, low-dimensional codes. We take a detailed look at the structure of the honeybee brain, examining how floral odors are transformed through the antennal lobe’s glomerular code into organized neural representations. We then transition into self-supervised learning (akin to latent learning) by introducing predictive coding and sensorimotor prediction, highlighting how brains use internal models to anticipate and correct sensory input. Finally, we close by discussing how modern AI systems like GPT (and BERT) leverage self-supervised objectives to build rich internal representations from raw data.
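
As a small illustration of the "compressed, low-dimensional codes" theme, here is a minimal PCA sketch in Python/NumPy (dimensionality reduction onto the top-k principal components; the olfactory analogy is conceptual only):

```python
import numpy as np

def pca_project(X, k):
    """Project rows of X (samples x features) onto the top-k principal
    components: center the data, take the SVD, keep k right singular
    vectors as the low-dimensional code directions."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T  # one k-dimensional code per sample

X = np.random.default_rng(0).normal(size=(100, 10))
print(pca_project(X, 2).shape)  # (100, 2)
```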

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/qwezfleqplmxtiobfpoew/IEE598-Lecture7E-2025-04-22-Learning_without_a_Teacher-Unsupervised_and_Self-Supervised_Learning-Notes.pdf?rlkey=4k5o8j8no3s9x7xc5di676qz3&dl=0




Thursday, April 17, 2025

Lecture 7D (2025-04-17): Reinforcement Learning – Active Learning in Rewarding Environments

In this lecture, we introduce reinforcement learning (RL) with motivations from animal behavior and connections to optimization metaheuristics such as Ant Colony Optimization (ACO) and Simulated Annealing (SA). We start by returning to a simple model of pheromone-trail-based foraging by ants (reminiscent of ACO) and formalize the components of the ants' actions in terms of quality tables over (state, action) pairs, as would be used in RL. We then introduce the quality function Q(s,a) and Q-learning, including two different methods of exploration (epsilon-greedy and softmax) with connections to how different species of ants respond to pheromones. We discuss Deep Q Networks (DQN's) as a connection to neural networks, and then move on to motivating an interpretation of the discount factor using Charnov's Marginal Value Theorem (MVT) from optimal foraging theory (OFT). We close with a discussion of the Matching Law from psychology and how a group of RL agents will converge to a social version of the Matching Law, the Ideal Free Distribution (IFD). Next time, we will cover unsupervised and self-supervised learning, approaches where learning happens even without reward.
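
A minimal tabular sketch of the Q-learning pieces described above (the epsilon-greedy and softmax exploration policies plus the one-step update); all parameter values are illustrative:

```python
import math, random

def epsilon_greedy(q_row, eps=0.1):
    """With probability eps explore uniformly; otherwise exploit the
    action with the highest estimated quality."""
    if random.random() < eps:
        return random.randrange(len(q_row))
    return max(range(len(q_row)), key=lambda a: q_row[a])

def softmax_action(q_row, temperature=1.0):
    """Sample an action with probability proportional to exp(Q/T);
    lower temperature means greedier behavior."""
    weights = [math.exp(q / temperature) for q in q_row]
    return random.choices(range(len(q_row)), weights=weights)[0]

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    """One tabular Q-learning step toward the TD target r + gamma * max Q(s')."""
    Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
```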

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/gyux79ukkcs0n7buizfr1/IEE598-Lecture7D-2025-04-17-Reinforcement_Learning-Active_Learning_in_Rewarding_Environments-Notes.pdf?rlkey=ix5qf4a5yz97ppsx97h6sphao&dl=0



Tuesday, April 15, 2025

Lecture 7C (2025-04-15): Recurrent Networks and Temporal Supervision

This lecture focuses on Recurrent Neural Networks (RNNs), which leverage delays within neural networks as storage elements that can be used to make inferences about temporal patterns. We start with an overview of coincidence detectors thought to be used for spatial localization and motion detection in the auditory (Jeffress model) and visual systems (Hassenstein–Reichardt model). This motivates the introduction of Time Delay Neural Networks (TDNNs), which generalize the use of delay lines used in the coincidence-detection circuits. We show how feed-forward TDNNs can be used to identify finite-duration patterns (where the number of neural elements necessary must scale up with the length of the pattern) and draw connections to Finite Impulse Response (FIR) filters. Then we shift to Recurrent Neural Networks and draw analogies to Infinite Impulse Response (IIR) filters that are able to identify patterns over very long durations of input while only using a few neurons (leveraging the implicit memory in the output state(s)). That brings us to Long Short-Term Memory (LSTM) (and the Gated Recurrent Unit, GRU), which is a popular form of RNN that has become less emphasized since the growth in the use of Transformers. We close by showing that randomly weighted, fixed RNN's can be used as "reservoirs" in Echo State Networks as feature extractors that spread out temporal patterns over space, allowing simple feed-forward decoders (possibly several of them sharing the same reservoir as a common resource) to do complex time-series analysis. These reservoirs can also be instantiated in other dynamical media, such as actual water reservoirs and even materials embedded within soft robots – each of these examples fits within the larger area of "Reservoir Machines" or "Reservoir Computing." Next time, we focus on Reinforcement Learning and its connections to animal foraging.
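
A bare-bones Echo State Network reservoir in Python/NumPy, as a sketch of the "random fixed RNN as feature extractor" idea (the spectral radius and leak rate are illustrative; a linear readout would then be fit to the collected states, e.g., by ridge regression):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n=200, spectral_radius=0.9):
    """Random, fixed recurrent weight matrix, rescaled so its spectral
    radius is below 1 (a common rule of thumb for the echo-state property)."""
    W = rng.normal(size=(n, n))
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
    return W

def run_reservoir(W, W_in, inputs, leak=0.3):
    """Drive the fixed reservoir with an input sequence and collect its
    states; the temporal pattern is 'spread out over space' in these states."""
    x = np.zeros(W.shape[0])
    states = []
    for u in inputs:
        x = (1 - leak) * x + leak * np.tanh(W @ x + W_in @ np.atleast_1d(u))
        states.append(x.copy())
    return np.array(states)

W = make_reservoir()
W_in = rng.normal(size=(200, 1))
states = run_reservoir(W, W_in, np.sin(np.linspace(0, 8 * np.pi, 200)))
print(states.shape)  # (200, 200): time steps x reservoir features
```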

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/u5qjwvqwwb6ok378kw2lm/IEE598-Lecture7C-2025-04-15-Recurrent_Networks_and_Temporal_Supervision-Notes.pdf?rlkey=gqhyq06wdzpw0m6t4fo1hfxqe&dl=0



Thursday, April 10, 2025

Lecture 7B (2025-04-10): Feeding Forward from Neurons to Networks

In this lecture, we present the foundations of supervised learning in feedforward neural networks, starting with the original inspiration from basic models of the neuron. We start with a review of the activation of a basic neuron and then map that to the Single Layer Perceptron (SLP), which we describe as a tool for binary classification of linearly separable data. Then, to extend these capabilities to data sets that are not linearly separable, we introduce Radial Basis Function Neural Networks (RBFNNs), whose Receptive Field Units (RFUs) act as a hidden layer that allows the RBFNN to do much more than the SLP. Thus, the RBFNN is our first example of a single-hidden-layer neural network. This gives us an opportunity to discuss Universal Approximation Theorems (UAT's), which help to explain why the RBFNN is so much more capable than the SLP. Despite its strengths, the RBFNN is not convenient to train. So, guided by UAT, we introduce the Multi-Layer Perceptron (MLP), a generalized version of the SLP that includes a hidden layer whose non-linear activation functions allow for universal approximation. We discuss how backpropagation can be used to train MLP's efficiently so long as activation functions are differentiable. We close with an introduction to Convolutional Neural Networks (CNNs) as an MLP that implements receptive fields very similar to those of the RBFNN but in a way that is more flexible/trainable. Next time, we will start discussing recurrent neural networks, their connection to biology, and how to train them.
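
As a sketch of backpropagation through a single-hidden-layer MLP with sigmoid activations and squared-error loss (a minimal hand-rolled example to show the chain rule at work, not how one would do this with a modern framework):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_step(x, y, W1, W2, lr=0.1):
    """One gradient step for a single-hidden-layer MLP. Differentiable
    activations let the output error be pushed back through each layer."""
    h = sigmoid(W1 @ x)                              # forward: hidden layer
    y_hat = sigmoid(W2 @ h)                          # forward: output layer
    delta_out = (y_hat - y) * y_hat * (1 - y_hat)    # dLoss/dz at the output
    delta_hid = (W2.T @ delta_out) * h * (1 - h)     # error one layer back
    W2 -= lr * np.outer(delta_out, h)
    W1 -= lr * np.outer(delta_hid, x)
    return W1, W2

rng = np.random.default_rng(1)
W1, W2 = rng.normal(size=(4, 2)), rng.normal(size=(1, 4))
W1, W2 = backprop_step(np.array([0.5, -0.2]), np.array([1.0]), W1, W2)
```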

Whiteboard notes for this lecture are available at:
https://www.dropbox.com/scl/fi/t23j4gupde7zyue83guz3/IEE598-Lecture7B-2025-04-10-Feeding_Forward_from_Neurons_to_Networks-Notes.pdf?rlkey=vpyd1htoswq54reb892clgch4&dl=0



Tuesday, April 8, 2025

Lecture 7A (2025-04-08): Neural Foundations of Learning

This lecture opens the unit on Neural Computation and Learning, which discusses the neurobiological underpinnings of learning in biological systems and attempts at developing similar capabilities in artificial systems with artificial neural networks (including spiking neural networks). In this lecture, learning, memory, and neuroplasticity are introduced alongside a basic model of the canonical neuron with axons, dendrites, and action potentials. Time is spent discussing the differences in the function and costs of working, short-term, and long-term memory as well as the different neuronal mechanisms underlying these three forms of memory. Then the basics of biological learning, from sensory adaptation to non-associative learning (habituation and sensitization) to associative learning/conditioning (classical and operant) to latent learning, are presented along with basic models of how these different forms of learning can be built up from the mechanisms of neuroplasticity discussed earlier. Finally, the different modalities of machine learning (unsupervised, self-supervised, reinforcement, and supervised learning) are presented and connected to the best-fitting biological learning paradigms (as well as potential neuronal mechanisms that could be used to build such capabilities).

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/mlw3u4yf19fbqnypjc39z/IEE598-Lecture7A-2025-04-08-Neural_Foundations_of_Learning-Notes.pdf?rlkey=5mhnsfmmgtiad050u69qov80x&dl=0



Thursday, April 3, 2025

Lecture 6C (2025-04-03): Distributed AI and Swarm Intelligence, Part 3 – Bacterial Foraging Optimization (BFO) and Particle Swarm Optimization (PSO)

In this lecture, we discuss two other examples of Swarm Intelligence applied to the numerical optimization space – Bacterial Foraging Optimization (BFO) and Particle Swarm Optimization (PSO). Both of these make use of a population moving through an unconstrained environment whose members respond both to their local experience of an optimization objective and to the positions of others around them. In BFO, which is inspired by the "run-and-tumble" behavior of certain kinds of flagellated bacteria, bacteria attempt to minimize exposure to a surrounding chemical (the optimization objective to minimize) while also responding to attractive and repellent chemicals emitted by all other bacteria (with an intensity that rolls off with distance from the emitter). Bacteria "run" in a consistent direction either for a specified number of algorithm steps or until the sensed cost (from both the optimization objective and the attractant and repellent signals) increases, at which point they "tumble" to a new random direction. After the "lifetime" of all bacteria passes, the half with the most cumulative exposure dies and is replaced by clones of the other half. At certain points, bacteria get randomly moved to other parts of the decision space regardless of how effective their current search has been. This process makes BFO computationally costly but very effective at global optimization and even tracking problems (so long as the reference objective to track is changing slowly enough). PSO has a much simpler implementation in which each self-propelled particle (SPP) has a position, a velocity, and an "inertia" that prevents rapid changes in heading. Here, each SPP in PSO remembers its own best position so far as well as the best position found by the swarm as a whole, and these two positions act like attractors that pull the particle toward them. Both PSO and BFO are swarm-intelligence algorithms, but neither incorporates the notion of "stigmergy" (the indirect coordination of individuals through modifications and responsiveness to the shared environment). ACO, which was discussed earlier, is an example of stigmergy (as it maintains a matrix of long-lasting chemical depositions that have their own state, independent of the "ants" that deposited them and the ants that respond to them).
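
A minimal sketch of the PSO update described above (the BFO loop is considerably longer); `f` here is a stand-in objective, and the inertia and attraction coefficients are common illustrative defaults:

```python
import random

def f(x):
    """Stand-in objective to minimize (sphere function); swap in any cost."""
    return sum(xi * xi for xi in x)

def pso_step(particles, w=0.7, c1=1.5, c2=1.5):
    """One PSO iteration: inertia plus random pulls toward each particle's
    personal best and the swarm's global best."""
    gbest = min((p['best_x'] for p in particles), key=f)
    for p in particles:
        for i in range(len(p['x'])):
            r1, r2 = random.random(), random.random()
            p['v'][i] = (w * p['v'][i]
                         + c1 * r1 * (p['best_x'][i] - p['x'][i])
                         + c2 * r2 * (gbest[i] - p['x'][i]))
            p['x'][i] += p['v'][i]
        if f(p['x']) < f(p['best_x']):
            p['best_x'] = p['x'][:]

swarm = [{'x': [random.uniform(-5, 5) for _ in range(2)], 'v': [0.0, 0.0]}
         for _ in range(20)]
for p in swarm:
    p['best_x'] = p['x'][:]
for _ in range(100):
    pso_step(swarm)
print(min(f(p['best_x']) for p in swarm))  # should approach 0
```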

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/2opklen1vwap6qx1v3vby/IEE598-Lecture6C-2025-04-03-Bacterial_Foraging_Optimization_BFO_and_Particle_Swarm_Optimization_PSO-Notes.pdf?rlkey=tn8ghxsrhxvuip1xq5drd8vte&dl=0



Tuesday, April 1, 2025

Lecture 6B (2025-04-01): Distributed AI and Swarm Intelligence, Part 2 – ACO and Introduction to Bacterial Foraging Optimization (BFO)

This lecture primarily focuses on describing the structure and operation of Ant System (AS), a precursor to the Ant Colony Optimization (ACO) metaheuristic. It discusses the key algorithmic components of the approach and briefly describes the ant trail-laying behavior that inspired it. At the end of the lecture, Bacterial Foraging Optimization (BFO) is introduced. A full treatment of BFO will be covered in the next lecture.
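
Two core pieces of Ant System in sketch form, assuming a TSP-style problem with a distance matrix: the probabilistic transition rule and the evaporation-plus-deposit pheromone update (the exponents, evaporation rate, and deposit constant are illustrative):

```python
import random

def choose_next_city(current, unvisited, tau, dist, alpha=1.0, beta=2.0):
    """Ant System transition rule: move to city j with probability
    proportional to tau[i][j]^alpha * (1/dist[i][j])^beta."""
    weights = [tau[current][j] ** alpha * (1.0 / dist[current][j]) ** beta
               for j in unvisited]
    return random.choices(unvisited, weights=weights)[0]

def update_pheromone(tau, tours, tour_lengths, rho=0.5, q=1.0):
    """Evaporate all trails, then let each ant deposit q/L on every edge
    of its tour (shorter tours deposit more per edge)."""
    n = len(tau)
    for i in range(n):
        for j in range(n):
            tau[i][j] *= (1.0 - rho)
    for tour, length in zip(tours, tour_lengths):
        for i, j in zip(tour, tour[1:] + tour[:1]):
            tau[i][j] += q / length
            tau[j][i] += q / length  # symmetric TSP: reinforce both directions
```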

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/oreo91lww327dcbtwq6e3/IEE598-Lecture6B-2025-04-01-ACO_and_Introduction_to_Bacterial_Foraging_Optimization_BFO-Notes.pdf?rlkey=66jx4pfv369kbcngicntv3lem&dl=0



Thursday, March 27, 2025

Lecture 5D/6A (2025-03-27): Simulated Annealing Wrap-up and Distributed AI and Swarm Intelligence, Part 1 - Introduction to Ant Colony Optimization (ACO)

In this lecture, we wrap up our coverage of Simulated Annealing (SA) and then shift to a new unit on Swarm Intelligence, with our first topic in that unit being Ant Colony Optimization (ACO). We start the lecture with a description of Monte Carlo sampling, which leverages the Law of Large Numbers to provide a method for approximating integrals using random sampling from a high-dimensional space. This allows us to introduce the Metropolis–Hastings algorithm, a Markov Chain Monte Carlo approach to sampling from arbitrary distributions (originally the Boltzmann distribution for the Metropolis algorithm, which was purely a Boltzmann sampler). We then show how to use the Metropolis algorithm within Simulated Annealing, which combines it with an annealing schedule that turns an MCMC sampler into an optimizer that starts out as an explorer and finishes as an exploiter. Simulated Annealing also helps import conceptual frameworks from physics (specifically statistical mechanics) into optimization and even Machine Learning more broadly. After finishing the coverage of SA, we introduce Ant System (AS), an early version of Ant Colony Optimization (ACO), which is a combinatorial optimization metaheuristic based on the trail-laying and recruitment behaviors of some ants. We will conclude ACO next time and move on to other Swarm Intelligence algorithms.
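
A compact sketch of Metropolis acceptance inside a simulated annealing loop (geometric cooling is just one possible annealing schedule, and `neighbor` is any problem-specific move generator):

```python
import math, random

def simulated_annealing(f, x0, neighbor, t0=1.0, cooling=0.999, steps=20000):
    """Metropolis acceptance under a geometric cooling schedule: always
    accept improvements; accept uphill moves with Boltzmann probability,
    which shrinks as the temperature falls (explorer -> exploiter)."""
    x, fx, t = x0, f(x0), t0
    for _ in range(steps):
        x_new = neighbor(x)
        f_new = f(x_new)
        if f_new <= fx or random.random() < math.exp(-(f_new - fx) / t):
            x, fx = x_new, f_new
        t *= cooling
    return x, fx

# Toy run on a 1-D multimodal objective:
f = lambda x: x * x + 10 * math.cos(3 * x)
nbr = lambda x: x + random.gauss(0, 0.5)
print(simulated_annealing(f, 5.0, nbr))
```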

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/b0wmj4a3lnbgyrs8rmts5/IEE598-Lecture5D_6A-2025-03-27-Simulated_Annealiung_Wrap_Up_and_Distributed_AI_and_Swarm_Intelligence_Part_1_Introduction_to_Ant_Colony_Optimization_ACO-Notes.pdf?rlkey=y49aa1x0oi0k53u5x5tv9kngs&dl=0



Tuesday, March 25, 2025

Lecture 5C (2025-03-25): Toward Simulated Annealing: Introduction to Boltzmann Sampling and Monte Carlo Integration

In this lecture, we continue our march toward Simulated Annealing by ensuring that we have the necessary foundations in information theory to support discussing thermodynamics and statistical mechanics. This allows us to introduce Boltzmann sampling, which will be used in Simulated Annealing, and the computational applications from physical chemistry that first inspired the creation of Boltzmann sampling. In particular, we introduce Monte Carlo integration and start to discuss how Boltzmann sampling can greatly reduce the number of samples necessary to estimate macroscopic variables of interest to physicists as they investigate thermodynamic equations of state. This also effectively allows us to introduce Markov Chain Monte Carlo (MCMC) methods, of which the Metropolis algorithm is recognized as the first popular MCMC method (and the foundation of Simulated Annealing).
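
A plain Monte Carlo integration sketch, to make the Law-of-Large-Numbers idea concrete (uniform sampling; the point of Boltzmann/importance sampling is that far fewer samples are needed when the sampler concentrates on the states that dominate the integral):

```python
import math, random

def mc_integrate(g, a, b, n=100_000):
    """Estimate the integral of g over [a, b] as (b - a) times the average
    of g at n uniform random points; the error shrinks like 1/sqrt(n)."""
    total = sum(g(random.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

print(mc_integrate(math.sin, 0.0, math.pi))  # ~2.0
```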

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/0nmyk60h6xtfa335x2bpd/IEE598-Lecture5C-2025-03-25-Toward_Simulated_Annealing-Introduction_to_Boltzmann_sampling_and_Monte_Carlo_integration-Notes.pdf?rlkey=ig1tbmmyenqpt7b6vwtnzbkem&dl=0



Thursday, March 20, 2025

Lecture 5B (2025-03-20): From Maximum Entropy (MaxEnt) Methods Toward Optimization by Simulated Annealing

In this lecture, we review the concept of entropy from physics and introduce information-theoretic connections to the increase in entropy and the erasure of latent information. We use that to motivate why maximum entropy probability distributions are the "least biased" choices given a set of constraints. We give a few examples of maximum entropy distributions and take a little time to introduce Maximum Entropy (MaxEnt) methods in machine learning and data science that have been successful in Natural Language Processing (NLP) as well as many other fields, from archeology to ecology. Equipped with maximum entropy, we introduce the Boltzmann distribution as a key tool in the implementation of Simulated Annealing, and we highlight connections between the Boltzmann distribution and "softmax" (as well as "softmin") from Machine Learning/AI. We will conclude our introduction of Simulated Annealing in the next lecture.
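
The Boltzmann-softmax connection in one small function: the Boltzmann distribution over states is exactly softmax applied to -E/T (a "softmin" over energies), with temperature controlling how sharply the lowest-energy states dominate:

```python
import math

def boltzmann(energies, T=1.0):
    """Boltzmann distribution over states: p_i proportional to exp(-E_i / T).
    This is softmax applied to -E/T, i.e., a 'softmin' over energies."""
    weights = [math.exp(-e / T) for e in energies]
    Z = sum(weights)                  # the partition function
    return [w / Z for w in weights]

print(boltzmann([1.0, 2.0, 3.0], T=0.5))   # sharply favors the lowest energy
print(boltzmann([1.0, 2.0, 3.0], T=50.0))  # nearly uniform at high temperature
```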

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/dv7gei6rbb0yltqekmhrb/IEE598-Lecture5B-2025-03-20-From_Maximum_Entropy_MaxEnt_Methods_Toward_Optimization_by_Simulated_Annealing-Notes.pdf?rlkey=jaifjaxazwn3u0efui0lthlky&dl=0



Tuesday, March 18, 2025

Lecture 5A (2025-03-18): Introduction to Simulated Annealing and Entropy

This lecture introduces Simulated Annealing (SA) and the physical background from thermodynamics, statistical mechanics, and information theory needed to understand its function. After describing a general outline of SA, we pivot to discussing entropy (in terms of microstates and macrostates) and why systems move toward macrostates (distributions of microstates) with higher entropy. We link this to bias in stochastic modeling, which sets us up to start discussing Maximum Entropy (MaxEnt) methods in the next lecture. We will eventually bring this discussion back to Simulated Annealing after briefly discussing both MaxEnt Methods and Markov Chain Monte Carlo (MCMC) methods first (starting in the next lecture).
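
A small worked example of the microstate/macrostate picture: for n coins, the macrostate "k heads" has W = C(n, k) microstates, and the Boltzmann entropy S = ln W (taking k_B = 1) is maximized by the half-heads macrostate:

```python
from math import comb, log

def macrostate_entropy(n, k):
    """Boltzmann entropy S = ln W (with k_B = 1) of the macrostate
    'k heads out of n coins', which has W = C(n, k) microstates."""
    return log(comb(n, k))

# The half-heads macrostate has by far the most microstates, which is
# why unbiased dynamics overwhelmingly end up near it:
for k in (0, 25, 50, 75, 100):
    print(k, round(macrostate_entropy(100, k), 2))
```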

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/dabgveu9swzj6d4dcveza/IEE598-Lecture5A-2025-03-18-Introduction_to_Simulated_Annealing_and_Entropy-Notes.pdf?rlkey=3dhycmnzij3fdd3fejes1wlvk&dl=0



Tuesday, March 4, 2025

Lecture 4C (2025-03-04): Niching Methods in Multi-Modal Optimization

In this lecture, we cover popular niching/niche-preserving methods for multi-modal optimization (which are also used in multi-objective optimization). These methods allow multi-modal optimization techniques to find as many local fitness peaks as possible while limiting over-representation (and maintaining high selective pressure) within each peak. These niching methods include fitness sharing methods (including variations inspired by k-means clustering), clearing and crowding methods, and the popular restricted tournament selection (RTS). Throughout the lecture, the computational complexity of each method is highlighted so that, at the end of the lecture (and in detail in the PDF notes linked below), the costs and benefits of each can be compared.
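
A sketch of the fitness-sharing computation, assuming a maximization problem, a user-supplied distance function on genotypes or phenotypes, and the standard triangular sharing kernel with radius sigma_share:

```python
def shared_fitness(pop, fitness, distance, sigma_share=0.1, alpha=1.0):
    """Fitness sharing: divide each individual's raw fitness by its niche
    count, so crowded peaks are penalized and sparsely occupied peaks
    remain attractive (maximization assumed). Note the O(n^2) distance
    computations, one source of the costs discussed in the notes."""
    shared = []
    for x in pop:
        niche_count = sum(max(0.0, 1.0 - (distance(x, y) / sigma_share) ** alpha)
                          for y in pop)   # includes x itself, so never zero
        shared.append(fitness(x) / niche_count)
    return shared

pop = [0.0, 0.01, 0.02, 1.0]
print(shared_fitness(pop, lambda x: 1.0, lambda a, b: abs(a - b)))
# the isolated individual at 1.0 keeps the highest shared fitness
```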

Whiteboard notes for this lecture (which include a list of computational complexities that was not covered in detail during the lecture) are available at:
https://www.dropbox.com/scl/fi/gdtk02v4gy9boxgj0vdbf/IEE598-Lecture4C-2025-03-04-Niching_Methods_in_Multi-Modal_Optimization-Notes.pdf?rlkey=gvfnxo20j9m76d4hfgvyhluzm&dl=0



Thursday, February 27, 2025

Lecture 4B (2025-02-27): From DGA/PGA to Niching Methods for Multi-Modal Optimization

In this lecture, we start by introducing distributed and parallel genetic algorithms (DGA/PGA) that not only have the potential to leverage parallel hardware resources but, perhaps surprisingly, can improve the performance of canonical genetic algorithms (GA's) through population-genetic effects that turn genetic drift in *meta-populations* into a source of diversity (instead of a force that reduces diversity, as it does in a single population). In particular, we introduce the idea of "shifting-balance theory" from Sewall Wright, which is one of the early models that attempts to understand the effect of limited gene flow on drift and selective pressure in complex population structures. This allows us to revisit the power and value of diversity in population-based evolutionary algorithms for optimization, and we use that to introduce the field of multi-modal optimization. Following this idea, we close with an introduction to niching methods for multi-modal optimization, starting with fitness sharing. We will pick up with this topic in our next lecture.
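
A minimal island-model (DGA/PGA-style) migration step, assuming each island is a list of list-encoded genomes and a ring topology; migration size and frequency are the knobs that control gene flow, and hence drift, between subpopulations:

```python
def migrate(islands, fitness, n_migrants=2):
    """Ring-topology migration for an island-model (distributed) GA:
    each island sends copies of its best genomes to the next island,
    replacing that island's worst genomes. Limited gene flow lets each
    island drift and explore semi-independently between migrations."""
    for k in range(len(islands)):
        source = sorted(islands[k], key=fitness, reverse=True)  # best first
        dest = islands[(k + 1) % len(islands)]
        dest.sort(key=fitness)                                  # worst first
        for m in range(n_migrants):
            dest[m] = source[m][:]                              # copy genomes
```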

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/dqfz3d2rienofebqpa30e/IEE598-Lecture4B-2025-02-25-From_DGA_PGA_to_Niching_Methods_for_Multi_Modal_Optimization-Notes.pdf?rlkey=fknh18uafdqh2ah2lqsfi7lcb&dl=0



Tuesday, February 25, 2025

Lecture 3C/4A (2025-02-25): Pareto Ranking and Moving from Communities to Meta-Populations (DGA, PGA)

In this lecture, we review the connections between Pareto optimality and concepts from both population genetics as well as community ecology. We discuss how "fitness" in multi-objective evolutionary algorithms (MOEA's) is about reducing the "distance" to the Pareto front, establishing a group of "non-dominated solutions" that are "equally fit." Thus, Pareto optimization is not only about maximizing fitness (arriving at the Pareto front) but also about completeness and diversity (i.e., maximizing spread of the non-dominated set across the Pareto front). We then introduce the notion of "Pareto ranking", which helps to formally establish a "fitness" concept that captures the "distance from Pareto front" even in cases where the Pareto front location is not known. We discuss how different notions of fitness come with different selective pressures and remind ourselves that high selective pressure can create challenges for maintaining diversity (and spread across the Pareto front). We then re-introduce fitness scaling (which we first discussed in the context of GA's) and fitness sharing/clearing. These are concepts that maximize the number of "niches" along the Pareto front while reducing the number of solution candidates within each niche. We discuss how NSGA-II and NSGA-III are dominant players in the MOEA space (and are well represented in off-the-shelf tools like gamultiobj in MATLAB and pymoo in Python). We then pivot to introducing the idea of "metapopulations" (populations of populations) and how this framework lets us use genetic drift as a tool for maintaining and exploring diversity (whereas before we only viewed drift as a force of reducing diversity). We will pick up next lecture building on this idea to introduce Distributed and Parallel GA's and tools for multi-modal optimization.
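
A sketch of Goldberg-style Pareto ranking, assuming all objectives are to be minimized: rank 1 is the non-dominated set, which is peeled off before the ranking repeats on the remainder (NSGA-II's fast non-dominated sort computes the same fronts more efficiently):

```python
def dominates(u, v):
    """u dominates v if it is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def pareto_ranks(objs):
    """Goldberg-style Pareto ranking: rank 1 is the non-dominated front;
    remove it and repeat on what remains."""
    remaining = set(range(len(objs)))
    ranks, r = {}, 1
    while remaining:
        front = {i for i in remaining
                 if not any(dominates(objs[j], objs[i]) for j in remaining)}
        for i in front:
            ranks[i] = r
        remaining -= front
        r += 1
    return ranks

print(pareto_ranks([(1, 4), (2, 2), (4, 1), (3, 3), (4, 4)]))
# {0: 1, 1: 1, 2: 1, 3: 2, 4: 3}
```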

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/kmqhx2po0o05c36vnh972/IEE598-Lecture3C_4A-2025-02-25-Pareto_Ranking_and_Moving_from_Communities_to_MetaPopulations-DGA_PGA-Notes.pdf?rlkey=dut1pok8mcusqq3sagljbz9yi&dl=0



Thursday, February 20, 2025

Lecture 3B (2025-02-20): Multi-Objective Genetic Algorithms: Weight/Vector-Based Approaches

In this lecture, we review the Pareto perspective of multi-objective optimization, emphasizing Pareto efficiency and the Pareto front. We draw connections to a blend of population genetics/evolution as well as community ecology (related to niche spaces and co-existence). We introduce Age–Fitness Pareto Optimization (AFPO) as a motivational example, as it uses the diversity-preserving aspects of multi-objective optimization to improve the evolvability of single-objective metaheuristics used in evolutionary robotics (although note that in the video, it is said that AFPO maximizes age/generations, but it actually minimizes age/generations; this is fixed in the notes linked below). From there, we describe several classical multi-objective genetic algorithm approaches that either explicitly incorporate weighted combinations of objectives or instead create sub-populations for different objectives and blend individuals from those subpopulations. This motivates the idea of Pareto ranking (a more modern approach to multi-objective evolutionary algorithms), which we will introduce (along with other diversity-preserving and diversity-enhancing mechanisms) next time.

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/nwfytnxrlmqzvdoftdqtc/IEE598-Lecture3B-2025-02-20-Multi_Objective_Genetic_Algorithms-From_Weight_Based_Approaches_to_Pareto_Ranking-Notes.pdf?rlkey=pq81xpd5abfgikq6twsj73yys&dl=0



Tuesday, February 18, 2025

Lecture 3A (2025-02-18): Multi-Criteria Decision Making, Pareto Optimality, and an Introduction to Multi-Objective Evolutionary Algorithms (MOEA's)

In this lecture, we introduce multi-objective optimization as a subset of multi-criteria decision making, and we use a game-theoretic example to motivate the discussion (as game theory is a subset of multi-objective optimization). We define the Nash equilibrium concept and draw connections to multi-agent reinforcement learning. That then lets us pivot to other solution concepts for more general multi-objective problems, such as the knapsack problem. We discuss weighting/scalarization approaches (both convex combinations and Chebyshev scalarization), satisficing approaches (common in dynamic programming examples), and target approaches, and we highlight that each of these still has a subjective degree of freedom remaining. We then motivate Pareto-efficient sets and the Pareto frontier, which is an approach that minimizes all subjectivity by producing a set of possible outcomes (instead of a single "optimal" solution). We then close by introducing multi-objective evolutionary algorithms, whose population structure is naturally suited to solving for (samples of) set-based optima.
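
The two scalarizations mentioned above in sketch form (each still carries a subjective degree of freedom, namely the weights and, for Chebyshev, the ideal point):

```python
def weighted_sum(objs, weights):
    """Convex-combination scalarization (weights >= 0 summing to 1);
    it cannot reach points on non-convex regions of the Pareto front."""
    return sum(w * f for w, f in zip(weights, objs))

def chebyshev(objs, weights, ideal):
    """Weighted Chebyshev scalarization: the worst weighted deviation from
    an ideal point; minimizing it can reach non-convex fronts."""
    return max(w * abs(f - z) for w, f, z in zip(weights, objs, ideal))

print(weighted_sum((2.0, 3.0), (0.5, 0.5)))           # 2.5
print(chebyshev((2.0, 3.0), (0.5, 0.5), (0.0, 0.0)))  # 1.5
```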

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/lpb8i4mp93iegdjyccly2/IEE598-Lecture3A-2025-02-18-Multi_Criteria_Decision_Making_Pareto_Optimality_and_Intro_to_Multi_Objective_Evolutionary_Algorithms-Notes.pdf?rlkey=hef5n0e77k3k7i0zqwg8cn0cs&dl=0



Thursday, February 13, 2025

Lecture 2C (2025-02-13): Immunocomputing: Genetic Approaches for Diverse Solution Portfolios

In this lecture, we introduce the field of "Immunocomputing", which studies information-processing mechanisms within the vertebrate immune system that help to maintain a diverse array of solutions for detecting and handling threats. Such dynamical systems must have low false-negative rates (i.e., detect as many threats as possible) while also having low false-positive rates (i.e., avoid classifying artifacts of normal operations as anomalous). We discuss the innate immune system, a highly conserved immune system that responds to general threats, and the adaptive/acquired immune system, which has immunological memory of past threats that it can use to deploy highly specific responses to those threats if they are encountered again in the future. This second system in particular has inspired Artificial Immune Systems (AIS), including negative selection (inspired by "central tolerance" in the thymus) and clonal selection. Both of these algorithmic approaches form and maintain a diverse portfolio of threat/anomaly classifiers as opposed to clustering around a single good solution, as is often the case in Genetic Algorithms. This sets us up for our next unit on multi-criteria decision making, and so we close with a basic introduction to game theory/multi-objective optimization and Nash equilibria. We will introduce Pareto optimality in the next lecture.
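
A sketch of the negative-selection idea, with the matching rule, candidate generator, and "self" set left as user-supplied pieces (real AIS implementations often use something like r-contiguous-bit matching; the Hamming-distance matcher below is a toy stand-in):

```python
import random

def train_detectors(self_set, n_detectors, match, new_candidate):
    """Negative selection: generate random candidate detectors and keep
    only those that match nothing in the 'self' set, analogous to central
    tolerance in the thymus."""
    detectors = []
    while len(detectors) < n_detectors:
        d = new_candidate()
        if not any(match(d, s) for s in self_set):
            detectors.append(d)
    return detectors

def is_anomalous(sample, detectors, match):
    """Anything matched by a surviving detector is classified as non-self."""
    return any(match(d, sample) for d in detectors)

# Toy instance: 8-bit strings with a Hamming-distance matching rule.
match = lambda a, b: sum(x != y for x, y in zip(a, b)) <= 1
gen = lambda: tuple(random.randint(0, 1) for _ in range(8))
detectors = train_detectors({(0,) * 8}, 20, match, gen)
print(is_anomalous((0,) * 8, detectors, match))  # False: self is tolerated
```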

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/ha6oe1rjwyzbwhnp4nsds/IEE598-Lecture2C-2025-02-13-Immunocomputing-Genetic_Approaches_for_Diverse_Solution_Portfolios-Notes.pdf?rlkey=qsljjcwafo03vrtoyc9u7de5d&dl=0



Tuesday, February 11, 2025

Lecture 2B (2025-02-11): Genetic Programming, Immunocomputing, and Artificial Immune Systems

In this lecture, we introduce Genetic Programming as an alternative to Evolutionary Programming that is also able to leverage the power of crossover (and thus the full Genetic Algorithm) when evolving executable structures, such as program code or decision trees. We start with a discussion of Linear Genetic Programming (LGP) on register-based languages (like assembly language), where programs can be represented as simple sequences of instructions. We then pivot to tree-based Genetic Programming (GP), which operates on Abstract Syntax Tree (AST) representations of more traditional programming code (e.g., Python, C++, etc.). We discuss how crossover is implemented on these ASTs, and we identify several applications, such as GenProg, which uses GP to do automated software repair and optimization. This sets us up to introduce immunocomputing in the next lecture (we were unable to get to it today due to time constraints), which will show how genetic mechanisms from the immune system can be used to generate and maintain useful diversity for applications such as cyber-security and general anomaly detection.
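
A minimal sketch of subtree crossover for tree-based GP, representing expressions as nested tuples like ('+', 'x', ('*', 'x', 'x')); real GP systems add depth limits and type constraints that are omitted here:

```python
import random

def subtree_paths(tree, path=()):
    """Yield the path to every node of an expression tree, where internal
    nodes are (op, child, ...) tuples and leaves are symbols or numbers."""
    yield path
    if isinstance(tree, tuple):
        for i, child in enumerate(tree[1:], start=1):
            yield from subtree_paths(child, path + (i,))

def get_at(tree, path):
    for i in path:
        tree = tree[i]
    return tree

def replace_at(tree, path, new):
    if not path:
        return new
    children = list(tree)
    children[path[0]] = replace_at(tree[path[0]], path[1:], new)
    return tuple(children)

def subtree_crossover(t1, t2):
    """GP crossover: graft a random subtree of t2 onto a random point of t1."""
    p1 = random.choice(list(subtree_paths(t1)))
    p2 = random.choice(list(subtree_paths(t2)))
    return replace_at(t1, p1, get_at(t2, p2))

print(subtree_crossover(('+', 'x', ('*', 'x', 'x')), ('-', ('*', 2.0, 'x'), 1.0)))
```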

Whiteboard notes for this lecture are available at:
https://www.dropbox.com/scl/fi/nonw19ergjmkk1vn3pf9g/IEE598-Lecture2B-2025-02-11-Genetic_Programming_Immunocomputing_and_Artificial_Immune_Systems-Notes.pdf?rlkey=bgt9guyhvu5qi79jnshxwycxa&dl=0



Thursday, February 6, 2025

Lecture 2A (2025-02-06): Evolutionary Computing from Optimization to Programming (Mutation, CMA-ES, and Evolutionary Programming)

In this lecture, we transition from the Genetic Algorithm to alternative approaches in Evolutionary Computing, particularly Evolution Strategies and Evolutionary Programming. We start by highlighting that the Genetic Algorithm's typical mutation operators are a bit crude in that they use the same mutation policy for every decision variable. Motivated by this, we introduce Evolution Strategies, which are alternative approaches that allow for different mutation rates along different dimensions of the decision space. This lets us introduce the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), a popular global optimization approach that adapts a multivariate normal sampling distribution to the local shape of the fitness objective. We then transition to more creative directions by showing how Evolutionary Programming, which removes crossover, allows for the evolution of Finite State Machines (FSM's), which are equivalent to computer programs that are automatically designed for a desired function. Next time, we will introduce Genetic Programming, which allows for evolving code directly using a more traditional GA (with crossover), and then we will transition to Immunocomputing, another genetically inspired approach to automated creativity.
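
A sketch of the self-adaptive, per-dimension mutation at the heart of Evolution Strategies (using a single learning rate tau as a common simplification; CMA-ES goes further by adapting a full covariance matrix rather than independent per-coordinate step sizes):

```python
import math, random

def es_mutate(x, sigmas):
    """Self-adaptive ES mutation: each coordinate carries its own step
    size, which is log-normally perturbed before being used to mutate
    the corresponding decision variable."""
    n = len(x)
    tau = 1.0 / math.sqrt(2.0 * n)
    common = tau * random.gauss(0.0, 1.0)   # perturbation shared by all dims
    new_sigmas = [s * math.exp(common + tau * random.gauss(0.0, 1.0))
                  for s in sigmas]
    new_x = [xi + s * random.gauss(0.0, 1.0) for xi, s in zip(x, new_sigmas)]
    return new_x, new_sigmas

x, sigmas = es_mutate([0.0, 0.0, 0.0], [1.0, 0.1, 0.01])
print(x, sigmas)  # large steps in dimension 0, tiny steps in dimension 2
```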

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/cunoa5zqvx29uj3pm2n8j/IEE598-Lecture2A-2025-02-06-Evolutionary_Computing_from_Optimization_to_Programming-Notes.pdf?rlkey=tpqz2o3u36obp28xf4ipw7se2&dl=0



Tuesday, February 4, 2025

Lecture 1F (2025-02-04): GA Wrap Up – Options for Selection, Crossover, Mutation, and Extensions

In this lecture, we wrap up our discussion of the basic Genetic Algorithm (GA) and its hyper-parameters. We focus on differences in the implementation of the population structure as well as different genetic operators for selection, recombination, and mutation (although we will have to finish the last few bits of our mutation discussion in the next lecture). Throughout the lecture, we emphasize the different roles of selection (and selective pressure), genetic drift, mutation, and even migration to some extent. This emphasizes the "evolution in a drift field" diagram that we introduced several lectures ago. Too much selective pressure can lead to premature convergence on local suboptima, and too little selective pressure can lead to solutions favored randomly due to genetic drift. Too much mutation can eliminate the possibility of fine-tuning solutions, and too little mutation is not enough of a guard against drift and selection purging all diversity from the population. Dynamically altering some of these parameters during the execution of a GA can provide a compromise among these tradeoffs.

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/y3r7mum9hjcnr7elh4eww/IEE598-Lecture1F-2025-02-04-GA_Wrap_Up-Options_for_Selection_Crossover_Mutation_and_Extensions-Notes.pdf?rlkey=1uekaz5jwug5467yng4jlxgdi&dl=0



Thursday, January 30, 2025

Lecture 1E (2025-01-30): The Basic Genetic Algorithm and Its Implementation

In this lecture, we reveal the basic architecture of the simple GA. We then start to introduce three common selection operators (fitness-proportionate selection, ranking selection, and tournament selection), crossover operators, and mutation operators, which we will finish in the next lecture. Throughout this lecture, the basic tension between natural/artificial selection and drift is emphasized -- there is a downside to having selection operators that have high selection pressure (because they get stuck in suboptimal traps), but very low selection pressure can cause small populations to stagnate due to drift.
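
Sketches of the common operators named above, assuming bit-string genomes and a fitness function to maximize (roulette-wheel selection additionally assumes non-negative fitness values):

```python
import random

def roulette_select(pop, fitness):
    """Fitness-proportionate (roulette-wheel) selection."""
    weights = [fitness(ind) for ind in pop]
    return random.choices(pop, weights=weights)[0]

def tournament_select(pop, fitness, k=3):
    """Tournament selection: larger k means higher selective pressure."""
    return max(random.sample(pop, k), key=fitness)

def one_point_crossover(p1, p2):
    """Swap the tails of two bit-string parents at a random cut point."""
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def bitflip_mutate(genome, p_m=0.01):
    """Flip each bit independently with probability p_m."""
    return [1 - b if random.random() < p_m else b for b in genome]

pop = [[random.randint(0, 1) for _ in range(10)] for _ in range(20)]
fit = lambda g: sum(g)   # toy "one-max" fitness
kids = one_point_crossover(tournament_select(pop, fit), roulette_select(pop, fit))
print(bitflip_mutate(kids[0]))
```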

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/wbvlvpj1wkft5903pg1ad/IEE598-Lecture1E-2024-01-30-The_Basic_Genetic_Algorithm_and_Its_Implementation-Notes.pdf?rlkey=x81e1h1lv42vkwt8lphcr7uqr&dl=0



Wednesday, January 29, 2025

Lecture 1D (2025-01-28): The Four Forces of Evolution and The Drift Barrier

In this lecture, we review the four forces of evolution -- mutation, migration/gene flow, genetic drift, and natural selection -- and the contribution that each one makes to either increasing or decreasing variance in a population over time. A more complete picture of evolution is thus a tense combination of these forces, each leading to different kinds of effects on the distribution of alleles (strategies) in a population. We discuss how the tendency for natural selection to produce higher-quality solutions is ultimately limited by the genetic drift that dominates when populations have low fitness diversity (low selective pressure), and we discuss how this sets up a speed–accuracy tradeoff between mutation (which counteracts drift in a way that does not require more time for convergence but makes it impossible to fine-tune solutions) and population size (which can fine-tune solutions but requires a longer time to converge to a good solution). Selection operators and evolutionary hyper-parameters should be chosen with these pressures and tradeoffs in mind.

Whiteboard notes for this lecture are archived at:
https://www.dropbox.com/scl/fi/09u3vd7l3m9h68pc5uyr9/IEE598-Lecture1D-2024-01-28-The_Four_Forces_of_Evolution_and_The_Drift_Barrier-Notes.pdf?rlkey=u4id8rcd9ou45og4eqxx5fu38&dl=0



Tuesday, January 21, 2025

Lecture 1C (2025-01-21): Population Genetics of Evolutionary Algorithms

In this lecture, we review the fundamentals of evolutionary (population-based) metaheuristic optimization algorithms (for engineering design optimization, EDO). This allows us to introduce the evolutionary notions of selective pressure and drift, two forces that independently drive changes in the frequency of strategies in populations over generations. Before getting into the concrete details of the basic Genetic Algorithm (GA) next time, we use the rest of this lecture to introduce terminology from evolutionary biology and population genetics in the life sciences. These terms (chromosomes, genes, alleles, phenotype, characters, traits, fitness) will often be used in discussions of evolutionary algorithms (EA) as well, and so it is useful to have some background in them.

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/0v7cr220534e0fsivf803/IEE598-Lecture1C-2024-01-21-Population_Genetics_of_Evolutionary_Algorithms-Notes.pdf?rlkey=o87azad7gjivlbd1ylgzt8s26&dl=0



Thursday, January 16, 2025

Lecture 1B (2025-01-16): Evolutionary Approach to Engineering Design Optimization

In this lecture, we formally introduce the Engineering Design Optimization (EDO) problem and several application spaces where it may apply. We then discuss classical approaches for using computational methods to solve this difficult optimization problem -- including both gradient-based and direct search methods. This allows us to introduce trajectory and local search methods (like tabu search and simulated annealing) and population-based methods (like the genetic algorithm, ant colony optimization, and particle swarm optimization). We then start down the path of exploring evolutionary algorithms, a special (but very large) set of population-based methods. In the next lecture, we will connect this discussion to population genetics and a basic Genetic Algorithm (GA).

The whiteboard notes taken during this lecture can be found at:
https://www.dropbox.com/scl/fi/ifkopelwknhhbj0wuz8ws/IEE598-Lecture1B-2024-01-16-Evolutionary_Approach_to_Engineering_Design_Optimization.pdf?rlkey=3xdr40xw5qjz62uqzrd373hh6&dl=0



Tuesday, January 14, 2025

Lecture 1A (2025-01-14): Introduction to the Course, Its Policies, and Its Motivation

In this first lecture of the semester, I introduce the course and its expectations and policies. After highlighting several sections of the syllabus, we pivot to discussing heuristics, metaheuristics, and hyperheuristics more broadly. This sets up the next lecture, which will introduce Evolutionary Algorithms for use as metaheuristics.

The whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/7ojfu7w0d58yhsb2zfhjb/IEE598-Lecture1A-2025-01-14-Introduction_to_Course_Policiesi_and_Motivations-Notes.pdf?rlkey=uw6va7dybspd2cvqtq1bui5f6&dl=0


