Thursday, January 15, 2026

Lecture 1B (2026-01-15): Evolutionary Approach to Engineering Design Optimization

In this lecture, we formally introduce the Engineering Design Optimization (EDO) problem and several application areas where it arises. We then discuss classical computational approaches for solving this difficult optimization problem, including both gradient-based and direct search methods. This allows us to introduce the categories of trajectory and local search methods (like tabu search and simulated annealing) and population-based methods (like the genetic algorithm, ant colony optimization, and particle swarm optimization). We then start down the path of exploring evolutionary algorithms, a special (but very large) set of population-based methods. In the next lecture, we will connect this discussion to population genetics and a basic Genetic Algorithm (GA).
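
As a concrete reference point for the trajectory methods mentioned above, here is a minimal simulated-annealing sketch in Python; the Gaussian neighbor move, geometric cooling schedule, and all parameter values are illustrative assumptions rather than anything prescribed in the lecture.

    import math
    import random

    # Minimize f over a list of real-valued design variables.
    def simulated_annealing(f, x0, step=0.1, T0=1.0, cooling=0.95, iters=1000):
        x, fx = x0[:], f(x0)
        best, fbest = x[:], fx
        T = T0
        for _ in range(iters):
            # Propose a neighboring design by perturbing one variable.
            y = x[:]
            y[random.randrange(len(y))] += random.gauss(0.0, step)
            fy = f(y)
            # Always accept improvements; accept worse moves with
            # probability exp(-(fy - fx)/T) (the Metropolis criterion).
            if fy <= fx or random.random() < math.exp(-(fy - fx) / T):
                x, fx = y, fy
                if fx < fbest:
                    best, fbest = x[:], fx
            T *= cooling  # geometric cooling: greedier as T shrinks
        return best, fbest

    # Example: minimize the 2-D sphere function.
    print(simulated_annealing(lambda v: v[0]**2 + v[1]**2, [3.0, -2.0]))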

The whiteboard notes taken during this lecture can be found at:
https://www.dropbox.com/scl/fi/kslpzf961mp4viwj557ed/IEE598-Lecture1B-2026-01-15-Evolutionary_Approach_to_Engineering_Design_Optimization-Notes.pdf?rlkey=xb0zoc1h74kbl5m1je7jb1p1c&dl=0



Tuesday, January 13, 2026

Lecture 1A (2026-01-13): Introduction to Course Policies and Motivations

This lecture introduces the main policies of the course and an outline of its content. We close with an introduction to the concepts of heuristics, metaheuristics, and hyperheuristics in the context of Engineering Design Optimization specifically and optimization more generally. We hint at the idea that nature provides templates for heuristics at all three levels, and this class aims to understand how these natural systems work and what can be taken from them in the design of heuristics for engineered systems.

Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/gza807hargomj414wo7fz/IEE598-Lecture1A-2026-01-13-Introduction_to_Course_Policies_and_Motivations-Notes.pdf?rlkey=bl3tx0oa1vbaz79vxahrxipmi&dl=0



Monday, April 28, 2025

Lecture 8A (2025-04-29): Complex Systems Models of Computation – Cellular Automata and Neighbors

This lecture introduces approaches for understanding (and building) computational systems that emerge out of Complex Adaptive Systems (CAS). It first motivates the idea that many interconnected parts, each relatively easy to understand in isolation, can come together in a system whose network of interactions leads to emergent global phenomena that cannot be predicted from the properties or behaviors of any individual component. We then focus on the role of space in the functions and properties that emerge at a global level. We do this through the example of the Interacting Particle System (IPS) known as the "voter model," which can be viewed as a model of neutral evolution in spatially structured populations. We show that the dual process for the voter model is a time-reversed set of coalescing random walkers, for which consensus in the model corresponds to whether the walkers are sure to coalesce into a single walker in the past of the dual process. This lets us apply Pólya's recurrence theorem to show that consensus is guaranteed (with probability 1) for 1- and 2-dimensional lattices but not for lattices of 3 or more dimensions. This implies that neutral evolution (for example) in a 3D spatial structure may not always lead to fixation on one genotype. We then pivot to introducing Elementary Cellular Automata (ECA) and describe a few rules that demonstrate how they work. We close the regular lecture by connecting CAs back to neural networks (the previous unit) and evolutionary algorithms (the first unit), thus introducing the Cellular Evolutionary Algorithm (cEA). We then extend the lecture a little longer than usual in order to demonstrate several ECAs in NetLogo, including how to combine two ECA rules to generate a reliable density classifier.
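
Since the ECA demonstrations were done in NetLogo, here is an equivalent minimal sketch in Python for readers following along at home; the rule number (110), ring size, and run length are illustrative choices, not necessarily the rules used in class.

    # One synchronous update of an Elementary Cellular Automaton on a ring.
    # Bit k of the 8-bit rule number gives the next state of a cell whose
    # 3-cell neighborhood (left, self, right), read as a 3-bit number, is k.
    def eca_step(cells, rule):
        n = len(cells)
        return [(rule >> ((cells[(i - 1) % n] << 2)
                          | (cells[i] << 1)
                          | cells[(i + 1) % n])) & 1
                for i in range(n)]

    cells = [0] * 31
    cells[15] = 1  # single seed cell in the middle
    for _ in range(15):
        print("".join(".#"[c] for c in cells))
        cells = eca_step(cells, 110)  # rule 110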

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/gfna3a2aj49fq2a4sovst/IEE598-Lecture8A-2025-04-29-Complex_Systems_Models_of_Computation-Cellular_Automata_and_Neighbors-Notes.pdf?rlkey=mo5jag4axljxrbpnq6wkk9egh&dl=0



Thursday, April 24, 2025

Lecture 7F (2025-04-24): Spiking Neural Networks and Neuromorphic Computation

This lecture explores how real and artificial brains learn using spikes. We begin by reviewing the structure and behavior of spiking neurons, focusing on the Leaky Integrate-and-Fire (LIF) model and the efficiency of sparse, event-driven temporal coding. We then introduce Spike-Timing-Dependent Plasticity (STDP), a biologically inspired learning rule that adjusts synaptic strength based on the relative timing of spikes. From there, we survey major neuromorphic hardware platforms (SpiNNaker, TrueNorth, and Loihi), highlighting their architectural differences and support for learning. We then examine memristor-based crossbar arrays as an analog substrate for STDP, including a case study from Boyn et al. (2017). Finally, we return to Hebbian learning as a conceptual foundation ("fire together, wire together") and explore how simple, local, decentralized, unsupervised Hebbian-like learning rules for conventional ANNs can also produce meaningful clustering behavior. We close with a discussion of future directions, including neuromodulation, synaptic adaptability, and recent research on using sleep-inspired replay to prevent catastrophic forgetting in spiking neural networks.
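
For readers who want a concrete handle on the LIF model, here is a minimal sketch in Python; the membrane time constant, threshold, reset value, and input current are illustrative assumptions, not parameters from the lecture.

    import numpy as np

    def lif(I, dt=1e-3, tau=20e-3, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
        """Integrate an input-current array I one step at a time; return the
        membrane-voltage trace and the time steps at which spikes occur."""
        v, trace, spikes = v_rest, [], []
        for t, i_t in enumerate(I):
            # Leaky integration: decay toward rest, driven by the input.
            v += (dt / tau) * (v_rest - v + i_t)
            if v >= v_thresh:   # threshold crossing emits a spike...
                spikes.append(t)
                v = v_reset     # ...followed by a reset
            trace.append(v)
        return np.array(trace), spikes

    # Constant super-threshold drive yields regular, event-like spiking.
    trace, spikes = lif(np.full(200, 1.5))
    print(len(spikes), "spikes; first few at steps", spikes[:5])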

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/8mqjreoitin3qadzk9ofm/IEE598-Lecture7F-2025-04-24-Spiking_Neural_Networks_and_Neuromorphic_Computation-Notes.pdf?rlkey=l83a286aig0fpibafuvofr0hc&dl=0



Tuesday, April 22, 2025

Lecture 7E (2025-04-22): Learning without a Teacher – Unsupervised and Self-Supervised Learning

This lecture covers unsupervised and self-supervised learning, focusing on how both brains and machines discover structure without external labels or rewards (akin to non-associative learning). It begins with examples of unsupervised learning, including clustering, principal component analysis, and autoencoders, and then explores how biological systems like the olfactory pathway in insects organize complex sensory input into compressed, low-dimensional codes. We take a detailed look at the structure of the honeybee brain, examining how floral odors are transformed through the antennal lobe’s glomerular code into organized neural representations. We then transition into self-supervised learning (akin to latent learning) by introducing predictive coding and sensorimotor prediction, highlighting how brains use internal models to anticipate and correct sensory input. Finally, we close by discussing how modern AI systems like GPT (and BERT) leverage self-supervised objectives to build rich internal representations from raw data.
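
As a small companion to the unsupervised-learning examples above, here is a minimal PCA sketch in Python via the SVD; the synthetic data and the choice of keeping two components are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    # 200 samples in 5-D that mostly vary along one hidden direction,
    # standing in for high-dimensional sensory input with low-D structure.
    latent = rng.normal(size=(200, 1))
    X = latent @ rng.normal(size=(1, 5)) + 0.1 * rng.normal(size=(200, 5))

    Xc = X - X.mean(axis=0)             # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = S**2 / np.sum(S**2)     # fraction of variance per component
    Z = Xc @ Vt[:2].T                   # compressed, low-dimensional code

    print("variance explained:", np.round(explained, 3))
    print("code shape:", Z.shape)       # (200, 2)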

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/qwezfleqplmxtiobfpoew/IEE598-Lecture7E-2025-04-22-Learning_without_a_Teacher-Unsupervised_and_Self-Supervised_Learning-Notes.pdf?rlkey=4k5o8j8no3s9x7xc5di676qz3&dl=0



Thursday, April 17, 2025

Lecture 7D (2025-04-17): Reinforcement Learning – Active Learning in Rewarding Environments

In this lecture, we introduce reinforcement learning (RL) with motivations from animal behavior and connections to optimization metaheuristics such as Ant Colony Optimization (ACO) and Simulated Annealing (SA). We start by returning to a simple model of pheromone-trail-based foraging by ants (reminiscent of ACO) and formalize the components of the ant's actions in terms of quality tables over (state, action) pairs, as would be used in RL. We then introduce the quality function Q(s,a) and Q-learning, including two different methods of exploration (epsilon-greedy and softmax) with connections to how different species of ants respond to pheromones. We discuss Deep Q Networks (DQNs) as a connection to neural networks, and then move on to motivating an interpretation of the discount factor using Charnov's Marginal Value Theorem (MVT) from optimal foraging theory (OFT). We close with a discussion of the Matching Law from psychology and how a group of RL agents will converge to a social version of the Matching Law, the Ideal Free Distribution (IFD). Next time, we will cover unsupervised and self-supervised learning, approaches where learning happens even without reward.
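
To make the Q-learning pieces concrete, here is a minimal tabular sketch in Python with epsilon-greedy exploration; the environment interface (actions(s), and step(s, a) returning (next_state, reward, done)) is a hypothetical stand-in, and all parameter values are illustrative assumptions.

    import random
    from collections import defaultdict

    def q_learning(env, episodes=500, alpha=0.1, gamma=0.9, eps=0.1):
        Q = defaultdict(float)  # quality table over (state, action) pairs
        for _ in range(episodes):
            s, done = env.reset(), False
            while not done:
                # Epsilon-greedy: explore with probability eps, else exploit.
                if random.random() < eps:
                    a = random.choice(env.actions(s))
                else:
                    a = max(env.actions(s), key=lambda a: Q[(s, a)])
                s2, r, done = env.step(s, a)
                # Update toward the one-step bootstrapped target, discounted
                # by gamma (the factor interpreted via the MVT above).
                target = r if done else r + gamma * max(
                    Q[(s2, a2)] for a2 in env.actions(s2))
                Q[(s, a)] += alpha * (target - Q[(s, a)])
                s = s2
        return Q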

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/gyux79ukkcs0n7buizfr1/IEE598-Lecture7D-2025-04-17-Reinforcement_Learning-Active_Learning_in_Rewarding_Environments-Notes.pdf?rlkey=ix5qf4a5yz97ppsx97h6sphao&dl=0



Tuesday, April 15, 2025

Lecture 7C (2025-04-15): Recurrent Networks and Temporal Supervision

This lecture focuses on Recurrent Neural Networks (RNNs), which leverage delays within neural networks as storage elements that can be used to make inferences about temporal patterns. We start with an overview of coincidence detectors thought to be used for spatial localization and motion detection in the auditory (Jeffress model) and visual (Hassenstein–Reichardt model) systems. This motivates the introduction of Time Delay Neural Networks (TDNNs), which generalize the delay lines used in the coincidence-detection circuits. We show how feed-forward TDNNs can be used to identify finite-duration patterns (where the number of neural elements must scale up with the length of the pattern) and draw connections to Finite Impulse Response (FIR) filters. We then shift to Recurrent Neural Networks and draw analogies to Infinite Impulse Response (IIR) filters, which are able to identify patterns over very long durations of input while using only a few neurons (leveraging the implicit memory in the output state(s)). That brings us to Long Short-Term Memory (LSTM) (and the Gated Recurrent Unit, GRU), a popular form of RNN that has become less emphasized since the growth in the use of Transformers. We close by showing that randomly weighted, fixed RNNs can be used as "reservoirs" in Echo State Networks, acting as feature extractors that spread temporal patterns out over space and allowing simple feed-forward decoders (possibly several of them sharing the same reservoir) to do complex time-series analysis. These reservoirs can also be instantiated in other dynamical media, such as actual water reservoirs and even materials embedded within soft robots; each of these examples fits within the larger area of "Reservoir Machines" or "Reservoir Computing." Next time, we focus on Reinforcement Learning and its connections to animal foraging.
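
Here is a minimal echo-state-network sketch in Python to make the reservoir idea concrete: the recurrent weights stay fixed and random, and only a linear readout is trained. The reservoir size, spectral radius, ridge parameter, and toy task (one-step-ahead prediction of a sine wave) are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    N = 100                                     # reservoir size
    W_in = rng.uniform(-0.5, 0.5, size=N)       # input weights (fixed)
    W = rng.normal(size=(N, N))                 # recurrent weights (fixed)
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

    u = np.sin(np.linspace(0, 20 * np.pi, 1000))      # input time series
    states, x = np.zeros((len(u), N)), np.zeros(N)
    for t, u_t in enumerate(u):
        # Untrained reservoir dynamics spread the temporal pattern over space.
        x = np.tanh(W_in * u_t + W @ x)
        states[t] = x

    # Train only a linear readout (ridge regression) to predict u[t+1].
    X, y = states[:-1], u[1:]
    W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
    print("readout MSE:", np.mean((X @ W_out - y) ** 2))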

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/u5qjwvqwwb6ok378kw2lm/IEE598-Lecture7C-2025-04-15-Recurrent_Networks_and_Temporal_Supervision-Notes.pdf?rlkey=gqhyq06wdzpw0m6t4fo1hfxqe&dl=0


