Sunday, April 5, 2026

Lecture 6B (2026-04-07): Bacterial Foraging Optimization and Ant Colony Optimization

Closing out the Swarm Intelligence unit, this lecture pivots from Particle Swarm Optimization (PSO) to two examples of stigmergic swarm optimization – Bacterial Foraging Optimization (BFO) and Ant Colony Optimization (ACO). Stigmergy is indirect coordination through modifications of the environment, as in leaving chemical trails or depositing chemical gradients, as opposed to direct communication between one individual and another. BFO solves continuous optimization problems similar to PSO but uses attractants and repellants to modify the environment rather than directly informing others about discovered solutions. The repellants in BFO, along with its reproduction and elimination–dispersal phases, help ensure that it searches globally over a space, in contrast to the more concentrated search of PSO. ACO also uses chemical coordination, but it was developed for combinatorial optimization problems. Although ACO was originally developed for the Traveling Salesman Problem (TSP), we discuss ACO first in a simpler layered model that better matches the foraging paths of real ants before briefly discussing the application to the TSP. We close with a brief mention of more complex recruitment dynamics in real ants, where trail laying plus noise can allow colonies to track changing feeder distributions, and of how one-on-one recruitment by some ants and bees can lead to different distributions of recruits across options (similar to changing the temperature in a softmax).

Interactive demonstrations referenced in this lecture can be found at:

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/fqm4jcfr1mkxsnz8ng61r/IEE598-Lecture6B-2026-04-07-Bacterial_Foraging_Optimization_and_Ant_Colony_Optimization-Notes.pdf?rlkey=q4omc6oyot9vrq8nnq3etx6k4&dl=0
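For readers wanting a concrete picture of the trail-based coordination described above, here is a minimal sketch of the two core ACO mechanics on a set of competing edges: probabilistic choice weighted by pheromone, and evaporation plus quality-proportional deposit. The function and parameter names (`alpha`, `rho`, `deposit`) are illustrative conventions, not notation from the lecture.

```python
import random

def choose_edge(pheromone, alpha=1.0, rng=random):
    """Pick an edge index with probability proportional to pheromone**alpha."""
    weights = [p ** alpha for p in pheromone]
    total = sum(weights)
    r = rng.uniform(0, total)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

def update_pheromone(pheromone, chosen, quality, rho=0.1, deposit=1.0):
    """Evaporate all trails, then reinforce the chosen edge by path quality."""
    for i in range(len(pheromone)):
        pheromone[i] *= (1.0 - rho)
    pheromone[chosen] += deposit * quality
    return pheromone
```

In the layered foraging model, iterating these two steps makes short, high-quality paths accumulate pheromone faster than long ones, so the colony's choices concentrate on them.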



Thursday, April 2, 2026

Lecture 5E/6A (2026-04-02): Parallel Tempering and Swarm Intelligence through Social Cohesion (Particle Swarm Optimization)

In this lecture, we finish our unit on physics-inspired ML and optimization by covering Parallel Tempering (PT), which combines multiple, parallel Metropolis–Hastings MCMC samplers, each with a different temperature (rather than using an annealing schedule, as in Simulated Annealing (SA)). We then pivot toward motivating why certain problems, like optimizing the high-dimensional weights of neural networks, may not be well served by the optimization metaheuristics discussed so far in the course. We use this as an opportunity to introduce Swarm Intelligence and the Particle Swarm Optimization (PSO) algorithm, which is particularly good at finding and exploring local optima in spaces with many similarly performing local optima. We explore how PSO was inspired by the Boids Model from Craig Reynolds (in computer graphics) and how it overlaps with the Vicsek model (from statistical physics). We also show how PSO really depends on social information but, under the influence of that social information, tends to very quickly purge the diversity in its solution candidates. Online interactive demonstration modules associated with this lecture can be found at:

Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/7jwuytadieywwilqazjq5/IEE598-Lecture5E_6A-2026-04-02-Parallel_Tempering_and_Particle_Swarm_Optimization-Notes.pdf?rlkey=p1pr7cs241okovkgjnevvhdp5&dl=0
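A minimal sketch of the PSO update discussed above, assuming the common inertia-weight form (the coefficient names `w`, `c1`, and `c2` are the usual conventions, not necessarily those used in lecture): each particle blends its current velocity with random pulls toward its personal best and the swarm's global best.

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=random):
    """One PSO update for a single particle, dimension by dimension:
    inertia plus attraction toward the particle's own best position
    and toward the best position found by the whole swarm."""
    new_x, new_v = [], []
    for xi, vi, pi, gi in zip(x, v, pbest, gbest):
        vi = (w * vi
              + c1 * rng.random() * (pi - xi)   # cognitive pull (own best)
              + c2 * rng.random() * (gi - xi))  # social pull (swarm best)
        new_v.append(vi)
        new_x.append(xi + vi)
    return new_x, new_v
```

The social term (weighted by `c2`) is what drives the rapid loss of candidate diversity mentioned above: every particle is pulled toward the same swarm best.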



Tuesday, March 31, 2026

Lecture 5D (2026-03-31): Metropolis–Hastings Markov Chain Monte Carlo and Simulated Annealing/Parallel Tempering

In this lecture, we start with a reminder that the Boltzmann–Gibbs distribution is the maximal entropy (MaxEnt) distribution of physical microstates when the average energy is fixed at a temperature at thermal equilibrium. We then move toward motivations where it would be useful to sample microstates from such a distribution. First, we introduce Monte Carlo methods for parameter estimation, and we pivot toward applications of Monte Carlo sampling for numerical integration. This leads us back to physics applications where integration using the Boltzmann–Gibbs is much more practical. This gives the opportunity to introduce Metropolis–Hastings Markov Chain Monte Carlo (MCMC) sampling, which allows for sampling from the Boltzmann–Gibbs and more. After discussing connections to importance sampling (from stochastic simulation) and Bayesian/MCMC statistics, we introduce Simulated Annealing, which combines Metropolis–Hastings sampling with an annealing schedule for temperature. We close with a very brief introduction to Parallel Tempering, which swaps out the annealing schedule for parallel MCMC samplers that periodically swap states based on their relative energies. We will pick up with Parallel Tempering in the next lecture.

On-line simulations referenced in this lecture can be found at:

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/s5dcgqrvm4qzz4y0fs64a/IEE598-Lecture5D-2026-03-31-Markov_Chain_Monte_Carlo_Metropolis_and_Simulated_Annealing_Parallel_Tempering-Notes.pdf?rlkey=v2m33lhh7sjhwogffotbyq3k7&dl=0
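The Metropolis acceptance rule and its use inside Simulated Annealing can be sketched compactly; the code below is an illustrative minimal version (parameter names assumed), not the exact formulation from the notes.

```python
import math
import random

def metropolis_accept(e_old, e_new, T, rng=random):
    """Metropolis rule: always accept downhill moves; accept uphill
    moves with probability exp(-dE/T)."""
    dE = e_new - e_old
    return dE <= 0 or rng.random() < math.exp(-dE / T)

def simulated_annealing(energy, propose, x0, T0=1.0, cooling=0.99, steps=1000):
    """Basic SA: Metropolis sampling with a geometric annealing schedule."""
    x, T = x0, T0
    best, best_e = x0, energy(x0)
    for _ in range(steps):
        cand = propose(x)
        if metropolis_accept(energy(x), energy(cand), T):
            x = cand
            if energy(x) < best_e:
                best, best_e = x, energy(x)
        T *= cooling  # cool slowly toward a greedy accept rule
    return best
```

Parallel Tempering keeps several such samplers at fixed temperatures and periodically swaps their states instead of cooling a single one.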



Thursday, March 26, 2026

Lecture 5C (2026-03-26): Boltzmann–Gibbs and other Maximum Entropy Distributions

In this lecture, we start by reviewing the formal definition of Shannon entropy/information in both its discrete and continuous (differential entropy) forms. We then transition to discussing several different MaxEnt distributions and the constraints with which they are associated. Ultimately, this brings us to the Boltzmann–Gibbs distribution and several applications of it. Throughout the lecture, different interactive demonstrations were used (and can be accessed directly at the links below).

Demonstrations referenced in this lecture can be found at:

Softmax Visualizer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/softmax/softmax_temperature_explorer.html

MaxEnt Explorer (SDM and NLP): https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/maxent/maxent_demo.html

Boltzmann Distribution via Random Exchanges of Conserved Quantity: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/boltzmann_maxent/boltzmann_maxent_random_exchange.html

Beta Distribution Explorer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/boltzmann_maxent/beta_spacings.html

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/zwdrab929yg47jm67vope/IEE598-Lecture5C-2026-03-26-Boltzmann-Gibbs_and_other_MaxEnt_Distributions-Notes.pdf?rlkey=3zka62o08gnw8z38r7lknjsqf&dl=0
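The Softmax Visualizer above explores an object that can be computed in a few lines: Boltzmann–Gibbs weights over a set of energies, with temperature controlling how concentrated the distribution is. A minimal sketch:

```python
import math

def softmax(energies, T=1.0):
    """Boltzmann–Gibbs weights over states: p_i proportional to exp(-E_i/T).
    Low T concentrates mass on the lowest-energy state; high T flattens
    the distribution toward uniform."""
    m = min(energies)  # shift by the min energy for numerical stability
    w = [math.exp(-(e - m) / T) for e in energies]
    Z = sum(w)  # partition function
    return [wi / Z for wi in w]
```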



Tuesday, March 24, 2026

Lecture 5B (2026-03-24): From Entropy to Maximum Entropy (MaxEnt) Methods

In this lecture, we pivot from our motivation from the Simulated Annealing optimization metaheuristic to thinking about how to sample from microstates within the physically inspired search process. This requires us to introduce the concept of entropy, a quantity that measures the number of microstates in a coarse-grained "macrostate" description of a system. Within the constraints of a system, we seek a distribution of microstates that represents only those constraints and no additional information. This is the maximal entropy distribution for those constraints. We provide a few formalities to make this a little more rigorous and then introduce Maximum Entropy (MaxEnt) methods, once popular in NLP, that remain popular in Species Distribution Modeling and archaeology. We will use MaxEnt to help us define the Boltzmann–Gibbs distribution (and Monte Carlo methods to sample from it) next time.

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/01pfdkj3d3ilk7wiyu79a/IEE598-Lecture5B-2026-03-24-From_Entropy_to_Maximum_Entropy_MaxEnt_Methods-Notes.pdf?rlkey=xfe1pie4sxu0qklg871czuc05&dl=0



Thursday, March 19, 2026

Lecture 4D/5A (2026-03-19): Distributed and Parallel GA's and Introduction to Simulated Annealing (SA)

In this lecture, we wrap up our units on evolutionary algorithms, closing with Distributed (Island Model) and Parallel Genetic Algorithms. We describe the basic population structure and migration approaches in Distributed GA's and explore whether Sewall Wright's shifting-balance theory (SBT) can explain DGA's success on certain landscapes. We then pivot to a new unit on physics-inspired ML and optimization approaches, where Simulated Annealing (SA) is one of the key topics. We introduce Simulated Annealing and discuss how hardware annealers can solve a broad set of combinatorial problems that can be QUBO (Quadratic Unconstrained Binary Optimization) encoded. We set up the basic vocabulary for the unit by introducing macrostate, microstate, temperature, and energy, and then we give an animated outline of how the basic SA algorithm works. We will use SA to motivate our explorations into entropy, MaxEnt, Boltzmann sampling, and more in future lectures in this unit.

Shifting-Balance Theory visualizer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/shifting_balance_theory/sbt_four_peaks.html

Simulated Annealing explorer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/simulated_annealing/simulated_annealing_demo.html

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/b8v78jmem4j9spju7sa8k/IEE598-Lecture4D_5A-2026-03-19-Distributed_and_Parallel_GAs_and_Introduction_to_Simulated_Annealing_SA-Notes.pdf?rlkey=qfh29uk7ckfb8aphn1k645r9e&dl=0
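As a tiny, hypothetical illustration of the QUBO encoding mentioned above: the constraint "exactly one of x0, x1" becomes the quadratic penalty (x0 + x1 - 1)^2, which for binary variables (where x^2 = x) reduces, after dropping a constant, to the matrix below. Its minimum-energy states are exactly the feasible assignments.

```python
def qubo_energy(Q, x):
    """Energy of a binary vector x under a QUBO matrix Q:
    E(x) = sum over i, j of Q[i][j] * x[i] * x[j]."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

# "Exactly one of x0, x1" as a penalty: (x0 + x1 - 1)^2
# = -x0 - x1 + 2*x0*x1 + 1 for binary x; drop the constant +1.
Q_one_hot = [[-1, 2],
             [0, -1]]
```

A hardware annealer (or SA itself) then searches for the lowest-energy bit vector under such a matrix.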



Tuesday, March 17, 2026

Lecture 4C (2026-03-17): From Niches to Meta-Populations: Toward Distributed and Parallel Genetic Algorithms (DGA/PGA)

In this lecture, we close out our discussion of "niching" diversity-preservation approaches for multi-modal and multi-objective evolutionary algorithms. We covered clearing/clustering algorithms in the past lecture (Lecture 4B), and so we start with crowding algorithms, including Restricted Tournament Selection (RTS), briefly introduce the Species Conserving Genetic Algorithm (SCGA), and then close with a discussion of islanding approaches. This sets up an introduction to distributed (and parallel) genetic algorithms, which we will start with in the next lecture.

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/ngcurzxer85i4oft1qn68/IEE598-Lecture4C-2026-03-17-From_Niches_to_Meta_Populations-Distributed_and_Parallel_GA-Notes.pdf?rlkey=x8mb0bn5d56lhwtjftjx323u6&dl=0



Thursday, March 5, 2026

Lecture 4B (2026-03-05): Niching Methods for Diversity Preservation in Multi-Objective and Multi-Modal Evolutionary Algorithms

In this lecture, we cover several of the different "niching methods" used for diversity preservation in both multi-objective and multi-modal evolutionary algorithms. The overall goal is to create "negative frequency-dependent selection" (or density dependence), which has the potential to stabilize different subpopulations coexisting with each other. We start by discussing how evolutionary models like Hawk–Dove ("Chicken") have mixed Nash equilibria that can represent stable coexistence of discrete phenotypes (due to negative frequency dependence). But then we pivot to habitat-selection models, with particular focus on the Ideal Free Distribution (IFD), as a better match for the diversity-preservation problem in MOEA's and MMEA's. That allows us to introduce "fitness sharing" (which matches the IFD very closely) and various other fitness-modification methods that each have different computational costs and diversity benefits. We close by introducing selection-based approaches, such as breaking tournament-selection ties by crowding distance.

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/d2ucw5j4lqtlj6hzue2mk/IEE598-Lecture4B-2026-03-05-Niching_Methods_for_Diversity_Preservation_in_MOEA_and_MMO-Notes.pdf?rlkey=rvvs5xy2qmbva7xl1glsmhnja&dl=0
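A minimal sketch of the fitness-sharing idea described above (the kernel and the parameter names `sigma` and `alpha` follow common conventions and are assumptions here): raw fitness is divided by a niche count, so individuals in crowded regions look less fit, creating the negative frequency dependence that stabilizes coexisting subpopulations.

```python
def shared_fitness(pop, fitness, sigma=1.0, alpha=1.0,
                   dist=lambda a, b: abs(a - b)):
    """Fitness sharing: divide each individual's raw fitness by its
    niche count, computed from a triangular sharing kernel over all
    individuals within radius sigma."""
    out = []
    for x in pop:
        niche = 0.0
        for y in pop:
            d = dist(x, y)
            if d < sigma:
                niche += 1.0 - (d / sigma) ** alpha  # closer neighbors count more
        out.append(fitness(x) / niche)  # niche >= 1 (x always counts itself)
    return out
```

With two identical individuals at one point and one individual far away, the crowded pair splits its fitness while the loner keeps all of it.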



Tuesday, March 3, 2026

Lecture 3D/4A (2026-03-03): From Multi-Objective to Multi-Modal Optimization

In this lecture, we wrap up our discussion of Pareto ranking for Multi-Objective Evolutionary Algorithms (MOEA's) and then introduce the topic of diversity-preservation methods ("niching" methods) that maintain diversity across the Pareto frontier. We then pivot to introducing Multi-Modal Optimization (MMO), which also requires "niching" methods to populate the different peaks of the optimization objective. We close by starting to set up background that motivates the particular designs of niche-preserving methods.

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/blmmcyw7e0uf2ivjh1lkk/IEE598-Lecture3D_4A-2026-03-03-From_Multi_Objective_to_Multi_Modal_Optimization-Notes.pdf?rlkey=q95a982to30ovnv6izej1cd5y&dl=0



Thursday, February 26, 2026

Lecture 3C (2026-02-26): Multi-Objective EA’s from Linearization to Pareto Ranking and Beyond

In this lecture, we review the concept of Pareto optimality (Pareto improvements, Pareto efficiency, Pareto-efficient sets of non-dominated solutions, and the Pareto frontier/front) and then start laying the foundations of building multi-objective evolutionary algorithms to find the Pareto front. This starts with introducing historical MOEA's – like WBGA-MO, RWGA, and VEGA – which are all based on a linear scalarization of multi-objective problems. We then show that these methods not only have trouble promoting diversity along the discovered samples of the Pareto frontier, but they completely miss non-convex portions of the Pareto frontier. To address these issues, we introduce Pareto ranking (from SPGA, MOGA, and NSGA) and the general concept of the community ecology of multi-objective optimization (where fitness is inversely proportional to distance to the Pareto frontier, and diversity is maintained in coexisting "niches" along the community of similar-fitness individuals). We will pick up with this idea and transition to multi-modal optimization (and the various diversity-preserving "niching" methods that enable it) next time.

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/5umaoz9i6bt1w0fkeuw37/IEE598-Lecture3C-2026-02-26-Multi_Objective_EA-s_from_Linearization_to_Pareto_Ranking_and_Beyond-Notes.pdf?rlkey=2cixolbjafhd61r88055rxr3r&dl=0
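The non-domination test at the heart of Pareto ranking is short enough to sketch directly (minimization assumed): rank-1 individuals are those no other candidate dominates, and ranking schemes such as NSGA repeatedly peel off such fronts from the remaining population.

```python
def dominates(a, b):
    """a Pareto-dominates b (minimization): a is no worse in every
    objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def nondominated_front(points):
    """Rank-1 set: the points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```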



Tuesday, February 24, 2026

Lecture 3B (2026-02-24): Multi-Objective Optimality and Introduction to Multi-Objective Evolutionary Algorithms

In this lecture, we start with a review of equilibrium and efficiency/dominance concepts from game theory – specifically the Nash equilibrium, Pareto efficiency, and payoff and risk dominance. We apply these for both a discrete game (the Stag Hunt) and a generic continuous game. That allows us to introduce Variational Inequalities as a more general numerical problem set that includes the Nash equilibrium as a member (for continuous games). We then pivot to Multi-Objective Optimization (MOO) and motivate the concept of Pareto improvements, Pareto efficiency, Pareto-efficient sets, and Pareto frontiers/fronts. We close with discussions about scalarization approaches to solve MOO problems, including linear scalarization, targets, satisficing, and Chebyshev/weighted minimax. We discuss problems with these approaches and then hint that we will move forward toward fitness concepts that do not require weighting/scalarization. We will pick up with that point in the next lecture, where we introduce several different forms of Multi-Objective Evolutionary Algorithms (and Pareto ranking).

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/hqfni3lxa8z09c8kzkfi9/IEE598-Lecture3B-2026-02-24-Multi_Objective_Optimality_and_Intro_to_Multi_Objectivce_Genetic_Algrithms-Notes.pdf?rlkey=si10th7dvglfj25wcv2kyoqrh&dl=0
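Two of the scalarizations discussed above are easy to state side by side; the code below is an illustrative sketch (the weight and ideal-point arguments are assumptions). The weighted sum misses non-convex portions of the Pareto front, while the Chebyshev/weighted-minimax form can also reach non-convex points.

```python
def linear_scalarize(objs, weights):
    """Weighted sum of objectives (minimization)."""
    return sum(w * f for w, f in zip(weights, objs))

def chebyshev_scalarize(objs, weights, ideal):
    """Weighted minimax (Chebyshev): minimize the worst weighted
    deviation from an ideal point."""
    return max(w * abs(f - z) for w, f, z in zip(weights, objs, ideal))
```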



Thursday, February 19, 2026

Lecture 2D/3A (2026-02-19): From Immunocomputing to Games and Multi-Objective Optimization (MOO)

In this lecture, we start with a description of two major classes of Artificial Immune System strategies – negative selection and clonal selection – along with the biological processes in the acquired/adaptive/specific immune system in vertebrates that inspired these algorithms. We focus on how both approaches maintain useful diversity, and we frame clonal selection as a form of multi-modal optimization (which will be discussed in more detail in Unit 4). This allows us to pivot to multi-objective optimization. In the last section of the lecture, we start outlining fundamentals of thinking about systems with multiple competing objectives – focusing first on game theory and the concept of the Nash equilibrium. Next time, we will define Pareto efficiency and start to introduce classical algorithms for finding Pareto-efficient sets.

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/p8ly7l88ouvyc7m60lq8v/IEE598-Lecture3A-2026-02-19-Multicriteria_Decision_Making_Pareto_Optimality_and_Intro_to_Multiobjective_Evolutionary_Algorithms_MOEAs-Notes.pdf?rlkey=yhb60lgihm2mv0w1eid1nxas1&dl=0



Tuesday, February 17, 2026

Lecture 2C (2026-02-17): Genetic Programming and Artificial Immune Systems

In this lecture, we review the core principles of Genetic Programming, starting with Linear Genetic Programming (LGP) and transitioning to tree-based Genetic Programming (GP) that incorporates Abstract Syntax Trees (AST's) as its genotypes. We cover the different mutation operators and selection operators for these forms of GP and typical application spaces that use GP. We then close the lecture with an introduction to Immunocomputing and Artificial Immune Systems (AIS), which mimic the acquired/adaptive/specific immune system of (jawless) vertebrates. We will continue our discussion of immunocomputing/AIS in the next lecture and use it to pivot to multi-objective optimization (the subject of the next unit).

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/men7ns3f44783gqk20b64/IEE598-Lecture2C-2026-02-17-Genetic_Programming_and_Artificial_Immune_Systems-Notes.pdf?rlkey=yyjbn4sm6wssd8no5qvnqs8ft&dl=0



Thursday, February 12, 2026

Lecture 2B (2026-02-12): Evolutionary and Linear Genetic Programming

In this lecture, we start by reviewing the strengths and weaknesses of the GA, CMA-ES (with adaptive restarts), and Stochastic Gradient Descent (SGD). This paints a picture of complementarity, not competition. Each algorithm fits within its own niche, and the algorithms can be used together to help compensate for weaknesses and find better solutions more efficiently. Whereas CMA-ES and SGD require continuous-valued decision spaces, the GA does not, and so we then pivot to thinking about how a GA might be used to write software (where code comes from a discrete decision space off limits to CMA-ES and SGD). We start this exploration with an introduction to the Evolutionary Programming of the 1960's -- which focuses on the evolution of populations of Finite State Machines (FSM's) using discrete mutation and no crossover. We then think about how GA's with crossover might be applied to lines of code. We start with Linear Genetic Programming (LGP), which restricts the programming language to one without multi-line control/logic blocks (assembly languages fit within this class). We demonstrate how One-Instruction Set Computers (like Subtract and Branch if Negative, SBN) are well suited for Linear Genetic Programming (with both mutation and crossover), and we talk about how the presence of "introns" can speed up convergence in LGP (with possible implications for understanding the presence of introns in biological systems/DNA). In the next lecture, we will complete the story with Genetic Programming based on abstract syntax trees (AST's) and then introduce Artificial Immune Systems.

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/qsv6necfvgkny8rrxqvuw/IEE598-Lecture2B-2026-02-12-Evolutionary_and_Linear_Genetic_Programming-Notes.pdf?rlkey=aux54wr6rnju9o4nlsvcptft0&dl=0
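The SBN one-instruction machine mentioned above fits in a few lines. This interpreter assumes one common SBN variant (instruction (a, b, c) means mem[b] -= mem[a], then branch to c if the result is negative), since encodings differ across sources. Any list of such triples is a syntactically valid program, which is part of what makes mutation and crossover safe in LGP; the hypothetical example program adds two numbers by negating twice.

```python
def run_sbn(program, mem, max_steps=1000):
    """Interpret a Subtract-and-Branch-if-Negative (SBN) program.

    Each instruction is a triple (a, b, c): mem[b] -= mem[a]; if the
    result is negative, jump to instruction c, else fall through.
    Execution halts when the program counter leaves the program (or
    after max_steps, to bound evolved programs that loop forever).
    """
    pc, steps = 0, 0
    while 0 <= pc < len(program) and steps < max_steps:
        a, b, c = program[pc]
        mem[b] -= mem[a]
        pc = c if mem[b] < 0 else pc + 1
        steps += 1
    return mem

# With mem = [x, y, 0, 0], compute x + y into mem[3] by first building
# -(x + y) in the scratch cell mem[2] and then subtracting it from mem[3].
add_program = [(0, 2, 1),   # mem[2] -= x  -> -x   (jump target is next line either way)
               (1, 2, 2),   # mem[2] -= y  -> -(x + y)
               (2, 3, 3)]   # mem[3] -= mem[2] -> x + y; pc leaves program, halt
```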



Tuesday, February 10, 2026

Lecture 2A (2026-02-10): Evolution Strategies and Covariance Adaptation (ES, NES, CMA-ES)

In this lecture, we introduce a fundamentally different family of evolution-inspired search algorithms, the Evolution Strategies (ES). Rather than treating a population as a set of hypothetical good solutions that must be retained or discarded, as in the GA, the Evolution Strategies adapt the search process itself by allowing different decision variables to be able to mutate using different step sizes, and the resulting adaptive step sizes reflect the curvature of the underlying fitness landscape. We discuss how this heuristic idea was formalized in Natural Evolution Strategies (NES), which leverage the information-theoretic natural gradient to learn productive directions to climb, and then how that was made more practical and effective via Covariance Matrix Adaptation Evolution Strategy (CMA-ES). We close with a discussion of how CMA-ES facilitates adaptive restarts, making CMA-ES not only a good tool for high-resolution search of a single fitness peak but also a candidate for global optimization – seeking out new peaks in a sort of "depth-first" order (in contrast to the "breadth-first" order of the GA). We then put the GA, ES, and conventional (stochastic) gradient descent together as complementary tools for complex optimization.

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/tnq6ol9soph5cxjcqnf73/IEE598-Lecture2A-2026-02-10-Introduction_to_Evolution_Strategies_and_CMA-ES-Notes.pdf?rlkey=9brna7e54fkh9ljf00uexxmjk&dl=0
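Long before CMA-ES, the simplest Evolution Strategy already adapted its step size. The sketch below shows a (1+1)-ES with the classic 1/5th-success rule (the constants here are chosen so that a 1/5 success rate leaves the step size unchanged, an assumption of this sketch); CMA-ES generalizes this single adaptive step size to a full covariance matrix.

```python
import math
import random

def one_plus_one_es(f, x, sigma=1.0, iters=800, rng=random):
    """(1+1)-ES with a 1/5th-success rule: one parent, Gaussian
    mutation, and a step size that grows after successful mutations
    and shrinks after failures (growth is 4x the shrink in log scale,
    so a 1/5 success rate is the fixed point)."""
    fx = f(x)
    for _ in range(iters):
        y = [xi + sigma * rng.gauss(0, 1) for xi in x]
        fy = f(y)
        if fy <= fx:
            x, fx = y, fy
            sigma *= math.exp(1.0 / 3.0)    # success: widen the search
        else:
            sigma *= math.exp(-1.0 / 12.0)  # failure: narrow it
    return x, fx
```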



Thursday, February 5, 2026

Lecture 1H (2026-02-05): Genetic Algorithm (GA) Hyperparameter Tuning

In this lecture, we complete our coverage of the Genetic Algorithm (GA) by discussing how to improve the function of selection operators and, in general, how to tune hyperparameters to improve the performance of the GA for a given problem. We start with a discussion of the effects of Stochastic Uniform Sampling (SUS) over roulette-wheel selection and how the resulting drop in variance in the number of parents eliminates the fixation-causing effects of drift while still leaving a barrier on precision in place. We also discuss how to use exponential ranking in ranking selection to gain better control over selective pressure, but we mention that tournament selection is ultimately a computationally stronger choice when rank-based selection is desired. We discuss a framework that puts the 5 major hyperparameters (M, R, E, Pm, and Pc [as well as selection pressure]) on one graph to help guide the choice of hyperparameters based on context. We draw connections between the two types of selection operator (fitness-proportionate and rank-based) and Generalized Linear Modeling (GLM; continuous and ordinal response variables) and discuss connections between the number of parents and the number of samples/statistical power in a GLM. Finally, we close with a brief introduction to Evolution Strategies (ES), which will be the topic we start with in the next unit.

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/8z1we6jycmealo2ww2ik0/IEE598-Lecture1H-2026-02-05-GA_Hyperparameter_Tuning-Notes.pdf?rlkey=gosm6672x8v9c66zdf6ho09mb&dl=0
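A minimal sketch of SUS as described above (one random spin, then n equally spaced pointers over the cumulative fitness wheel): unlike repeated roulette-wheel draws, each individual's selection count can differ from its expectation by less than one, which is the variance reduction discussed in the lecture.

```python
import random

def sus_select(fitnesses, n, rng=random):
    """Stochastic Uniform Sampling: draw one offset in [0, total/n),
    then sweep n equally spaced pointers across the cumulative
    fitness wheel, returning the index selected by each pointer."""
    total = sum(fitnesses)
    step = total / n
    start = rng.uniform(0, step)  # single source of randomness
    picks, acc, i = [], fitnesses[0], 0
    for k in range(n):
        pointer = start + k * step
        while pointer > acc:  # advance to the slice containing this pointer
            i += 1
            acc += fitnesses[i]
        picks.append(i)
    return picks
```

With fitnesses [1, 1, 2] and n = 4, every possible spin yields counts of 1, 1, and 2 picks, matching the expectations exactly.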



Tuesday, February 3, 2026

Lecture 1G (2026-02-03): GA Wrap Up – Crossover, Mutation, & Tuning GA Operator Choices

In this lecture, we almost finish our discussion of the canonical Genetic Algorithm (GA) by covering different crossover and mutation operator choices. We discuss how mutation and crossover rates might change over time. We then end by returning to the selection operator to introduce Stochastic Uniform Sampling, a stratified sampling approach that reduces the variance in the number of offspring selected per high-fitness individual without affecting the mean. Next time, we will discuss how the five major hyperparameters and selection pressure work together to determine the effectiveness of the GA for a particular objective. We will also transition to Unit 2, where we will start by introducing ES and CMA-ES.

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/rvv4bkrsiz4ixrhgitt39/IEE598-Lecture1G-2026-02-03-GA_Wrap_Up-Crossover_Mutation_and_Tuning_GA_Operator_Choices-Notes.pdf?rlkey=vcgleumzuv85moizkqibdnjhw&dl=0



Thursday, January 29, 2026

Lecture 1F (2026-01-29): Operators of the Genetic Algorithm

In this lecture, we dive deeper into the basic Genetic Algorithm by describing the three major operators in any GA iteration – the selection operator, the crossover operator, and the mutation operator. We describe different forms of selection (fitness proportionate, ranking, and tournament) and how they vary in their ability to control selection pressure. We also discuss several forms of crossover (from single point to multi-point to uniform to taking random linear combinations) and their function as they move individuals around fitness landscapes. We will finish with the mutation operator next time, but that content is also covered in the pre-written slide notes linked below. After discussing the mutation operator and some optimizations of the GA itself, we will transition next to evolutionary computing/programming.

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/sa96xjlv5d8q3mc8l0bde/IEE598-Lecture1F-2026-01-29-Operators_of_the_Genetic_Algorithm-Notes.pdf?rlkey=54eow2a79g1437r7g19be7gjy&dl=0
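Of the selection forms listed above, tournament selection is the simplest to sketch: sample k individuals uniformly at random and keep the best. Only relative rank matters (not the scale of fitness), and the tournament size k is the single knob controlling selection pressure.

```python
import random

def tournament_select(pop, fitness, k=2, rng=random):
    """Tournament selection: draw k contestant indices uniformly
    (with replacement) and return the index of the fittest one."""
    contenders = [rng.randrange(len(pop)) for _ in range(k)]
    return max(contenders, key=lambda i: fitness(pop[i]))
```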



Tuesday, January 27, 2026

Lecture 1E (2026-01-27): Structure of the Basic Genetic Algorithm

In this lecture, we reveal the basic architecture of the simple GA. We start with defining how to concretely implement chromosomes/genomes, genes, alleles, characters, and traits numerically within an Engineering Design Optimization context. We then move on to a general definition of multi-objective fitness (which we will return to in Unit 3 when we study multi-objective evolutionary algorithms) and show how fitness functions can be scaled not only to meet the assumptions on fitness functions but also to adjust selective pressure as desired. We close with a flowchart of the steps of a basic genetic algorithm, highlighting operators (selection, crossover, and mutation) that we will discuss in detail in the next lecture.

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/zdvfjtp88fl8omly7sd6u/IEE598-Lecture1E-2026-01-27-Structure_of_the_Basic_Genetic_Algorithm-Notes.pdf?rlkey=f0t4apfkyj0v9ketqxoy1xew1&dl=0



Thursday, January 22, 2026

Lecture 1D (2026-01-22): The Four Forces of Evolution and The Drift Barrier

In this lecture, we review the four forces of evolution -- mutation, migration/gene flow, genetic drift, and natural selection -- and the contribution that each one makes to either increasing or decreasing variance in a population over time. A more complete picture of evolution is thus a tense combination of these forces, each of them leading to different kinds of effects on the distribution of alleles (strategies) in a population. We discuss the so-called "drift barrier" -- how the tendency for natural selection to produce higher-quality solutions is ultimately limited by genetic drift, which dominates when populations have low fitness diversity (low selective pressure) -- and we discuss how this sets up a speed–accuracy tradeoff between mutation (which counteracts drift in a way that does not require more time for convergence but makes it impossible to fine-tune solutions) and population size (which can fine-tune solutions but requires a longer time to converge to a good solution). Selection operators and evolutionary hyperparameters should be chosen with these pressures and tradeoffs in mind.

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/g4sqsadzh22p2lay05n18/IEE598-Lecture1D-2025-01-22-The_Four_Forces_of_Evolution_and_The_Drift_Barrier-Notes.pdf?rlkey=ky9ol5itw1ipuehdmuhmylqd1&dl=0
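The genetic-drift side of the tension above is easy to simulate. The sketch below runs neutral Wright–Fisher resampling (a standard population-genetics model, assumed here rather than taken from the lecture) for one biallelic locus: with no selection at all, the allele frequency wanders until it fixes at 0 or 1, and smaller populations fix faster.

```python
import random

def drift_generations(p0=0.5, pop_size=50, gens=100, rng=random):
    """Neutral Wright-Fisher drift for one biallelic locus: each
    generation resamples the allele frequency binomially from the
    previous one. Returns the trajectory of frequencies, stopping
    early if the allele fixes or is lost."""
    p = p0
    history = [p]
    for _ in range(gens):
        # binomial resampling: each of pop_size offspring inherits
        # the allele independently with probability p
        count = sum(1 for _ in range(pop_size) if rng.random() < p)
        p = count / pop_size
        history.append(p)
        if p in (0.0, 1.0):  # fixation or loss: drift has won
            break
    return history
```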



Tuesday, January 20, 2026

Lecture 1C (2026-01-20): Population Genetics of Evolutionary Algorithms

In this lecture, we start by reviewing the basics of our motivation to solve Engineering Design Optimization problems with evolutionary metaheuristics (a form of population-based direct search). To prepare to introduce the Genetic Algorithm (GA), one of the most well-known Evolutionary Algorithms, we spend most of this lecture covering foundational topics from population and quantitative genetics that will give us the necessary vocabulary for discussing the GA. In particular, we introduce the concepts of qualitative and quantitative traits, characters, phenotypes, genes, chromosomes, genomes, and genotypes. We also discuss the "GxE to P" relationship between genotype and phenotype and the connection between phenotype and fitness. We close with a discussion of the four forces of evolution (mutation, gene flow/migration, natural selection, and genetic drift). Next time, we will discuss the constant tension between natural selection and genetic drift (and mutation) and how to manage (and sometimes harness) this tension in an evolutionary metaheuristic.

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/0llubbbjkxxconlia235z/IEE598-Lecture1C-2026-01-20-Population_Genetics_of_Evolutionary_Algorithms-Notes.pdf?rlkey=pyc5jxvcmbjfietop5jchgiyp&dl=0



Thursday, January 15, 2026

Lecture 1B (2026-01-15): Evolutionary Approach to Engineering Design Optimization

In this lecture, we formally introduce the Engineering Design Optimization (EDO) problem and several application spaces where it may apply. We then discuss classical approaches for using computational methods to solve this difficult optimization problem -- including both gradient-based and direct search methods. This allows us to introduce the categories of trajectory and local search methods (like tabu search and simulated annealing) and population-based methods (like the genetic algorithm, ant colony optimization, and particle swarm optimization). We then start down the path of exploring evolutionary algorithms, a special (but very large) set of population-based methods. In the next lecture, we will connect this discussion to population genetics and a basic Genetic Algorithm (GA).

The whiteboard notes taken during this lecture can be found at:
https://www.dropbox.com/scl/fi/kslpzf961mp4viwj557ed/IEE598-Lecture1B-2026-01-15-Evolutionary_Approach_to_Engineering_Design_Optimization-Notes.pdf?rlkey=xb0zoc1h74kbl5m1je7jb1p1c&dl=0



Tuesday, January 13, 2026

Lecture 1A (2026-01-13): Introduction to Course Policies and Motivations

This lecture introduces the main policies of the course and an outline of its content. We close with an introduction to the concepts of heuristics, metaheuristics, and hyperheuristics in the context of Engineering Design Optimization specifically and optimization more generally. We hint at the idea that nature provides templates for heuristics at all three levels, and this class aims to understand how these natural systems work and what can be taken from them in the design of heuristics for engineered systems.

Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/gza807hargomj414wo7fz/IEE598-Lecture1A-2026-01-13-Introduction_to_Course_Policies_and_Motivations-Notes.pdf?rlkey=bl3tx0oa1vbaz79vxahrxipmi&dl=0


