Thursday, February 27, 2025

Lecture 4B (2025-02-25): From DGA/PGA to Niching Methods for Multi-Modal Optimization

In this lecture, we start by introducing distributed and parallel genetic algorithms (DGA/PGA) that not only have the potential to leverage parallel hardware resources but, perhaps surprisingly, can improve the performance of canonical genetic algorithms (GA's) through population-genetic effects that turn genetic drift in *meta-populations* into a source of diversity (instead of a force that reduces diversity, as it does in a single population). In particular, we introduce the idea of "shifting-balance theory" from Sewall Wright, which is one of the early models that attempts to understand the effect of limited gene flow on drift and selective pressure in complex population structures. This allows us to revisit the power and value of diversity in population-based evolutionary algorithms for optimization, and we use that to introduce the field of multi-modal optimization. Following this idea, we close with an introduction to niching methods for multi-modal optimization, starting with fitness sharing. We will pick up with this topic in our next lecture.
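As a concrete illustration of the island-model (coarse-grained) structure behind DGA/PGA, here is a minimal Python sketch: several subpopulations evolve in isolation, so drift can carry them toward different regions of the search space, and a few of the best individuals migrate along a ring topology every several generations. The function names, parameter values, ring topology, and the one-max objective are illustrative assumptions, not details from the lecture.

    import random

    def island_ga(fitness, n_islands=4, island_size=20, genome_len=16,
                  generations=100, migration_interval=10, n_migrants=2,
                  mutation_rate=0.02, rng=random.Random(0)):
        """Coarse-grained (island-model) GA: isolated subpopulations with occasional migration."""
        def random_genome():
            return [rng.randint(0, 1) for _ in range(genome_len)]

        def mutate(g):
            return [b ^ 1 if rng.random() < mutation_rate else b for b in g]

        def crossover(a, b):
            cut = rng.randrange(1, genome_len)
            return a[:cut] + b[cut:]

        def tournament(pop):
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b

        islands = [[random_genome() for _ in range(island_size)] for _ in range(n_islands)]
        for gen in range(generations):
            # Each island evolves independently; limited gene flow keeps islands distinct
            islands = [[mutate(crossover(tournament(pop), tournament(pop)))
                        for _ in range(island_size)] for pop in islands]
            # Occasionally copy each island's best individuals to its neighbor on a ring
            if gen % migration_interval == 0:
                for i, pop in enumerate(islands):
                    migrants = sorted(pop, key=fitness, reverse=True)[:n_migrants]
                    islands[(i + 1) % n_islands][:n_migrants] = [list(m) for m in migrants]
        return max((ind for pop in islands for ind in pop), key=fitness)

    # Toy run: maximize the number of 1s in a bit string (one-max)
    best = island_ga(lambda g: sum(g))
    print(sum(best), best)

Tuning the migration interval and the number of migrants trades off between one well-mixed population (frequent migration) and fully independent runs (no migration).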

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/dqfz3d2rienofebqpa30e/IEE598-Lecture4B-2025-02-25-From_DGA_PGA_to_Niching_Methods_for_Multi_Modal_Optimization-Notes.pdf?rlkey=fknh18uafdqh2ah2lqsfi7lcb&dl=0



Tuesday, February 25, 2025

Lecture 3C/4A (2025-02-25): Pareto Ranking and Moving from Communities to Meta-Populations (DGA, PGA)

In this lecture, we review the connections between Pareto optimality and concepts from both population genetics and community ecology. We discuss how "fitness" in multi-objective evolutionary algorithms (MOEA's) is about reducing the "distance" to the Pareto front, establishing a group of "non-dominated solutions" that are "equally fit." Thus, Pareto optimization is not only about maximizing fitness (arriving at the Pareto front) but also about completeness and diversity (i.e., maximizing spread of the non-dominated set across the Pareto front). We then introduce the notion of "Pareto ranking", which helps to formally establish a "fitness" concept that captures "distance from the Pareto front" even in cases where the location of the Pareto front is not known. We discuss how different notions of fitness come with different selective pressures and remind ourselves that high selective pressure can create challenges for maintaining diversity (and spread across the Pareto front). We then re-introduce fitness scaling (which we first discussed in the context of GA's) and fitness sharing/clearing, concepts that maximize the number of "niches" along the Pareto front while reducing the number of solution candidates within each niche. We discuss how NSGA-II and NSGA-III are dominant players in the MOEA space (and well represented in off-the-shelf tools like gamultiobj in MATLAB and pymoo in Python). We then pivot to introducing the idea of "metapopulations" (populations of populations) and how this framework lets us use genetic drift as a tool for maintaining and exploring diversity (whereas before we viewed drift only as a force that reduces diversity). We will pick up next lecture by building on this idea to introduce Distributed and Parallel GA's and tools for multi-modal optimization.
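For reference, here is a simple (and deliberately unoptimized) Python sketch of the Pareto-ranking idea: assign rank 1 to the non-dominated front, remove it, and repeat on the remainder. This is the basic scheme behind NSGA-style non-dominated sorting, though production implementations are faster and add crowding/niching; minimization of every objective is assumed, and the names are illustrative.

    def dominates(a, b):
        """True if objective vector a Pareto-dominates b (all objectives minimized)."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_ranks(objs):
        """Rank 1 = non-dominated front; strip it and repeat for ranks 2, 3, ..."""
        remaining = set(range(len(objs)))
        ranks = {}
        rank = 1
        while remaining:
            front = {i for i in remaining
                     if not any(dominates(objs[j], objs[i]) for j in remaining if j != i)}
            for i in front:
                ranks[i] = rank
            remaining -= front
            rank += 1
        return [ranks[i] for i in range(len(objs))]

    # Example: (1, 1), (0, 3), and (3, 0) are mutually non-dominated (rank 1)
    print(pareto_ranks([(1, 1), (2, 2), (0, 3), (3, 0), (2.5, 2.5)]))  # -> [1, 2, 1, 1, 3]

In an MOEA, these ranks (lower is better) stand in for raw fitness during selection, with a sharing or crowding measure used to break ties within a front.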

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/kmqhx2po0o05c36vnh972/IEE598-Lecture3C_4A-2025-02-25-Pareto_Ranking_and_Moving_from_Communities_to_MetaPopulations-DGA_PGA-Notes.pdf?rlkey=dut1pok8mcusqq3sagljbz9yi&dl=0



Thursday, February 20, 2025

Lecture 3B (2025-02-20): Multi-Objective Genetic Algorithms: Weight/Vector-Based Approaches

In this lecture, we review the Pareto perspective on multi-objective optimization, emphasizing Pareto efficiency and the Pareto front. We draw connections to a blend of population genetics/evolution and community ecology (related to niche spaces and co-existence). We introduce Age–Fitness Pareto Optimization (AFPO) as a motivational example, as it uses the diversity-preserving aspects of multi-objective optimization to improve the evolvability of single-objective metaheuristics used in evolutionary robotics (although note that in the video, it is said that AFPO maximizes age/generations, when it actually minimizes age/generations; this is fixed in the notes linked below). From there, we describe several classical multi-objective genetic algorithm approaches that either explicitly incorporate weighted combinations of objectives or instead create sub-populations for different objectives and blend individuals from those sub-populations. This motivates the idea of Pareto ranking (a more modern approach to multi-objective evolutionary algorithms), which we will introduce (along with other diversity-preserving and diversity-enhancing mechanisms) next time.
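To make the weight-based idea concrete, here is a small Python sketch that sweeps convex combinations w*f1 + (1-w)*f2 of two objectives and keeps the best candidate for each weight; the toy objectives and the brute-force candidate list stand in for an actual GA, and all names are illustrative.

    def weighted_sum_front(candidates, f1, f2, n_weights=11):
        """For each convex weight w, keep the candidate minimizing w*f1(x) + (1-w)*f2(x)."""
        chosen = {}
        for k in range(n_weights):
            w = k / (n_weights - 1)
            chosen[w] = min(candidates, key=lambda x: w * f1(x) + (1 - w) * f2(x))
        return chosen

    # Toy bi-objective problem on [0, 1]: f1(x) = x^2, f2(x) = (x - 1)^2
    xs = [i / 100 for i in range(101)]
    for w, x in weighted_sum_front(xs, lambda x: x**2, lambda x: (x - 1)**2).items():
        print(f"w={w:.1f}  x={x:.2f}")

A well-known limitation of pure weighted sums is that they can only reach solutions on the convex portions of a Pareto front, which is part of what motivates the move to Pareto ranking.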

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/nwfytnxrlmqzvdoftdqtc/IEE598-Lecture3B-2025-02-20-Multi_Objective_Genetic_Algorithms-From_Weight_Based_Approaches_to_Pareto_Ranking-Notes.pdf?rlkey=pq81xpd5abfgikq6twsj73yys&dl=0



Tuesday, February 18, 2025

Lecture 3A (2025-02-18): Multi-Criteria Decision Making, Pareto Optimality, and an Introduction to Multi-Objective Evolutionary Algorithms (MOEA's)

In this lecture, we introduce multi-objective optimization as a subset of multi-criteria decision making, and we use a game-theoretic example to motivate the discussion (as game theory is a subset of multi-objective optimization). We define the Nash equilibrium concept and draw connections to multi-agent reinforcement learning. That then lets us pivot to other solution concepts for more general multi-objective problems, such as the knapsack problem. We discuss weighting/scalarization approaches (both convex combinations and Chebyshev scalarization), satisficing approaches (common in dynamic programming examples), and target approaches, and we highlight that each of these still has a subjective degree of freedom remaining. We then motivate Pareto-efficient sets and the Pareto frontier, an approach that minimizes subjectivity by producing a set of possible outcomes (instead of a single "optimal" solution). We then close by introducing multi-objective evolutionary algorithms, whose population structure is naturally suited to solving for (samples of) set-based optima.
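As a small illustration of the Chebyshev scalarization mentioned above, the sketch below builds a single-objective surrogate g(x) = max_i w_i (f_i(x) - z_i) relative to an (approximate) ideal point z and minimizes it by brute-force grid search; the toy objectives, the grid search, and all names are illustrative stand-ins for an actual solver.

    def chebyshev_scalarize(objectives, weights, ideal):
        """Return g(x) = max_i w_i * (f_i(x) - z_i); minimizing g drives x toward the Pareto front."""
        def g(x):
            return max(w * (f(x) - z) for f, w, z in zip(objectives, weights, ideal))
        return g

    # Toy example: two objectives on the real line with ideal point (0, 0)
    f1 = lambda x: x**2
    f2 = lambda x: (x - 1)**2
    g = chebyshev_scalarize((f1, f2), weights=(0.5, 0.5), ideal=(0.0, 0.0))

    # Crude grid search over [0, 1] in place of an evolutionary algorithm
    best = min((i / 1000 for i in range(1001)), key=g)
    print(best, f1(best), f2(best))

Unlike convex weighted sums, the weighted-max form can reach points on non-convex regions of the Pareto front, though the choice of weights and reference point remains a subjective degree of freedom.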

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/lpb8i4mp93iegdjyccly2/IEE598-Lecture3A-2025-02-18-Multi_Criteria_Decision_Making_Pareto_Optimality_and_Intro_to_Multi_Objective_Evolutionary_Algorithms-Notes.pdf?rlkey=hef5n0e77k3k7i0zqwg8cn0cs&dl=0



Thursday, February 13, 2025

Lecture 2C (2025-02-13): Immunocomputing: Genetic Approaches for Diverse Solution Portfolios

In this lecture, we introduce the field of "Immunocomputing", which studies information-processing mechanisms within the vertebrate immune system that help to maintain a diverse array of solutions for detecting and handling threats. Such dynamical systems must have low false negative rates (i.e., detect as many threats as possible) while also having low false positive rates (i.e., avoid classifying artifacts of normal operation as anomalous). We discuss the innate immune system, a highly conserved system that responds to general threats, and the adaptive/acquired immune system, which maintains immunological memory of past threats that it can use to deploy highly specific responses if those threats are encountered in the future. This second system in particular has inspired Artificial Immune Systems (AIS), including negative selection (inspired by "central tolerance" in the thymus) and clonal selection. Both of these algorithmic approaches form and maintain a diverse portfolio of threat/anomaly classifiers rather than clustering around a single good solution, as is often the case in Genetic Algorithms. This sets us up for our next unit on multi-criteria decision making, and so we close with a basic introduction to game theory/multi-objective optimization and Nash equilibria. We will introduce Pareto equilibria in the next lecture.
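A minimal Python sketch of the negative-selection idea follows: candidate detectors are generated at random and censored if they match any "self" sample, mimicking central tolerance in the thymus; survivors are then used to flag non-self (anomalous) inputs. The binary-string representation, the Hamming-distance matching rule, and all names/parameters are illustrative assumptions rather than details from the lecture.

    import random

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    def negative_selection(self_set, n_detectors, length, threshold, rng=random.Random(1)):
        """Keep only random detectors that do NOT match any self string
        (match = Hamming distance <= threshold)."""
        detectors = []
        while len(detectors) < n_detectors:
            d = tuple(rng.randint(0, 1) for _ in range(length))
            if all(hamming(d, s) > threshold for s in self_set):
                detectors.append(d)
        return detectors

    def is_anomalous(sample, detectors, threshold):
        """A sample is flagged if any surviving detector matches it."""
        return any(hamming(sample, d) <= threshold for d in detectors)

    # Toy example with 8-bit strings: "self" is the neighborhood of all-zeros
    self_set = [(0,) * 8, (0, 0, 0, 0, 0, 0, 0, 1)]
    dets = negative_selection(self_set, n_detectors=20, length=8, threshold=2)
    print(is_anomalous((1, 1, 1, 1, 0, 0, 1, 1), dets, threshold=2))  # likely True (far from self)
    print(is_anomalous(self_set[0], dets, threshold=2))               # False: self is never flagged

Note that the surviving detectors deliberately cover a diverse portfolio of the non-self space rather than converging on a single best classifier, which is the property emphasized above.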

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/ha6oe1rjwyzbwhnp4nsds/IEE598-Lecture2C-2025-02-13-Immunocomputing-Genetic_Approaches_for_Diverse_Solution_Portfolios-Notes.pdf?rlkey=qsljjcwafo03vrtoyc9u7de5d&dl=0



Tuesday, February 11, 2025

Lecture 2B (2025-02-11): Genetic Programming, Immunocomputing, and Artificial Immune Systems

In this lecture, we introduce Genetic Programming as an alternative to Evolutionary Programming that is also able to leverage the power of crossover (and thus the full Genetic Algorithm) when evolving executable structures, such as program code or decision trees. We start with a discussion of Linear Genetic Programming (LGP) on register-based languages (like assembly language), where programs can be represented as simple sequences of instructions. We then pivot to tree-based Genetic Programming (GP), which operates on Abstract Syntax Tree (AST) representations of more traditional programming code (e.g., Python, C++, etc.). We discuss how crossover is implemented on these ASTs, and we identify several applications, such as genprog, which uses GP to perform automated software repair and optimization. This sets us up to introduce immunocomputing in the next lecture (a topic we were unable to get to today due to time constraints), which will show how genetic methods inspired by the immune system can be used to generate and maintain useful diversity for applications such as cyber-security and general anomaly detection.
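For a sense of how crossover on ASTs works mechanically, here is a minimal Python sketch that represents expression trees as nested lists and swaps a randomly chosen subtree of one parent with one drawn from the other; the representation and example expressions are illustrative, not the encoding used by genprog or any particular GP system.

    import random

    rng = random.Random(0)

    # Expression trees as nested lists: [operator, child, child] or a terminal ("x" or a constant)
    def subtrees(tree, path=()):
        """Yield (path, subtree) pairs for every node in the tree."""
        yield path, tree
        if isinstance(tree, list):
            for i, child in enumerate(tree[1:], start=1):
                yield from subtrees(child, path + (i,))

    def replace(tree, path, new):
        """Return a copy of tree with the subtree at the given path replaced by new."""
        if not path:
            return new
        copy = list(tree)
        copy[path[0]] = replace(tree[path[0]], path[1:], new)
        return copy

    def crossover(parent_a, parent_b):
        """Graft a random subtree of parent_b onto a random crossover point in parent_a."""
        path_a, _ = rng.choice(list(subtrees(parent_a)))
        _, sub_b = rng.choice(list(subtrees(parent_b)))
        return replace(parent_a, path_a, sub_b)

    a = ["+", ["*", "x", "x"], 1]             # x*x + 1
    b = ["-", ["+", "x", 2], ["*", 3, "x"]]   # (x + 2) - 3*x
    print(crossover(a, b))

Because every node is itself a syntactically valid expression, the offspring is always a well-formed tree, which is what makes crossover workable for GP where it would break arbitrary program text.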

Whiteboard notes for this lecture are available at:
https://www.dropbox.com/scl/fi/nonw19ergjmkk1vn3pf9g/IEE598-Lecture2B-2025-02-11-Genetic_Programming_Immunocomputing_and_Artificial_Immune_Systems-Notes.pdf?rlkey=bgt9guyhvu5qi79jnshxwycxa&dl=0



Thursday, February 6, 2025

Lecture 2A (2025-02-06): Evolutionary Computing from Optimization to Programming (Mutation, CMA-ES, and Evolutionary Programming)

In this lecture, we transition from the Genetic Algorithm to alternative approaches in Evolutionary Computing, particularly Evolution Strategies and Evolutionary Programming. We start by highlighting that the Genetic Algorithm's typical mutation operators are a bit crude in that they use the same mutation policy for every decision variable. Motivated by this, we introduce Evolution Strategies, which are alternative approaches that allow for different mutation rates along different dimensions of the decision space. This lets us introduce the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), a popular global optimization approach that adapts a multivariate normal sampling distribution over the decision space to concentrate search on promising regions of the fitness landscape. We then transition to more creative directions by showing how Evolutionary Programming, which removes crossover, allows for the evolution of Finite State Machines (FSM's), which act as simple computer programs that are automatically designed for a desired function. Next time, we will introduce Genetic Programming, which allows for evolving code directly using a more traditional GA (with crossover), and then we will transition to Immunocomputing, another genetically inspired approach to automated creativity.
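To illustrate the per-dimension mutation idea (short of full CMA-ES), here is a minimal (1, λ) Evolution Strategy sketch in Python in which each dimension carries its own self-adapted step size, updated with the standard log-normal rule; the parameter choices and the poorly scaled quadratic test function are illustrative assumptions.

    import math, random

    def one_comma_lambda_es(f, x0, sigma0=0.5, lam=20, generations=200, rng=random.Random(0)):
        """(1, lambda)-ES minimizing f, with a separate self-adapted step size per dimension."""
        n = len(x0)
        tau, tau_prime = 1 / math.sqrt(2 * math.sqrt(n)), 1 / math.sqrt(2 * n)
        x, sigma = list(x0), [sigma0] * n
        for _ in range(generations):
            offspring = []
            for _ in range(lam):
                # Log-normal self-adaptation: one global factor plus one per-dimension factor
                global_step = tau_prime * rng.gauss(0, 1)
                new_sigma = [s * math.exp(global_step + tau * rng.gauss(0, 1)) for s in sigma]
                new_x = [xi + si * rng.gauss(0, 1) for xi, si in zip(x, new_sigma)]
                offspring.append((f(new_x), new_x, new_sigma))
            # Comma selection: the parent is discarded even if it was better
            _, x, sigma = min(offspring, key=lambda t: t[0])
        return x

    # Example: a badly scaled quadratic, where per-dimension step sizes pay off
    print(one_comma_lambda_es(lambda v: v[0] ** 2 + 100 * v[1] ** 2, [3.0, -2.0]))

CMA-ES goes further by adapting a full covariance matrix (so mutations can be correlated across dimensions), but the self-adaptation loop above captures the basic departure from the GA's one-size-fits-all mutation.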

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/cunoa5zqvx29uj3pm2n8j/IEE598-Lecture2A-2025-02-06-Evolutionary_Computing_from_Optimization_to_Programming-Notes.pdf?rlkey=tpqz2o3u36obp28xf4ipw7se2&dl=0



Tuesday, February 4, 2025

Lecture 1F (2025-02-04): GA Wrap Up – Options for Selection, Crossover, Mutation, and Extensions

In this lecture, we wrap up our discussion of the basic Genetic Algorithm (GA) and its hyper-parameters. We focus on differences in the implementation of the population structure as well as different genetic operators for selection, recombination, and mutation (although we will have to finish the last few bits of our mutation discussion in the next lecture). Throughout the lecture, we emphasize the different roles of selection (and selective pressure), genetic drift, mutation, and, to some extent, migration. This reinforces the "evolution in a drift field" diagram that we introduced several lectures ago. Too much selective pressure can lead to premature convergence on suboptimal local optima, and too little selective pressure can lead to solutions being favored randomly due to genetic drift. Too much mutation can eliminate the possibility of fine-tuning solutions, and too little mutation is not enough of a guard against drift and selection purging all diversity from the population. Dynamically altering some of these parameters during the execution of a GA can provide a compromise among these tradeoffs.
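As a small illustration of how operator choices control selective pressure, the sketch below implements tournament and fitness-proportionate (roulette-wheel) selection in Python; raising the tournament size k increases pressure, which connects directly to the premature-convergence tradeoff described above. All names and the bit-string toy population are illustrative.

    import random

    def tournament_select(population, fitness, k=2, rng=random.Random(0)):
        """Pick the best of k random individuals; larger k means stronger selective pressure."""
        return max(rng.sample(population, k), key=fitness)

    def roulette_select(population, fitness, rng=random.Random(0)):
        """Fitness-proportionate selection; pressure depends on the spread of fitness values."""
        weights = [fitness(ind) for ind in population]
        return rng.choices(population, weights=weights, k=1)[0]

    # Toy population of bit strings with fitness = number of 1s
    pop_rng = random.Random(42)
    pop = [[pop_rng.randint(0, 1) for _ in range(10)] for _ in range(30)]
    print(sum(tournament_select(pop, sum, k=2)), sum(tournament_select(pop, sum, k=8)),
          sum(roulette_select(pop, sum)))

Because tournament selection uses only relative fitness comparisons, it is insensitive to fitness scaling, whereas roulette-wheel pressure collapses as the population's fitness values become similar, which is one reason fitness scaling is introduced alongside it.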

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/y3r7mum9hjcnr7elh4eww/IEE598-Lecture1F-2025-02-04-GA_Wrap_Up-Options_for_Selection_Crossover_Mutation_and_Extensions-Notes.pdf?rlkey=1uekaz5jwug5467yng4jlxgdi&dl=0


