In this lecture, we start by reviewing the strengths and weaknesses of the Genetic Algorithm (GA), CMA-ES (with adaptive restarts), and Stochastic Gradient Descent (SGD). This paints a picture of complementarity rather than competition: each algorithm fits its own niche, and the algorithms can be combined to compensate for each other's weaknesses and find better solutions more efficiently. Whereas CMA-ES and SGD require continuous-valued decision spaces, the GA does not, so we pivot to how a GA might be used to write software (where code comes from a discrete decision space that is off limits to CMA-ES and SGD). We begin this exploration with an introduction to the Evolutionary Programming of the 1960s, which focuses on the evolution of populations of Finite State Machines (FSMs) using discrete mutation and no crossover. We then consider how GAs with crossover might be applied to lines of code. We start with Linear Genetic Programming (LGP), which restricts the programming language to one without multi-line control/logic blocks (assembly languages fit within this class). We demonstrate how One-Instruction Set Computers (such as Subtract and Branch if Negative, SBN) are well suited to LGP (with both mutation and crossover), and we discuss how the presence of "introns" can speed up convergence in LGP (with possible implications for understanding the presence of introns in biological systems/DNA). In the next lecture, we will complete the story with Genetic Programming based on Abstract Syntax Trees (ASTs) and then introduce Artificial Immune Systems.
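To make the LGP idea concrete, here is a minimal sketch (not from the lecture itself) of an SBN machine and the two variation operators. It assumes one common SBN convention: an instruction (a, b, c) computes mem[b] -= mem[a] and jumps to instruction c if the result is negative; other conventions exist. Because an SBN program is just a flat list of such triples, any one-point crossover cut or single-instruction mutation yields another syntactically valid program, which is exactly what makes this representation friendly to a GA:

```python
import random

def run_sbn(program, mem, max_steps=1000):
    """Interpret a linear SBN program. Each instruction (a, b, c) means:
    mem[b] -= mem[a]; jump to instruction c if the result is negative,
    otherwise fall through. Halts when the program counter leaves the
    program or the step budget runs out."""
    pc, steps = 0, 0
    while 0 <= pc < len(program) and steps < max_steps:
        a, b, c = program[pc]
        mem[b] -= mem[a]
        pc = c if mem[b] < 0 else pc + 1
        steps += 1
    return mem

def crossover(p1, p2, rng=random):
    """One-point crossover on two linear programs. Programs are flat
    lists of instructions, so every cut point gives a valid child."""
    i = rng.randrange(1, min(len(p1), len(p2)))
    return p1[:i] + p2[i:]

def mutate(program, n_mem, rng=random):
    """Point mutation: replace one instruction with a random valid one
    drawn over the memory addresses and jump targets."""
    child = list(program)
    j = rng.randrange(len(child))
    child[j] = (rng.randrange(n_mem), rng.randrange(n_mem),
                rng.randrange(len(child) + 1))
    return child

# A one-instruction program: mem[2] -= mem[0] gives -3 (negative),
# so it jumps to index 1, which is past the end, and halts.
print(run_sbn([(0, 2, 1)], [3, 5, 0]))
```

Instructions that never affect the output registers act as "introns" under this scheme: crossover cuts that land inside them leave the program's behavior unchanged, which is one intuition for why introns can help convergence.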
Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/qsv6necfvgkny8rrxqvuw/IEE598-Lecture2B-2026-02-12-Evolutionary_and_Linear_Genetic_Programming-Notes.pdf?rlkey=aux54wr6rnju9o4nlsvcptft0&dl=0