Thursday, April 3, 2025

Lecture 6C (2025-04-03): Distributed AI and Swarm Intelligence, Part 3 – Bacterial Foraging Optimization (BFO) and Particle Swarm Optimization (PSO)

In this lecture, we discuss two more examples of Swarm Intelligence applied to numerical optimization – Bacterial Foraging Optimization (BFO) and Particle Swarm Optimization (PSO). Both make use of a population moving through an unconstrained environment whose members respond both to their local experience of an optimization objective and to the positions of others around them. In BFO, which is inspired by the "run-and-tumble" behavior of certain kinds of flagellated bacteria, bacteria attempt to minimize exposure to a surrounding chemical (the optimization objective to minimize) while also responding to attractive and repellent chemicals emitted by all other bacteria (with an intensity that rolls off with distance from the emitter). Bacteria "run" in a consistent direction either for a fixed number of algorithm steps or until the sensed cost (from both the optimization objective and the attractant and repellent signals) increases, at which point they "tumble" to a new random direction. After the "lifetime" of all bacteria passes, the half with the most cumulative exposure dies and is replaced by clones of the other half. At certain points, bacteria are randomly relocated to other parts of the decision space regardless of how effective their current search has been. This process makes BFO computationally costly but very effective at global optimization and even tracking problems (so long as the reference objective to track changes slowly enough). PSO has a much simpler implementation, where each self-propelled particle (SPP) has a position, a velocity, and an "inertia" that prevents rapid changes in heading. Each SPP in PSO remembers its own best position so far as well as the best position found by any SPP in the swarm, and these two positions act like attractors that pull the particle toward them. Both PSO and BFO are swarm-intelligence algorithms, but neither incorporates the notion of "stigmergy" (the indirect coordination of individuals through modification of, and responsiveness to, a shared environment). ACO, which was discussed earlier, is an example of stigmergy (as it maintains a matrix of long-lasting chemical depositions that have their own state, independent of the "ants" that deposited them and the ants that respond to them).
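As a concrete illustration of the PSO update described above (not code from the lecture itself), here is a minimal Python sketch of the canonical global-best PSO loop; the inertia weight w, acceleration coefficients c1 and c2, and the sphere-function objective are illustrative assumptions:

```python
import numpy as np

def pso(objective, dim=2, n_particles=30, n_iters=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimal global-best PSO sketch (parameter values are illustrative)."""
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # positions
    v = np.zeros((n_particles, dim))              # velocities
    pbest = x.copy()                              # each particle's best position
    pbest_cost = np.apply_along_axis(objective, 1, x)
    gbest = pbest[np.argmin(pbest_cost)].copy()   # swarm's best position

    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # The inertia term w*v resists rapid heading changes; the two
        # "attractor" terms pull each particle toward its own best position
        # and toward the swarm's best position.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        cost = np.apply_along_axis(objective, 1, x)
        improved = cost < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], cost[improved]
        gbest = pbest[np.argmin(pbest_cost)].copy()
    return gbest

# Example: minimize the sphere function f(x) = sum(x_i^2)
best = pso(lambda x: float(np.sum(x**2)))
```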

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/2opklen1vwap6qx1v3vby/IEE598-Lecture6C-2025-04-03-Bacterial_Foraging_Optimization_BFO_and_Particle_Swarm_Optimization_PSO-Notes.pdf?rlkey=tn8ghxsrhxvuip1xq5drd8vte&dl=0



Tuesday, April 1, 2025

Lecture 6B (2025-04-01): Distributed AI and Swarm Intelligence, Part 2 – ACO and Introduction to Bacterial Foraging Optimization (BFO)

This lecture primarily focuses on describing the structure and operation of Ant System (AS), a precursor to the Ant Colony Optimization (ACO) metaheuristic. It discusses the key algorithmic components of the approach and briefly describes the ant trail-laying behavior that inspired it. At the end of the lecture, Bacterial Foraging Optimization (BFO) is introduced. A full treatment of BFO will be covered in the next lecture.
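To make the key algorithmic components concrete, below is a minimal Python sketch (our illustration, not the lecture's code) of the two core Ant System rules for a TSP-like problem: the probabilistic next-city choice, with probability proportional to tau^alpha * eta^beta, and the evaporation-plus-deposit pheromone update; the parameter values are illustrative assumptions:

```python
import numpy as np

def next_city(tau, eta, current, unvisited, alpha=1.0, beta=2.0, rng=None):
    """Ant System transition rule: pick the next city j with probability
    proportional to tau[current, j]**alpha * eta[current, j]**beta,
    where tau is pheromone intensity and eta is heuristic desirability
    (e.g., 1/distance)."""
    rng = rng or np.random.default_rng()
    weights = (tau[current, unvisited] ** alpha) * (eta[current, unvisited] ** beta)
    return rng.choice(unvisited, p=weights / weights.sum())

def update_pheromone(tau, tours, tour_lengths, rho=0.5, Q=1.0):
    """Ant System pheromone update: global evaporation at rate rho, then
    each ant deposits Q / L_k on every edge of its (closed) tour of
    length L_k. Each tour is a list of city indices."""
    tau *= (1.0 - rho)                               # evaporation
    for tour, L in zip(tours, tour_lengths):
        for i, j in zip(tour, tour[1:] + tour[:1]):  # edges of the closed tour
            tau[i, j] += Q / L
            tau[j, i] += Q / L                       # symmetric TSP
    return tau
```

Note that the pheromone matrix tau persists across iterations with its own state, independent of the ants that wrote it and the ants that later read it.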

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/oreo91lww327dcbtwq6e3/IEE598-Lecture6B-2025-04-01-ACO_and_Introduction_to_Bacterial_Foraging_Optimization_BFO-Notes.pdf?rlkey=66jx4pfv369kbcngicntv3lem&dl=0



Thursday, March 27, 2025

Lecture 5D/6A (2025-03-27): Simulated Annealing Wrap-up and Distributed AI and Swarm Intelligence, Part 1 - Introduction to Ant Colony Optimization (ACO)

In this lecture, we wrap up our coverage of Simulated Annealing (SA) and then shift to a new unit on Swarm Intelligence, with our first topic in that unit being Ant Colony Optimization (ACO). We start the lecture with a description of Monte Carlo sampling, which leverages the Law of Large Numbers to provide a method for approximating integrals using random sampling from a high-dimensional space. This allows us to introduce the Metropolis–Hastings algorithm, a Markov Chain Monte Carlo approach to sampling from arbitrary distributions (originally the Boltzmann distribution for the Metropolis algorithm, which was purely a Boltzmann sampler). We then show how to use the Metropolis algorithm within Simulated Annealing, which combines it with an annealing schedule that turns an MCMC sampler into an optimizer that starts out as an explorer and finishes as an exploiter. Simulated Annealing also helps import conceptual frameworks from physics (specifically statistical mechanics) into optimization and even Machine Learning more broadly. After finishing the coverage of SA, we introduce Ant System (AS), an early version of Ant Colony Optimization (ACO), which is a combinatorial optimization metaheuristic based on the trail-laying and recruitment behaviors of some ants. We will conclude ACO next time and move on to other Swarm Intelligence algorithms.
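As a sketch of how the Metropolis acceptance rule becomes an optimizer once an annealing schedule is added, here is minimal Python (our illustration; the Gaussian proposal and geometric cooling rate are assumptions, not values from the lecture):

```python
import math
import random

def simulated_annealing(cost, x0, T0=1.0, alpha=0.95, steps_per_T=100, T_min=1e-3):
    """Metropolis sampling plus a cooling schedule = simulated annealing.
    At high T the walk explores broadly; as T -> 0 it increasingly exploits."""
    x, fx = x0, cost(x0)
    T = T0
    while T > T_min:
        for _ in range(steps_per_T):
            x_new = x + random.gauss(0.0, 1.0)   # proposal move (illustrative)
            f_new = cost(x_new)
            delta = f_new - fx
            # Metropolis rule: always accept improvements; accept uphill
            # moves with Boltzmann probability exp(-delta / T).
            if delta <= 0 or random.random() < math.exp(-delta / T):
                x, fx = x_new, f_new
        T *= alpha                               # geometric cooling (assumed)
    return x, fx

# Example: minimize f(x) = (x - 3)^2 starting from x = 0
best_x, best_f = simulated_annealing(lambda x: (x - 3.0) ** 2, 0.0)
```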

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/b0wmj4a3lnbgyrs8rmts5/IEE598-Lecture5D_6A-2025-03-27-Simulated_Annealiung_Wrap_Up_and_Distributed_AI_and_Swarm_Intelligence_Part_1_Introduction_to_Ant_Colony_Optimization_ACO-Notes.pdf?rlkey=y49aa1x0oi0k53u5x5tv9kngs&dl=0



Tuesday, March 25, 2025

Lecture 5C (2025-03-25): Toward Simulated Annealing: Introduction to Boltzmann Sampling and Monte Carlo Integration

In this lecture, we continue our march toward Simulated Annealing by ensuring that we have the necessary foundations in information theory to support discussing thermodynamics and statistical mechanics. This allows us to introduce Boltzmann sampling, which will be used in Simulated Annealing, along with the computational applications from physical chemistry that first inspired its creation. In particular, we introduce Monte Carlo integration and start to discuss how Boltzmann sampling can greatly reduce the number of samples necessary to estimate macroscopic variables of interest to physicists as they investigate thermodynamic equations of state. This also allows us to introduce Markov Chain Monte Carlo (MCMC) methods, of which the Metropolis algorithm is recognized as the first popular example (and the foundation of Simulated Annealing).
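As a minimal illustration of the Monte Carlo integration idea (our own example; the integrand and sample count are arbitrary choices), the following Python sketch estimates an integral as the domain volume times the average of the integrand at uniform random samples:

```python
import numpy as np

def mc_integrate(f, lo, hi, n_samples=100_000, rng=None):
    """Plain Monte Carlo integration: by the Law of Large Numbers, the
    sample mean of f at uniform random points converges to the average
    value of f over the box, so volume * mean estimates the integral."""
    rng = rng or np.random.default_rng()
    x = rng.uniform(lo, hi, (n_samples, len(lo)))
    volume = np.prod(np.asarray(hi) - np.asarray(lo))
    return volume * np.mean([f(xi) for xi in x])

# Example: integrate f(x, y) = exp(-(x^2 + y^2)) over [-3, 3]^2;
# the true value is close to pi, since the Gaussian tails are tiny.
est = mc_integrate(lambda p: np.exp(-np.sum(p**2)), lo=[-3, -3], hi=[3, 3])
```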

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/0nmyk60h6xtfa335x2bpd/IEE598-Lecture5C-2025-03-25-Toward_Simulated_Annealing-Introduction_to_Boltzmann_sampling_and_Monte_Carlo_integration-Notes.pdf?rlkey=ig1tbmmyenqpt7b6vwtnzbkem&dl=0



Thursday, March 20, 2025

Lecture 5B (2025-03-20): From Maximum Entropy (MaxEnt) Methods Toward Optimization by Simulated Annealing

In this lecture, we review the concept of entropy from physics and introduce information-theoretic connections to the increase in entropy and the erasure of latent information. We use that to motivate why maximum entropy probability distributions are the "least biased" choices given a set of constraints. We give a few examples of maximum entropy distributions and take a little time to introduce Maximum Entropy (MaxEnt) methods in machine learning and data science that have been successful in Natural Language Processing (NLP) as well as many other fields, from archeology to ecology. Equipped with maximum entropy, we introduce the Boltzmann distribution as a key tool in the implementation of Simulated Annealing, and we highlight connections between the Boltzmann distribution and "softmax" (as well as "softmin") from Machine Learning/AI. We will conclude our introduction of Simulated Annealing in the next lecture.
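To make the Boltzmann–softmax connection concrete, here is a small Python sketch (the energy values are arbitrary examples): the Boltzmann distribution, with p_i proportional to exp(-E_i/T), is a temperature-scaled "softmin" over energies, and T interpolates between a nearly uniform distribution (high T) and nearly all mass on the minimum-energy state (low T):

```python
import numpy as np

def boltzmann(energies, T):
    """Boltzmann distribution p_i = exp(-E_i / T) / Z, i.e., a softmin
    over energies (equivalently, a softmax applied to -E / T)."""
    z = np.exp(-np.asarray(energies) / T)
    return z / z.sum()

E = np.array([1.0, 2.0, 4.0])   # arbitrary example energies
print(boltzmann(E, T=100.0))    # high T: nearly uniform (explorer)
print(boltzmann(E, T=0.1))      # low T: mass on the lowest-energy state (exploiter)
```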

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/dv7gei6rbb0yltqekmhrb/IEE598-Lecture5B-2025-03-20-From_Maximum_Entropy_MaxEnt_Methods_Toward_Optimization_by_Simulated_Annealing-Notes.pdf?rlkey=jaifjaxazwn3u0efui0lthlky&dl=0



Tuesday, March 18, 2025

Lecture 5A (2025-03-18): Introduction to Simulated Annealing and Entropy

This lecture introduces Simulated Annealing (SA) and the physical background from thermodynamics, statistical mechanics, and information theory needed to understand its function. After describing a general outline of SA, we pivot to discussing entropy (in terms of microstates and macrostates) and why systems move toward macrostates (distributions of microstates) with higher entropy. We link this to bias in stochastic modeling, which sets us up to start discussing Maximum Entropy (MaxEnt) methods in the next lecture. We will eventually bring this discussion back to Simulated Annealing after first covering both MaxEnt methods and Markov Chain Monte Carlo (MCMC) methods.
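As a small illustration of the microstate/macrostate distinction (our own example, not from the lecture): for N fair coin flips, each of the 2^N sequences is a microstate and each head count is a macrostate, and the macrostates near N/2 heads contain overwhelmingly more microstates, which is why an unconstrained system is overwhelmingly likely to be found in a high-entropy macrostate:

```python
from math import comb

N = 20  # number of coin flips (each of the 2^N sequences is a microstate)
for heads in range(N + 1):           # each head count is a macrostate
    multiplicity = comb(N, heads)    # number of microstates in this macrostate
    print(f"{heads:2d} heads: {multiplicity:7d} microstates "
          f"(probability {multiplicity / 2**N:.4f})")
# The multiplicity peaks sharply at N/2 heads: high-entropy macrostates dominate.
```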

Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/dabgveu9swzj6d4dcveza/IEE598-Lecture5A-2025-03-18-Introduction_to_Simulated_Annealing_and_Entropy-Notes.pdf?rlkey=3dhycmnzij3fdd3fejes1wlvk&dl=0



Tuesday, March 4, 2025

Lecture 4C (2025-03-04): Niching Methods in Multi-Modal Optimization

In this lecture, we cover popular niching/niche-preserving methods for multi-modal optimization (which are also used in multi-objective optimization). These methods allow multi-modal optimization techniques to find as many local fitness peaks as possible while limiting over-representation within each peak (and maintaining high selective pressure within it). The niching methods covered include fitness sharing (including variations inspired by k-means clustering), clearing and crowding methods, and the popular restricted tournament selection (RTS). Throughout the lecture, the computational complexity of the different methods is highlighted so that, at the end of the lecture (and in more detail in the PDF notes linked below), the costs and benefits of each method can be compared.
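As a sketch of the fitness sharing idea (our illustration of Goldberg-style sharing with a triangular kernel; the niche radius and exponent are illustrative assumptions), each individual's raw fitness is divided by its niche count, so individuals on densely populated peaks have their fitness discounted:

```python
import numpy as np

def shared_fitness(fitness, positions, sigma_share=1.0, alpha=1.0):
    """Fitness sharing sketch (maximization): divide raw fitness by the
    niche count sum_j sh(d_ij), where sh(d) = 1 - (d / sigma)**alpha for
    d < sigma and 0 otherwise. Crowded peaks get discounted fitness."""
    positions = np.asarray(positions, dtype=float)
    # Pairwise Euclidean distances: the O(n^2) cost here is the kind of
    # computational complexity highlighted throughout the lecture.
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    sh = np.where(d < sigma_share, 1.0 - (d / sigma_share) ** alpha, 0.0)
    niche_counts = sh.sum(axis=1)   # includes sh(0) = 1 for the individual itself
    return np.asarray(fitness) / niche_counts
```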

Whiteboard notes for this lecture (which include a list of computational complexities that was not covered in detail during the lecture) are available at:
https://www.dropbox.com/scl/fi/gdtk02v4gy9boxgj0vdbf/IEE598-Lecture4C-2025-03-04-Niching_Methods_in_Multi-Modal_Optimization-Notes.pdf?rlkey=gvfnxo20j9m76d4hfgvyhluzm&dl=0


