In this lecture, we wrap up our coverage of Simulated Annealing (SA) and then shift to a new unit on Swarm Intelligence, with our first topic in that unit being Ant Colony Optimization (ACO). We start the lecture with a description of Monte Carlo sampling, which leverages the Law of Large Numbers to provide a method for approximating integrals over high-dimensional spaces using random sampling. This allows us to introduce the Metropolis–Hastings algorithm, a Markov Chain Monte Carlo (MCMC) approach to sampling from arbitrary distributions (the original Metropolis algorithm targeted the Boltzmann distribution, making it purely a Boltzmann sampler). We then show how the Metropolis algorithm is used within Simulated Annealing, which combines it with an annealing schedule that turns an MCMC sampler into an optimizer, one that starts out as an explorer and finishes as an exploiter. Simulated Annealing also helps import conceptual frameworks from physics (specifically statistical mechanics) into optimization and, more broadly, Machine Learning. After finishing our coverage of SA, we introduce Ant System (AS), an early version of ACO, which is a combinatorial optimization metaheuristic based on the trail-laying and recruitment behaviors of some ant species. We will conclude ACO next time and then move on to other Swarm Intelligence algorithms.
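To make the connection between the Metropolis sampler and Simulated Annealing concrete, here is a minimal sketch (not the pseudocode from lecture): it uses the Metropolis acceptance rule inside a geometric cooling schedule, and the objective, neighborhood move, and parameter values are illustrative assumptions only.

```python
import math
import random

def simulated_annealing(objective, neighbor, x0,
                        T0=1.0, alpha=0.95, steps_per_temp=100, T_min=1e-3):
    """Minimize `objective` using Metropolis acceptance under geometric cooling."""
    x, fx = x0, objective(x0)
    best, f_best = x, fx
    T = T0
    while T > T_min:
        for _ in range(steps_per_temp):
            # Propose a candidate from the neighborhood of the current state
            y = neighbor(x)
            fy = objective(y)
            # Metropolis rule: always accept improvements; accept uphill moves
            # with Boltzmann probability exp(-(fy - fx)/T)
            if fy <= fx or random.random() < math.exp(-(fy - fx) / T):
                x, fx = y, fy
                if fx < f_best:
                    best, f_best = x, fx
        # Geometric annealing: at high T the chain behaves like an explorer
        # (near-random walk); at low T it behaves like an exploiter (near-greedy)
        T *= alpha
    return best, f_best

# Toy usage (illustrative): minimize a 1-D multimodal function
f = lambda x: x**2 + 10 * math.sin(3 * x)
step = lambda x: x + random.gauss(0, 0.5)
x_star, f_star = simulated_annealing(f, step, x0=random.uniform(-5, 5))
```

If the annealing schedule is removed and the temperature held fixed, the same loop reverts to a plain Metropolis (Boltzmann) sampler, which is exactly the relationship discussed in lecture.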
Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/b0wmj4a3lnbgyrs8rmts5/IEE598-Lecture5D_6A-2025-03-27-Simulated_Annealiung_Wrap_Up_and_Distributed_AI_and_Swarm_Intelligence_Part_1_Introduction_to_Ant_Colony_Optimization_ACO-Notes.pdf?rlkey=y49aa1x0oi0k53u5x5tv9kngs&dl=0