In this lecture, we start by reviewing the formal definition of Shannon entropy/information in both its discrete and continuous (differential entropy) forms. We then transition to discussing several different MaxEnt distributions and the constraints with which they are associated. Ultimately, this brings us to the Boltzmann–Gibbs distribution and several of its applications. Throughout the lecture, different interactive demonstrations are used (and can be accessed directly at the links below).
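To make the connection between entropy and the Boltzmann–Gibbs (softmax) form concrete, here is a minimal sketch in Python; the function names and the choice of natural logarithm (entropy in nats) are my own illustrative assumptions, not from the lecture:

```python
import math

def shannon_entropy(p):
    """Discrete Shannon entropy in nats: H(p) = -sum_i p_i ln p_i."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def boltzmann_gibbs(energies, T=1.0):
    """Boltzmann–Gibbs/softmax weights proportional to exp(-E_i/T),
    normalized to sum to 1. Lower T concentrates probability on the
    lowest-energy state; higher T pushes toward the uniform distribution."""
    weights = [math.exp(-e / T) for e in energies]
    z = sum(weights)  # partition function
    return [w / z for w in weights]

# The uniform distribution maximizes entropy over 4 outcomes: H = ln 4
print(shannon_entropy([0.25] * 4))  # ≈ 1.386
```

Note how, consistent with the MaxEnt view, raising the temperature `T` in `boltzmann_gibbs` moves the distribution toward uniform (maximum entropy), which is the behavior the softmax visualizer linked below demonstrates interactively.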
Demonstrations referenced in this lecture can be found at:
Softmax Visualizer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/softmax/softmax_temperature_explorer.html
MaxEnt Explorer (SDM and NLP): https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/maxent/maxent_demo.html
Boltzmann Distribution via Random Exchanges of Conserved Quantity: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/boltzmann_maxent/boltzmann_maxent_random_exchange.html
Beta Distribution Explorer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/boltzmann_maxent/beta_spacings.html
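The random-exchange demonstration above can be sketched in a few lines of Python: agents repeatedly pass one unit of a conserved quantity to a randomly chosen partner, and the resulting histogram of holdings approaches the exponential (Boltzmann) form. The parameter values and function name here are illustrative assumptions, not taken from the demo itself:

```python
import random

def random_exchange(n_agents=1000, total=10000, steps=200000, seed=0):
    """Simulate random pairwise exchanges of a conserved quantity.

    Each step, one unit moves from a random donor (if it has any)
    to a random recipient. The total is conserved throughout, yet
    the equilibrium distribution of holdings is approximately
    exponential (Boltzmann), not uniform.
    """
    rng = random.Random(seed)
    holdings = [total // n_agents] * n_agents  # start everyone equal
    for _ in range(steps):
        i = rng.randrange(n_agents)
        j = rng.randrange(n_agents)
        if holdings[i] > 0:
            holdings[i] -= 1
            holdings[j] += 1
    return holdings
```

Starting from perfect equality, most agents end up below the mean while a few hold much more, exactly the heavy-tailed exponential shape that maximizes entropy subject to a fixed total (mean) constraint.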
Whiteboard notes for this lecture can be found at:
https://www.dropbox.com/scl/fi/zwdrab929yg47jm67vope/IEE598-Lecture5C-2026-03-26-Boltzmann-Gibbs_and_other_MaxEnt_Distributions-Notes.pdf?rlkey=3zka62o08gnw8z38r7lknjsqf&dl=0