Lecture 7D (2022-04-05): CNNs, Insect Brains, More Complex Neural Networks (TDNNs and RNNs)
Tuesday, April 5, 2022
Archived lectures from a graduate course on nature-inspired metaheuristics given at Arizona State University by Ted Pavlic.

In this lecture, we complete our discussion of how backpropagation allows gradient descent to train deep neural networks (feed-forward, multi-layer perceptrons in general). We then pivot to more regularized feed-forward networks, such as convolutional neural networks (CNNs), which combine convolutional layers with pooling layers and thereby simplify training while producing a low-dimensional feature set (relative to the dimensionality of the input). After a brief discussion of the feed-forward architecture of the insect/pancrustacean brain, we shift to time-delay neural networks (TDNNs) as an entry point into recurrent neural networks (RNNs) and reservoir machines, which we will pick up next time.
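For a concrete picture of the dimensionality reduction mentioned above, here is a minimal NumPy sketch of a single convolutional layer followed by max pooling. The 28×28 input, 3×3 kernel, ReLU nonlinearity, and 2×2 pooling are illustrative assumptions, not details taken from the lecture:

```python
import numpy as np

def conv2d_valid(x, k):
    """'Valid' 2-D convolution (really cross-correlation, as in most CNN libraries)."""
    kh, kw = k.shape
    out_h, out_w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling; keeps the strongest response per patch."""
    h, w = (x.shape[0] // size) * size, (x.shape[1] // size) * size
    x = x[:h, :w].reshape(h // size, size, w // size, size)
    return x.max(axis=(1, 3))

rng = np.random.default_rng(0)
image = rng.standard_normal((28, 28))   # e.g., a 28x28 grayscale input
kernel = rng.standard_normal((3, 3))    # one filter, shared across the image
feature_map = np.maximum(conv2d_valid(image, kernel), 0.0)  # ReLU
pooled = max_pool(feature_map, size=2)

print(image.shape, "->", feature_map.shape, "->", pooled.shape)
# (28, 28) -> (26, 26) -> (13, 13)
```

The weight sharing in the convolution (only 9 trainable parameters reused across the whole image, versus 784 weights for a dense layer over the raw pixels) is one way to read the "more regularized" framing above: fewer free parameters to train, and a smaller feature set out the other end.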
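Likewise, the TDNN idea of handling time with a purely feed-forward structure can be sketched as a tapped delay line: each output depends only on a fixed window of recent samples, which amounts to a 1-D convolution over time. The window length and tap weights below are made-up values for illustration:

```python
import numpy as np

def tdnn_layer(signal, weights, bias=0.0):
    """One TDNN unit: a feed-forward neuron applied to a sliding window of
    delayed inputs, i.e., a 1-D convolution over time plus a nonlinearity."""
    d = len(weights)  # number of taps: current sample plus d-1 delayed copies
    outputs = []
    for t in range(d - 1, len(signal)):
        window = signal[t - d + 1 : t + 1]  # [x(t-d+1), ..., x(t)]
        outputs.append(np.tanh(np.dot(weights, window) + bias))
    return np.array(outputs)

t = np.linspace(0, 4 * np.pi, 200)
signal = np.sin(t)
weights = np.array([0.5, -1.0, 0.5])  # illustrative taps, not learned values
y = tdnn_layer(signal, weights)
print(len(signal), "->", len(y))  # 200 -> 198: one output per full window
```

Because the window is fixed, a TDNN can only remember a bounded stretch of the past; RNNs and reservoir machines relax that limit by feeding state back into the network, which is where the next lecture picks up.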
Labels: podcast
Location: Tempe, AZ, USA