Neural Networks Fall 2024
Farhad Kamangar
kamangar@uta.edu
Topics for Exam 2
From Textbook:
Neural Network Design (2nd Edition), Martin T. Hagan, Chapter 4 (excluding proof of convergence).
Neural Network Design (2nd Edition), Martin T. Hagan, Chapter 8 (pages 8-1 to 8-12).
Neural Network Design (2nd Edition), Martin T. Hagan, Chapter 9 (pages 9-1 to 9-10).
What you need to know (General background)
Python
Numpy:
general concepts and functions (covered in numpy tutorial)
Vector and matrix operations
Solving linear equations
Understanding equations of lines and planes in multi-dimensional space (hyperplanes); a brief NumPy sketch follows this list
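A minimal NumPy sketch of the background items above (matrix operations, solving a linear system, and testing a point against a hyperplane); all matrices and vectors are hypothetical values chosen for illustration.

import numpy as np

# Vector and matrix operations
W = np.array([[1.0, 2.0], [3.0, 4.0]])   # a 2x2 matrix (hypothetical)
p = np.array([0.5, -1.0])                # an input vector
a = W @ p                                # matrix-vector product

# Solving the linear system W x = b
b = np.array([1.0, 0.0])
x = np.linalg.solve(W, b)

# A hyperplane in R^n is the set of points p satisfying w . p + b0 = 0;
# the sign of w . p + b0 tells which side of the hyperplane p lies on.
w = np.array([1.0, -2.0])
b0 = 0.5
side = np.sign(w @ p + b0)
print(a, x, side)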
What you need to know (Textbook)
Neuron Model and Network Architectures
Single Neuron
Activation functions
Layer of Neurons
Weight Matrices
Biases
Perceptron
Perceptron architecture
Decision boundary and its relation to hyperplanes
Multiple-neuron perceptron (a training sketch follows this list)
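A minimal sketch of a single-neuron perceptron trained with the perceptron learning rule from Chapter 4 (w_new = w_old + e p, b_new = b_old + e, with error e = t - a); the toy dataset is hypothetical and linearly separable.

import numpy as np

def hardlim(n):
    # Hard-limit activation: 1 if n >= 0, else 0
    return 1.0 if n >= 0 else 0.0

# Toy dataset (hypothetical): inputs as columns of P, targets in T
P = np.array([[2.0, 1.0, -2.0, -1.0],
              [2.0, -2.0, 2.0, 1.0]])
T = np.array([0.0, 1.0, 0.0, 1.0])

w = np.zeros(2)
b = 0.0
for epoch in range(10):
    for p, t in zip(P.T, T):
        a = hardlim(w @ p + b)   # neuron output
        e = t - a                # perceptron error
        w = w + e * p            # perceptron learning rule
        b = b + e

# The decision boundary w . p + b = 0 is a hyperplane in input space.
print(w, b)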
Performance Surfaces and Optimum Points
Gradient and Hessian
Taylor Series
Directional Derivatives
Minima and maxima
Sufficient and necessary conditions for optimality
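A sketch, assuming a hypothetical quadratic performance index F(x) = 0.5 x^T A x + d^T x + c, of the gradient, Hessian, directional derivative, stationary point, optimality test, and second-order Taylor expansion covered in Chapter 8.

import numpy as np

# Quadratic performance index (hypothetical values)
A = np.array([[2.0, 1.0], [1.0, 2.0]])
d = np.array([-1.0, 1.0])
c = 0.0

def F(x):
    return 0.5 * x @ A @ x + d @ x + c

def grad(x):
    return A @ x + d          # gradient of a quadratic

H = A                         # the Hessian of a quadratic is constant

# Necessary condition: the gradient is zero at the stationary point x* = -A^{-1} d
x_star = np.linalg.solve(A, -d)

# Sufficient condition for a strong minimum: the Hessian is positive definite
eigvals = np.linalg.eigvalsh(H)
print(x_star, eigvals, bool(np.all(eigvals > 0)))

# Directional derivative of F at x0 along a unit direction u: grad(x0) . u
x0 = np.zeros(2)
u = np.array([1.0, 0.0])
print(grad(x0) @ u)

# The second-order Taylor expansion about x0 is exact for a quadratic
dx = np.array([0.3, -0.2])
taylor = F(x0) + grad(x0) @ dx + 0.5 * dx @ H @ dx
print(np.isclose(taylor, F(x0 + dx)))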
Performance Optimization
Steepest Descent
Minimizing along a line (illustrated in the sketch below)
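A sketch of steepest descent with exact minimization along a line, on a hypothetical quadratic; for direction p_k = -g_k the optimal step from Chapter 9 is alpha_k = (g_k^T g_k) / (g_k^T A g_k).

import numpy as np

# F(x) = 0.5 x^T A x + d^T x with hypothetical A and d
A = np.array([[2.0, 0.0], [0.0, 50.0]])
d = np.array([-2.0, 0.0])

x = np.array([3.0, 1.0])
for k in range(50):
    g = A @ x + d                       # gradient at x_k
    if np.linalg.norm(g) < 1e-10:       # stop when the gradient vanishes
        break
    alpha = (g @ g) / (g @ A @ g)       # exact line-minimization step size
    x = x - alpha * g                   # steepest-descent update

print(x)   # approaches the minimum x* = -A^{-1} d = [1, 0]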
What you need to know (Supplementary)
Understanding Computational Graphs and their forward and backward passes
Loss functions: MSE, MAE, hinge, and cross-entropy (NumPy examples below)
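Minimal NumPy versions of these loss functions, plus a hand-worked forward and backward pass through a tiny computational graph; all targets, probabilities, and scores are hypothetical.

import numpy as np

y = np.array([1.0, 0.0, 1.0])     # targets (hypothetical)
p = np.array([0.9, 0.2, 0.6])     # predicted probabilities (hypothetical)

mse = np.mean((y - p) ** 2)       # mean squared error
mae = np.mean(np.abs(y - p))      # mean absolute error

# Hinge loss uses +/-1 labels and raw (unsquashed) scores
t = 2 * y - 1                     # map {0,1} -> {-1,+1}
s = np.array([0.8, -0.3, 0.1])    # raw scores (hypothetical)
hinge = np.mean(np.maximum(0.0, 1.0 - t * s))

# Binary cross-entropy (eps guards against log(0))
eps = 1e-12
bce = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
print(mse, mae, hinge, bce)

# Forward and backward pass through the graph L = (w*x - y)^2
x_, w_, y_ = 1.5, 0.8, 1.0
n = w_ * x_                       # forward: multiply node
e = n - y_                        # forward: subtract node
L = e ** 2                        # forward: square node
dL_de = 2 * e                     # backward: d(e^2)/de
dL_dn = dL_de * 1.0               # backward through the subtraction
dL_dw = dL_dn * x_                # backward through the multiplication
print(L, dL_dw)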
PyTorch:
Creating and manipulating tensors
Creating multi-layer neural networks
Calculation of outputs, errors, and gradients
Training and adjusting weights
Performance measures
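A minimal PyTorch training sketch touching each item above; the layer sizes, data, and hyperparameters are hypothetical.

import torch
import torch.nn as nn

torch.manual_seed(0)

# A small two-layer network (hypothetical sizes)
model = nn.Sequential(
    nn.Linear(4, 8),
    nn.ReLU(),
    nn.Linear(8, 1),
)
loss_fn = nn.MSELoss()                 # performance measure
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

X = torch.randn(16, 4)                 # 16 samples, 4 features (hypothetical)
y = torch.randn(16, 1)

for epoch in range(100):
    optimizer.zero_grad()              # clear accumulated gradients
    out = model(X)                     # forward pass: calculate outputs
    loss = loss_fn(out, y)             # calculate the error
    loss.backward()                    # gradients via autograd
    optimizer.step()                   # adjust the weights

print(loss.item())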
Convolutional Neural Networks (CNN):
Understanding convolutional filters, padding, and stride.
Creating convolutional, pooling, flattening, and fully connected layers
Determining the shape of the weight matrix
Determining the shape of the output for each layer
Determining the number of parameters (worked example below)
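A worked example of the shape and parameter-count formulas (output size = floor((N + 2*padding - kernel) / stride) + 1, and conv parameters = (F*F*C_in + 1)*C_out), cross-checked against PyTorch; the layer sizes are hypothetical.

import torch
import torch.nn as nn

# Conv output size: floor((N + 2*padding - kernel) / stride) + 1
N, F, P, S = 32, 5, 2, 1
out_size = (N + 2 * P - F) // S + 1                # 32 here ("same" spatial size)

conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=F, stride=S, padding=P)
pool = nn.MaxPool2d(2)                             # halves the spatial size
flat = nn.Flatten()
fc = nn.Linear(16 * 16 * 16, 10)                   # 16 channels x 16 x 16 after pooling

x = torch.randn(1, 3, N, N)
h = pool(conv(x))
print(out_size, h.shape)                           # 32, torch.Size([1, 16, 16, 16])
print(fc(flat(h)).shape)                           # torch.Size([1, 10])

# Parameter count of the conv layer: (F*F*C_in + 1) * C_out
print((F * F * 3 + 1) * 16, sum(p.numel() for p in conv.parameters()))  # 1216, 1216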
Autoencoders
Encoder and decoder
Latent space
Variational autoencoders
Implementation and training
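A minimal plain-autoencoder sketch with hypothetical sizes; as the comment notes, a variational autoencoder instead has the encoder output a mean and log-variance, samples the latent vector with the reparameterization trick, and adds a KL term to the loss.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Encoder compresses to an 8-dimensional latent space; decoder reconstructs.
# A VAE would output (mu, log_var), sample z = mu + exp(0.5*log_var)*eps,
# and add a KL-divergence term to the reconstruction loss.
encoder = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 8))
decoder = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 784))

opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.rand(32, 784)            # a batch of flattened images (hypothetical)
for step in range(200):
    z = encoder(x)                 # latent representation
    x_hat = decoder(z)             # reconstruction
    loss = loss_fn(x_hat, x)       # reconstruction error
    opt.zero_grad()
    loss.backward()
    opt.step()

print(loss.item())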
Generative Adversarial Network (GAN)
Discriminator and generator
Latent space
Calculation of loss for discriminator and generator
Implementation and training with numpy and PyTorch
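A minimal GAN sketch showing how the discriminator and generator losses are computed (here in the common binary cross-entropy form, with the non-saturating generator objective); the networks and "real" data are hypothetical.

import torch
import torch.nn as nn

torch.manual_seed(0)

G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))               # generator
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())  # discriminator

bce = nn.BCELoss()
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)

real = torch.randn(64, 2) + 3.0            # "real" samples (hypothetical)
for step in range(100):
    z = torch.randn(64, 16)                # latent vectors
    fake = G(z)

    # Discriminator loss: push D(real) toward 1 and D(fake) toward 0
    d_loss = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator loss: fool D into labeling fakes as real
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print(d_loss.item(), g_loss.item())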
Recurrent Neural Networks (RNN)
Structure of RNN and LSTM
Hidden state (hidden nodes)
Weight matrices and calculation of hidden state and output
Training and adjusting weights (matrix form)
Time sequences
Implementation using numpy or PyTorch (see the NumPy sketch below)
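A NumPy sketch of the vanilla RNN recurrence in matrix form, h_t = tanh(W_xh x_t + W_hh h_{t-1} + b_h) and y_t = W_hy h_t + b_y, with hypothetical sizes; an LSTM replaces the single tanh update with gated cell-state and hidden-state updates.

import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden, n_out, T = 3, 5, 2, 4            # hypothetical sizes, T time steps

W_xh = rng.normal(0, 0.1, (n_hidden, n_in))      # input-to-hidden weights
W_hh = rng.normal(0, 0.1, (n_hidden, n_hidden))  # hidden-to-hidden weights
W_hy = rng.normal(0, 0.1, (n_out, n_hidden))     # hidden-to-output weights
b_h = np.zeros(n_hidden)
b_y = np.zeros(n_out)

xs = rng.normal(size=(T, n_in))            # a time sequence of T input vectors
h = np.zeros(n_hidden)                     # initial hidden state
for t in range(T):
    h = np.tanh(W_xh @ xs[t] + W_hh @ h + b_h)   # update the hidden state
    y = W_hy @ h + b_y                           # output at time t
    print(t, y)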
Transformers
Multi-Head attention
Feed forward network structure
Queries, Keys, and Values
Positional Encoding
Decoder structure
Masked multi-head attention (an attention sketch follows this list)
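A NumPy sketch of scaled dot-product attention, the core operation inside multi-head attention, including the causal mask used by the decoder's masked attention; the sizes and inputs are hypothetical.

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V, mask=None):
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    if mask is not None:
        scores = np.where(mask, scores, -1e9)   # masked positions get ~0 weight
    return softmax(scores) @ V

rng = np.random.default_rng(0)
T, d_k = 4, 8                         # sequence length and key dimension (hypothetical)
Q = rng.normal(size=(T, d_k))         # queries
K = rng.normal(size=(T, d_k))         # keys
V = rng.normal(size=(T, d_k))         # values

# Causal mask for masked (decoder) self-attention:
# position t may attend only to positions <= t
causal = np.tril(np.ones((T, T), dtype=bool))
print(attention(Q, K, V, mask=causal).shape)    # (4, 8)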
Stable Diffusion
Overview of diffusion models: Forward and reverse diffusion processes.
Progressively adding Gaussian noise and learning to remove (denoise) it.
The role of Gaussian noise in generating latent representations.
Sampling techniques for generating outputs from noise.
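A NumPy sketch of the closed-form forward diffusion step, x_t = sqrt(alpha_bar_t) x_0 + sqrt(1 - alpha_bar_t) eps with eps ~ N(0, I); the noise schedule and data are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

# Noise schedule: beta_1..beta_T, with alpha_bar_t = prod_{i<=t} (1 - beta_i)
T = 1000
betas = np.linspace(1e-4, 0.02, T)     # linear schedule (a common choice)
alpha_bar = np.cumprod(1.0 - betas)

x0 = rng.normal(size=8)                # a "clean" data vector (hypothetical)
t = 500
eps = rng.normal(size=x0.shape)        # Gaussian noise

# Jump directly to the noisy sample at step t
x_t = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

# The reverse process trains a network to predict eps from (x_t, t), then
# samples by progressively denoising from pure noise x_T down to x_0.
print(alpha_bar[t], x_t[:3])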
Coding
Write or complete code related to:
Numpy or PyTorch.
Topics covered in lectures and assignments (including CNNs, RNNs, GANs, etc.).
Questions will directly test your ability to implement core concepts and structures.
Last updated: 2024-11-26