Neural Network Design (2nd Edition), Martin T. Hagan, Chapter 4 (excluding proof of convergence).
Neural Network Design (2nd Edition), Martin T. Hagan, Chapter 8 (pages 8-1 to 8-12).
Neural Network Design (2nd Edition), Martin T. Hagan, Chapter 9 (pages 9-1 to 9-10).
What you need to know (General background).
Python
NumPy: general concepts and functions (covered in the NumPy tutorial)
Vector and matrix operations
Operations involving vectors and matrices
Solving linear equations
Understanding equations of lines and planes in multi-dimensional space (hyperplanes)
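The linear-equations and hyperplane items above can be sketched in NumPy. This is a minimal example with illustrative numbers (not from the course): each row of A together with the matching entry of b defines a line (a hyperplane in 2-D), and solving the system finds their intersection.

```python
import numpy as np

# Two hyperplanes (lines in 2-D): 1*x + 2*y = 5 and 3*x + 4*y = 6.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
b = np.array([5.0, 6.0])

# Solve the linear system A x = b (their intersection point).
x = np.linalg.solve(A, b)
```

`np.linalg.solve` is preferred over computing an explicit inverse: it is both faster and numerically more stable.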
What you need to know (Textbook).
Neuron Model and Network Architectures
Single Neuron
Activation functions
Layer of Neurons
Weight Matrix
Biases
Perceptron
Perceptron architecture
Decision boundary and its relation to hyperplanes
Multiple-neuron perceptron
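The perceptron items above can be sketched in NumPy. This is a minimal, hypothetical example (AND gate targets, not course data) of the perceptron learning rule with a hardlim activation; the learned boundary is the hyperplane W·p + b = 0.

```python
import numpy as np

# Toy AND dataset (hypothetical): inputs and 0/1 targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0.0, 0.0, 0.0, 1.0])

W = np.zeros(2)
b = 0.0
for _ in range(20):                           # a few passes over the data
    for p, target in zip(X, t):
        a = 1.0 if W @ p + b >= 0 else 0.0    # hardlim activation
        e = target - a                        # error
        W += e * p                            # perceptron rule: W_new = W_old + e p
        b += e                                # b_new = b_old + e
```

Because AND is linearly separable, the rule converges; the decision boundary W·p + b = 0 then separates the single "1" point from the rest.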
Performance Surfaces and Optimum Points
Gradient and Hessian
Taylor Series
Directional Derivatives
Minima and maxima
Necessary and sufficient conditions for optimality
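The gradient/Hessian and optimality items above can be sketched on a quadratic. For F(x) = ½ xᵀA x + dᵀx + c, the gradient is A x + d and the Hessian is A; the necessary condition for a minimum is a zero gradient, and a positive-definite Hessian is sufficient for a strong minimum. The numbers below are illustrative, not from the course.

```python
import numpy as np

# Quadratic F(x) = 0.5 x^T A x + d^T x (illustrative values).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # Hessian of F
d = np.array([-1.0, -1.0])

# Necessary condition: gradient A x* + d = 0 at the stationary point.
x_star = np.linalg.solve(A, -d)

# Sufficient condition: Hessian eigenvalues all positive (positive definite).
eigvals = np.linalg.eigvalsh(A)
is_minimum = np.all(eigvals > 0)
```

Here the eigenvalues are 1 and 3, so the stationary point is a strong minimum.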
Performance Optimization
Steepest Descent
Minimizing along a line
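The steepest-descent and line-minimization items above can be sketched on a quadratic (illustrative numbers). The update is x ← x − αg with g = A x + d; for a quadratic, minimizing along the search line gives the exact step α = (gᵀg)/(gᵀA g).

```python
import numpy as np

# Quadratic F(x) = 0.5 x^T A x + d^T x with minimum where A x = -d.
A = np.array([[2.0, 0.0],
              [0.0, 4.0]])
d = np.array([-2.0, -4.0])

x = np.array([0.0, 0.0])
for _ in range(50):
    g = A @ x + d                       # gradient at the current point
    if np.linalg.norm(g) < 1e-12:       # already at the minimum
        break
    alpha = (g @ g) / (g @ A @ g)       # exact minimization along the line
    x = x - alpha * g                   # steepest-descent step
```

With the exact line search, successive search directions are orthogonal, which produces the characteristic zig-zag path toward the minimum at (1, 1).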
What you need to know (supplementary).
Understanding computational graphs and their forward and backward passes
Loss functions: MSE, MAE, hinge, and cross-entropy
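The four losses listed above can be sketched in NumPy with toy values (illustrative, not from the course). Note that hinge loss uses ±1 labels with raw scores, while binary cross-entropy uses 0/1 labels with probabilities.

```python
import numpy as np

# 0/1 targets and predicted probabilities (toy values).
y_true = np.array([1.0, 0.0, 1.0])
y_pred = np.array([0.9, 0.2, 0.6])

mse = np.mean((y_true - y_pred) ** 2)       # mean squared error
mae = np.mean(np.abs(y_true - y_pred))      # mean absolute error

# Hinge loss: +/-1 labels with raw (unsquashed) scores.
t = np.array([1.0, -1.0, 1.0])
scores = np.array([0.8, -0.5, 0.3])
hinge = np.mean(np.maximum(0.0, 1.0 - t * scores))

# Binary cross-entropy; eps guards against log(0).
eps = 1e-12
bce = -np.mean(y_true * np.log(y_pred + eps)
               + (1 - y_true) * np.log(1 - y_pred + eps))
```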
TensorFlow:
Creating and manipulating tensors
Creating multi-layer neural networks
Calculation of outputs, errors, and gradients
Training and adjusting weights using "GradientTape"
Performance measures
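The GradientTape items above can be sketched as one training loop. This is a minimal example (toy data, fitting y = 2x with a single weight; values are illustrative): the tape records the forward pass, `tape.gradient` performs the backward pass, and the weight is adjusted by steepest descent.

```python
import tensorflow as tf

# Toy data: y = 2x, so the weight should converge to 2.
x = tf.constant([1.0, 2.0, 3.0])
y = tf.constant([2.0, 4.0, 6.0])

w = tf.Variable(0.0)
lr = 0.05
for _ in range(100):
    with tf.GradientTape() as tape:
        y_hat = w * x                              # forward pass
        loss = tf.reduce_mean((y - y_hat) ** 2)    # MSE performance measure
    grad = tape.gradient(loss, w)                  # backward pass
    w.assign_sub(lr * grad)                        # steepest-descent update
```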
Keras:
Creating multi-layer neural networks
Calculation of outputs and gradients
Training and adjusting weights
Understanding different loss functions
Setting the loss function for each layer
Understanding the metrics for neural networks
Compiling, training, and evaluating a neural network
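The Keras items above can be sketched end-to-end: build, compile, train, and evaluate. The dataset (XOR) and layer sizes below are hypothetical, chosen only to keep the example self-contained.

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

# Toy XOR dataset (hypothetical).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
y = np.array([0, 1, 1, 0], dtype=np.float32)

# Two-layer network: 8 tanh hidden units, 1 sigmoid output.
model = keras.Sequential([
    keras.Input(shape=(2,)),
    keras.layers.Dense(8, activation="tanh"),
    keras.layers.Dense(1, activation="sigmoid"),
])

# Compile sets the loss and metrics; fit trains; evaluate reports them.
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=100, verbose=0)
loss, acc = model.evaluate(X, y, verbose=0)
```

Parameter count check: the hidden layer has 2·8 + 8 = 24 parameters and the output layer 8 + 1 = 9, for 33 in total.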
Convolutional Neural Networks (CNN):
Understanding convolutional filters, padding, and stride
Creating convolutional, pooling, flattening, and fully connected layers
Determining the shape of the weight matrix
Determining the shape of the output for each layer
Determining the number of parameters
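The shape and parameter-count items above come down to two formulas, sketched below with illustrative layer sizes: the spatial output size is ⌊(n − k + 2p)/s⌋ + 1, and a conv layer has (k·k·c_in + 1)·c_out parameters (weights plus one bias per filter).

```python
def conv_output_size(n_in, kernel, padding, stride):
    """Spatial output size: floor((n_in - kernel + 2*padding) / stride) + 1."""
    return (n_in - kernel + 2 * padding) // stride + 1

def conv_param_count(kernel, c_in, c_out):
    """(kernel*kernel*c_in) weights plus 1 bias per filter, times c_out filters."""
    return (kernel * kernel * c_in + 1) * c_out

# Example: 28x28x1 input, 32 filters of 3x3, stride 1, no padding.
out = conv_output_size(28, 3, 0, 1)     # spatial size of the output map
params = conv_param_count(3, 1, 32)     # parameters in the conv layer
```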
Autoencoders
Encoder and decoder
Latent space
Variational autoencoders
Implementation and training
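The variational-autoencoder and latent-space items above hinge on the reparameterization trick, sketched here in NumPy with hypothetical encoder outputs: the encoder produces a mean and log-variance per latent dimension, and the latent sample z = μ + σ·ε (ε ~ N(0, I)) keeps the sampling step differentiable with respect to μ and log σ².

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical encoder outputs for a 2-D latent space.
mu = np.array([0.5, -1.0])        # latent mean
log_var = np.array([0.0, 0.2])    # latent log-variance

# Reparameterization: z = mu + sigma * eps, eps ~ N(0, I).
eps = rng.standard_normal(2)
z = mu + np.exp(0.5 * log_var) * eps
```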
Generative Adversarial Network (GAN)
Discriminator and generator
Latent space
Calculation of loss for the discriminator and generator
Implementation and training with TensorFlow and Keras
Recurrent Neural Networks (RNN)
Structure of RNNs and LSTMs
Hidden state (hidden nodes)
Weight matrices and calculation of hidden state and output
Training and adjusting weights (matrix form)
Time sequences
Implementation using NumPy or TensorFlow
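The hidden-state and weight-matrix items above can be sketched as one NumPy forward pass over a time sequence (all sizes and random inputs are illustrative): hₜ = tanh(Wₓxₜ + W_h hₜ₋₁ + b_h) and yₜ = W_y hₜ + b_y.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out, T = 3, 4, 2, 5   # illustrative sizes

# Weight matrices (small random values) and biases.
W_x = rng.standard_normal((n_hidden, n_in)) * 0.1
W_h = rng.standard_normal((n_hidden, n_hidden)) * 0.1
b_h = np.zeros(n_hidden)
W_y = rng.standard_normal((n_out, n_hidden)) * 0.1
b_y = np.zeros(n_out)

h = np.zeros(n_hidden)                  # initial hidden state
outputs = []
for t in range(T):                      # walk through the time sequence
    x_t = rng.standard_normal(n_in)     # input at time t (toy data)
    h = np.tanh(W_x @ x_t + W_h @ h + b_h)   # hidden-state update
    outputs.append(W_y @ h + b_y)            # output at time t
outputs = np.stack(outputs)             # shape (T, n_out)
```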
Transformers
Multi-head attention
Feed-forward network structure
Queries, Keys, and Values
Positional Encoding
Decoder structure
Masked multi-head attention
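The attention items above can be sketched as a single head of scaled dot-product attention with a causal mask, as used in the decoder (sizes and random Q/K/V are illustrative): Attention(Q, K, V) = softmax(QKᵀ/√d_k)V.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stable
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
T, d_k, d_v = 4, 8, 8                    # illustrative sizes
Q = rng.standard_normal((T, d_k))        # Queries
K = rng.standard_normal((T, d_k))        # Keys
V = rng.standard_normal((T, d_v))        # Values

scores = Q @ K.T / np.sqrt(d_k)          # (T, T) scaled dot products
mask = np.triu(np.ones((T, T)), k=1)     # causal mask: no attending forward
scores = np.where(mask == 1, -1e9, scores)
weights = softmax(scores)                # each row sums to 1
out = weights @ V                        # attention output, (T, d_v)
```

Multi-head attention repeats this with several independent projections of Q, K, and V and concatenates the per-head outputs.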
Coding.
There will be questions on the exam asking you to write or complete a code section in NumPy, TensorFlow, or Keras. These questions may relate to parts and concepts covered in the assignments.