7. Covered Topics
Category | Topic | Key Concepts | Reference
Foundations | Single Neuron Model | Inputs, weights, bias | Ch. 3.1
 | | Net value: $\mathrm{net} = \mathbf{x} \cdot \mathbf{w} + b$ (sketch below) | Ch. 3.1
 | | Activation (transfer) functions | Ch. 3.1
 | | Linear vs. sigmoid activation | Ch. 3.1
Geometry | Geometric Interpretation | Net value as a hyperplane | Ch. 3.1.1
 | | Decision boundaries | Ch. 3.1.1
Regression | Linear Regression | Model formulation | Ch. 3.1
 | | Neural networks for regression | Ch. 3.1
Error Metrics | Error Calculation | Sample-wise error | Ch. 3.1
 | | Mean Squared Error (MSE): $\frac{1}{N}\sum (y-\hat{y})^2$ (sketch below) | Ch. 3.1
 | | Mean Absolute Error (MAE): $\frac{1}{N}\sum \lvert y-\hat{y}\rvert$ |
Training Concepts | Epoch | One full pass over the training data | Ch. 3.1
 | Numerical Derivatives | Finite / centered difference (sketch below) | Ch. 3.2
Multi-Layer Networks | Multi-layer Neurons | Layered architectures | Ch. 5.1
 | Weight Matrices | Matrix-based formulation | Ch. 5.1
 | Bias in Weight Matrix | Augmented input representation (sketch below) | Ch. 5.1
Computational Graphs | | Forward propagation, backward propagation, chain rule, local derivatives (sketch below) | Ch. 5.3
Loss Functions | | MSE, MAE, SVM, cross entropy |
PyTorch | PyTorch pipeline | Data transforms • Datasets • Dataloaders • Model definition • Loss • Optimizer • Scheduler • Training loop • Evaluation loop • Metrics • Plots • Saving/loading a checkpoint (sketch below) | https://colab.research.google.com/github/farhadkamangar/CSE5368/blob/master/PyTorch_MNIST_FullyConnected_Pipeline.ipynb#scrollTo=7e1c9a65
Probability Concepts | | Information, entropy, cross entropy, KL divergence (sketch below) |
Autoencoders | | Autoencoder, variational autoencoder, latent variables, reparameterization (sketch below) |
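
The sketches below illustrate several of the topics in the table. They are minimal, illustrative examples written for this summary, not code from the referenced chapters or the linked notebook; all variable names and numeric values are made up. The first sketch computes the net value $\mathrm{net} = \mathbf{x} \cdot \mathbf{w} + b$ of a single neuron and contrasts the linear output with a sigmoid activation.

```python
import numpy as np

def net_value(x, w, b):
    """Net value of a single neuron: net = x . w + b (Ch. 3.1)."""
    return np.dot(x, w) + b

def sigmoid(net):
    """Sigmoid (logistic) transfer function applied to the net value."""
    return 1.0 / (1.0 + np.exp(-net))

# Illustrative inputs, weights, and bias for a neuron with 3 inputs
x = np.array([0.5, -1.0, 2.0])
w = np.array([0.1, 0.4, -0.3])
b = 0.2
net = net_value(x, w, b)
print(net, sigmoid(net))   # linear vs. sigmoid output of the same neuron
```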
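
A sketch of the two error metrics from the Error Metrics rows, MSE and MAE, applied to illustrative target and prediction vectors.

```python
import numpy as np

def mse(y, y_hat):
    """Mean Squared Error: (1/N) * sum((y - y_hat)^2)."""
    return np.mean((y - y_hat) ** 2)

def mae(y, y_hat):
    """Mean Absolute Error: (1/N) * sum(|y - y_hat|)."""
    return np.mean(np.abs(y - y_hat))

y     = np.array([1.0, 2.0, 3.0])   # targets (illustrative)
y_hat = np.array([1.1, 1.8, 3.5])   # predictions (illustrative)
print(mse(y, y_hat), mae(y, y_hat))
```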
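
A sketch of the centered finite-difference approximation of a derivative (the numerical-derivative idea referenced for Ch. 3.2), with a forward difference shown for comparison; the test function and step size are illustrative.

```python
def centered_difference(f, x, h=1e-5):
    """Centered difference: f'(x) ~ (f(x + h) - f(x - h)) / (2h)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def forward_difference(f, x, h=1e-5):
    """One-sided (forward) difference, with larger truncation error."""
    return (f(x + h) - f(x)) / h

f = lambda x: x ** 2                  # exact derivative is 2x
print(centered_difference(f, 3.0))    # ~6.0
print(forward_difference(f, 3.0))     # ~6.0, slightly less accurate
```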
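
A sketch of the augmented-input representation from the "Bias in Weight Matrix" row: the bias vector is absorbed as an extra column of the weight matrix and the input is extended with a constant 1. Layer sizes and random values are illustrative.

```python
import numpy as np

# Layer with 3 inputs and 2 neurons: standard form uses W (2x3) and b (2,)
rng = np.random.default_rng(0)
W = rng.standard_normal((2, 3))
b = rng.standard_normal(2)
x = rng.standard_normal(3)

net_standard = W @ x + b

# Augmented form: append 1 to the input, absorb b as an extra column of W
W_aug = np.hstack([W, b[:, None]])   # shape (2, 4)
x_aug = np.append(x, 1.0)            # shape (4,)
net_augmented = W_aug @ x_aug

print(np.allclose(net_standard, net_augmented))   # True
```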
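
A worked example for the Computational Graphs row: a forward pass through a tiny graph (one squared-error term) followed by a backward pass that combines local derivatives with the chain rule. The graph and values are made up for illustration.

```python
# Tiny computational graph: L = (w*x + b - y)^2
x, y = 2.0, 1.0
w, b = 0.5, 0.1

# Forward propagation: store intermediate node values
net  = w * x + b          # net value node
diff = net - y            # error node
L    = diff ** 2          # squared-error loss node

# Backward propagation: multiply local derivatives along each path (chain rule)
dL_ddiff   = 2.0 * diff   # local derivative of L w.r.t. diff
ddiff_dnet = 1.0          # local derivative of diff w.r.t. net
dnet_dw, dnet_db = x, 1.0 # local derivatives of net w.r.t. w and b
dL_dw = dL_ddiff * ddiff_dnet * dnet_dw
dL_db = dL_ddiff * ddiff_dnet * dnet_db
print(L, dL_dw, dL_db)    # 0.01, 0.4, 0.2
```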
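
A compressed sketch of the PyTorch pipeline stages listed in the table (dataset, dataloaders, model definition, loss, optimizer, scheduler, training loop, evaluation loop, a metric, checkpoint save/load). It is not the code from the linked Colab notebook: it uses a small synthetic tensor dataset instead of MNIST and omits transforms and plots for brevity.

```python
import torch
from torch import nn
from torch.utils.data import TensorDataset, DataLoader

# Synthetic stand-in for a real dataset; shapes mimic flattened 28x28 images
X = torch.randn(512, 784)
y = torch.randint(0, 10, (512,))
train_dl = DataLoader(TensorDataset(X[:400], y[:400]), batch_size=64, shuffle=True)
val_dl   = DataLoader(TensorDataset(X[400:], y[400:]), batch_size=64)

# Model definition: small fully connected network
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))

# Loss, optimizer, scheduler
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.9)

for epoch in range(3):                      # training loop
    model.train()
    for xb, yb in train_dl:
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)
        loss.backward()
        optimizer.step()
    scheduler.step()

    model.eval()                            # evaluation loop + accuracy metric
    correct = total = 0
    with torch.no_grad():
        for xb, yb in val_dl:
            pred = model(xb).argmax(dim=1)
            correct += (pred == yb).sum().item()
            total += yb.numel()
    print(f"epoch {epoch}: val accuracy {correct / total:.3f}")

# Saving/loading a checkpoint (illustrative file name)
torch.save({"model": model.state_dict(), "optim": optimizer.state_dict()}, "ckpt.pt")
state = torch.load("ckpt.pt")
model.load_state_dict(state["model"])
```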
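
A sketch of entropy, cross entropy, and KL divergence for discrete distributions, showing the relation KL(p || q) = H(p, q) - H(p); the example distributions are arbitrary.

```python
import numpy as np

def entropy(p):
    """H(p) = -sum_i p_i * log p_i (average self-information under p)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                          # 0 * log 0 is taken as 0
    return -np.sum(p * np.log(p))

def cross_entropy(p, q):
    """H(p, q) = -sum_i p_i * log q_i."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return -np.sum(p * np.log(q))

def kl_divergence(p, q):
    """KL(p || q) = H(p, q) - H(p), always >= 0."""
    return cross_entropy(p, q) - entropy(p)

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.5, 0.3, 0.2])
print(entropy(p), cross_entropy(p, q), kl_divergence(p, q))
```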
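
A sketch of the reparameterization trick used in variational autoencoders: sampling z = mu + sigma * eps with eps ~ N(0, I) keeps the latent sample differentiable with respect to the encoder outputs mu and log-variance. Shapes and values are illustrative stand-ins for real encoder outputs.

```python
import torch

def reparameterize(mu, log_var):
    """z = mu + sigma * eps, eps ~ N(0, I); randomness is isolated in eps
    so gradients can flow through mu and log_var."""
    std = torch.exp(0.5 * log_var)
    eps = torch.randn_like(std)
    return mu + std * eps

# Illustrative encoder outputs for a batch of 4 samples, 2-D latent space
mu      = torch.zeros(4, 2, requires_grad=True)
log_var = torch.zeros(4, 2, requires_grad=True)
z = reparameterize(mu, log_var)   # differentiable latent sample
z.sum().backward()                # gradients reach mu and log_var
print(mu.grad.shape, log_var.grad.shape)
```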