This repository includes Jupyter Notebooks that demonstrate the basic math and mechanics of deep neural networks.
This Notebook includes a bare-bones NumPy implementation of a neural network; a minimal sketch of one training step follows the list below. It includes:
- Forward propagation
- Cost computation
- Backpropagation
- Parameter update
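A minimal sketch of those four steps, assuming a single hidden layer with tanh, a sigmoid output, and binary cross-entropy loss; shapes, sizes, and variable names are illustrative, not the Notebook's own:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: 4 samples with 3 features each, binary targets
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))          # (features, samples)
Y = rng.integers(0, 2, size=(1, 4))  # (1, samples)

# One hidden layer of 5 units, small random initialization
W1, b1 = rng.normal(size=(5, 3)) * 0.01, np.zeros((5, 1))
W2, b2 = rng.normal(size=(1, 5)) * 0.01, np.zeros((1, 1))

lr = 0.1
m = X.shape[1]
for _ in range(1000):
    # Forward propagation
    Z1 = W1 @ X + b1
    A1 = np.tanh(Z1)
    Z2 = W2 @ A1 + b2
    A2 = sigmoid(Z2)

    # Cost computation: binary cross-entropy averaged over samples
    cost = -np.mean(Y * np.log(A2) + (1 - Y) * np.log(1 - A2))

    # Backpropagation
    dZ2 = A2 - Y
    dW2 = (dZ2 @ A1.T) / m
    db2 = dZ2.mean(axis=1, keepdims=True)
    dZ1 = (W2.T @ dZ2) * (1 - A1 ** 2)   # tanh'(z) = 1 - tanh(z)^2
    dW1 = (dZ1 @ X.T) / m
    db1 = dZ1.mean(axis=1, keepdims=True)

    # Parameter update (plain gradient descent)
    W1 -= lr * dW1
    b1 -= lr * db1
    W2 -= lr * dW2
    b2 -= lr * db2
```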
This Notebook illustrates the math behind different activation functions and their respective derivatives (a short sketch follows the list). It includes:
- Sigmoid
- Tanh
- Hard tanh
- ReLU
- Leaky ReLU
- ELU
- GELU
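A short sketch of three of these pairs, with each derivative written directly from the function's definition; purely illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    # sigma'(z) = sigma(z) * (1 - sigma(z))
    s = sigmoid(z)
    return s * (1 - s)

def relu(z):
    return np.maximum(0, z)

def relu_prime(z):
    # 0 for z < 0, 1 for z > 0 (the kink at z = 0 is conventionally set to 0)
    return (z > 0).astype(float)

def leaky_relu(z, alpha=0.01):
    return np.where(z > 0, z, alpha * z)

def leaky_relu_prime(z, alpha=0.01):
    return np.where(z > 0, 1.0, alpha)

z = np.linspace(-3, 3, 7)
print(sigmoid(z), sigmoid_prime(z))
```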
This Notebook illustrates how the cumulative weighted average of data can help smooth the shape of a function, using a smoothing parameter beta. This notion serves as an introduction to a wide range of optimization methods that rely on this technique extensively.
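A minimal sketch of this idea, assuming the usual exponentially weighted recurrence v_t = beta * v_{t-1} + (1 - beta) * x_t with bias correction; names are illustrative:

```python
import numpy as np

def ewma(x, beta=0.9, bias_correction=True):
    """Exponentially weighted moving average of a 1-D signal.

    Larger beta averages over more past points (roughly 1 / (1 - beta)),
    producing a smoother but more lagged curve.
    """
    v = 0.0
    out = np.empty_like(x, dtype=float)
    for t, xt in enumerate(x, start=1):
        v = beta * v + (1 - beta) * xt                     # core recurrence
        out[t - 1] = v / (1 - beta ** t) if bias_correction else v
    return out

# Noisy sine wave: the smoothed curve tracks the underlying shape
t = np.linspace(0, 4 * np.pi, 200)
noisy = np.sin(t) + np.random.default_rng(0).normal(scale=0.3, size=t.shape)
smooth = ewma(noisy, beta=0.9)
```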
This Notebook illustrates different optimization methods applied to Neural Networks (a representative sketch follows the list):
- Momentum
- Adagrad
- Adadelta
- RMSProp
- Adam
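As a representative sketch, minimal momentum and Adam update rules for a single parameter array; the other methods swap in their own accumulators (squared gradients for Adagrad/RMSProp, per-parameter deltas for Adadelta). Function names and the toy objective are illustrative:

```python
import numpy as np

def momentum_step(w, grad, v, lr=0.01, beta=0.9):
    """One momentum update: the velocity is an exponentially weighted
    average of past gradients, which damps oscillations."""
    v = beta * v + (1 - beta) * grad
    return w - lr * v, v

def adam_step(w, grad, m, s, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: first moment m (momentum-style) and second moment s
    (RMSProp-style squared gradients), each bias-corrected by step t."""
    m = beta1 * m + (1 - beta1) * grad
    s = beta2 * s + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)
    s_hat = s / (1 - beta2 ** t)
    return w - lr * m_hat / (np.sqrt(s_hat) + eps), m, s

# Toy use: minimize f(w) = ||w||^2, whose gradient is 2w
w = np.array([3.0, -2.0])
m = s = np.zeros_like(w)
for t in range(1, 201):
    grad = 2 * w
    w, m, s = adam_step(w, grad, m, s, t)
print(w)  # close to the minimum at the origin
```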