Neural Networks Notebooks

This repository includes Jupyter Notebooks that demonstrate the basic math and mechanisms of deep neural networks.

Neural Networks in 5 minutes

This Notebook contains a bare-bones NumPy implementation of a neural network. It covers:

  • Forward propagation
  • Cost computation
  • Backpropagation
  • Parameter update
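
The four steps above can be sketched in a few lines of NumPy. This is a minimal illustration, not the notebook's exact code: a 2-4-1 network with tanh and sigmoid activations, binary cross-entropy cost, and plain gradient descent on a toy XOR-style dataset (all choices here are assumptions for the sketch).

```python
import numpy as np

# Toy data: 4 samples, 2 features; XOR-like targets.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # (4, 2)
Y = np.array([[0], [1], [1], [0]], dtype=float)              # (4, 1)

# Parameter initialization for a 2-4-1 network.
W1 = rng.normal(scale=0.5, size=(2, 4)); b1 = np.zeros((1, 4))
W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward propagation
    Z1 = X @ W1 + b1
    A1 = np.tanh(Z1)
    Z2 = A1 @ W2 + b2
    A2 = sigmoid(Z2)

    # Cost computation: binary cross-entropy
    cost = -np.mean(Y * np.log(A2) + (1 - Y) * np.log(1 - A2))

    # Backpropagation
    dZ2 = A2 - Y                         # (4, 1)
    dW2 = A1.T @ dZ2 / len(X)
    db2 = dZ2.mean(axis=0, keepdims=True)
    dZ1 = (dZ2 @ W2.T) * (1 - A1 ** 2)   # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ dZ1 / len(X)
    db1 = dZ1.mean(axis=0, keepdims=True)

    # Parameter update (plain gradient descent)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(round(float(cost), 4))
```

Running the loop drives the cross-entropy cost well below its chance level of ln 2 ≈ 0.693, showing all four steps working together.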

Activation function and derivatives

This Notebook illustrates the math behind different activation functions and their respective derivatives. It includes:

  • Sigmoid
  • Tanh
  • Hard tanh
  • ReLU
  • Leaky ReLU
  • ELU
  • GELU
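
A few of these pairs can be written directly in NumPy. The following is a rough sketch (the alpha defaults are common conventions, assumed here, not taken from the notebook), with a finite-difference check that a derivative matches its closed form:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def d_sigmoid(z):
    s = sigmoid(z)
    return s * (1 - s)          # sigmoid'(z) = sigmoid(z) * (1 - sigmoid(z))

def d_tanh(z):
    return 1 - np.tanh(z) ** 2  # tanh'(z) = 1 - tanh(z)^2

def relu(z):
    return np.maximum(0.0, z)

def d_relu(z):
    return (z > 0).astype(float)

def leaky_relu(z, alpha=0.01):
    return np.where(z > 0, z, alpha * z)

def elu(z, alpha=1.0):
    return np.where(z > 0, z, alpha * (np.exp(z) - 1))

# Sanity check: the analytic derivative of sigmoid agrees with a
# central finite-difference estimate.
z = np.linspace(-3, 3, 7)
eps = 1e-6
fd = (sigmoid(z + eps) - sigmoid(z - eps)) / (2 * eps)
print(np.allclose(fd, d_sigmoid(z), atol=1e-6))
```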

Exponentially decaying averages

This Notebook illustrates how an exponentially weighted moving average of data, controlled by a smoothing parameter beta, can smooth the shape of a noisy function. This notion serves as an introduction to a wide range of optimization methods that rely on the technique.
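
The update rule is v_t = beta * v_{t-1} + (1 - beta) * x_t, where a larger beta averages over roughly 1/(1 - beta) past samples and so gives a smoother but more lagged curve. A minimal sketch (the noisy-sine example and the bias-correction choice are assumptions, not the notebook's exact setup):

```python
import numpy as np

def ewa(x, beta=0.9):
    """Exponentially weighted moving average with bias correction."""
    v = np.zeros_like(x, dtype=float)
    avg = 0.0
    for t, xt in enumerate(x):
        avg = beta * avg + (1 - beta) * xt
        v[t] = avg / (1 - beta ** (t + 1))  # correct the zero-initialized bias
    return v

# A noisy sine wave smoothed with beta = 0.9.
rng = np.random.default_rng(1)
x = np.sin(np.linspace(0, 4 * np.pi, 200)) + rng.normal(scale=0.3, size=200)
smooth = ewa(x, beta=0.9)

# The smoothed curve changes less from step to step than the raw signal.
print(np.std(np.diff(smooth)) < np.std(np.diff(x)))
```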

Optimization methods

This Notebook illustrates different optimization methods applied to neural networks:

  • Momentum
  • Adagrad
  • Adadelta
  • RMSProp
  • Adam
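
Three of these update rules can be compared on a one-parameter toy problem. The sketch below (hyperparameter defaults are common conventions, assumed here, and the quadratic objective is chosen only for illustration) minimizes f(w) = w², whose gradient is 2w:

```python
import numpy as np

def optimize(update, steps=100, w0=5.0):
    """Run an update rule for a number of steps on f(w) = w^2."""
    w, state = w0, {}
    for t in range(1, steps + 1):
        g = 2 * w  # gradient of w^2
        w = update(w, g, state, t)
    return w

def momentum(w, g, s, t, lr=0.1, beta=0.9):
    # Exponentially weighted average of past gradients.
    s["v"] = beta * s.get("v", 0.0) + (1 - beta) * g
    return w - lr * s["v"]

def rmsprop(w, g, s, t, lr=0.1, beta=0.9, eps=1e-8):
    # Divide by a running RMS of the gradient to adapt the step size.
    s["r"] = beta * s.get("r", 0.0) + (1 - beta) * g * g
    return w - lr * g / (np.sqrt(s["r"]) + eps)

def adam(w, g, s, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    # Combine momentum (first moment) and RMSProp (second moment),
    # with bias correction for the zero-initialized averages.
    s["m"] = b1 * s.get("m", 0.0) + (1 - b1) * g
    s["v"] = b2 * s.get("v", 0.0) + (1 - b2) * g * g
    m_hat = s["m"] / (1 - b1 ** t)
    v_hat = s["v"] / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps)

for name, rule in [("momentum", momentum), ("rmsprop", rmsprop), ("adam", adam)]:
    print(name, round(optimize(rule), 6))
```

All three drive w from 5.0 toward the minimum at 0; the adaptive methods take near-constant step sizes regardless of gradient magnitude, while momentum follows a smoothed gradient.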
