This repository is a learning-focused implementation of neural networks from scratch, written in Python with NumPy/SciPy, without using deep learning frameworks such as PyTorch or TensorFlow.
The goal is to understand:
- How neural networks work at a mathematical and algorithmic level
- How these foundations extend naturally to Scientific Machine Learning (SciML) and physics-informed methods
Modern deep learning frameworks hide many important details behind abstractions. This project intentionally avoids those abstractions to:
- implement forward and backward passes manually
- derive and code gradients explicitly
- gain intuition about optimization and stability
This makes the transition to Scientific Machine Learning clearer and more principled.
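As an illustration of what "manual forward and backward passes" means in practice, here is a minimal sketch in the spirit of this repository: a one-hidden-layer network fit to a toy regression target with hand-derived gradients and plain gradient descent. All names, shapes, and hyperparameters are illustrative, not taken from the actual code.

```python
import numpy as np

# Toy data: learn y = sin(x) on a handful of points (illustrative only).
rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 64).reshape(-1, 1)
y = np.sin(x)

# One hidden layer with tanh activation; sizes chosen for illustration.
W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

lr = 0.1
for step in range(5000):
    # Forward pass, written out layer by layer.
    z1 = x @ W1 + b1          # pre-activation
    a1 = np.tanh(z1)          # hidden activation
    yhat = a1 @ W2 + b2       # linear output
    loss = np.mean((yhat - y) ** 2)

    # Backward pass: the chain rule applied by hand.
    dyhat = 2 * (yhat - y) / len(x)
    dW2 = a1.T @ dyhat
    db2 = dyhat.sum(axis=0)
    da1 = dyhat @ W2.T
    dz1 = da1 * (1 - a1 ** 2)  # tanh'(z) = 1 - tanh(z)^2
    dW1 = x.T @ dz1
    db1 = dz1.sum(axis=0)

    # Plain gradient descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final MSE: {loss:.4f}")
```

Every quantity a framework would compute for you (activations, gradients, updates) appears explicitly, which is the point of avoiding the abstractions.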
The repository progresses in stages:
- Neural Networks from Scratch
  - Linear regression
  - Logistic regression
  - Simple MLPs
  - Classification and regression examples
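The first stage starts from models whose gradients fit on one line. A sketch of linear regression trained from scratch, with the mean-squared-error gradients derived by hand (data and hyperparameters are made up for illustration):

```python
import numpy as np

# Synthetic data: y = 3x - 1 plus noise (values chosen arbitrarily).
rng = np.random.default_rng(42)
x = rng.uniform(-1, 1, size=(100, 1))
y = 3.0 * x - 1.0 + rng.normal(0, 0.1, size=(100, 1))

w, b = 0.0, 0.0
lr = 0.5
for _ in range(500):
    err = w * x + b - y
    # Hand-derived gradients of MSE = mean((wx + b - y)^2):
    #   dL/dw = 2 * mean(err * x),  dL/db = 2 * mean(err)
    w -= lr * 2 * np.mean(err * x)
    b -= lr * 2 * np.mean(err)

print(w, b)  # should approach the true slope 3 and intercept -1
```

Logistic regression follows the same pattern with a sigmoid output and cross-entropy loss, and the MLP stage stacks such layers.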
- Core Training Mechanics
  - Backpropagation
  - Gradient descent and variants
  - Regularization
  - Numerical gradient checking
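Numerical gradient checking deserves a concrete example, since it is the standard way to verify hand-derived backpropagation: compare the analytic gradient against central finite differences and check the relative error. A minimal sketch on a linear least-squares loss (the model and data here are illustrative):

```python
import numpy as np

# Tiny linear model; compare analytic gradient to central differences.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
y = rng.normal(size=(20,))

def loss(w):
    return 0.5 * np.mean((X @ w - y) ** 2)

def grad_analytic(w):
    # d/dw [0.5/n * sum((Xw - y)^2)] = X^T (Xw - y) / n
    return X.T @ (X @ w - y) / len(y)

w = rng.normal(size=3)
eps = 1e-6
grad_numeric = np.array([
    (loss(w + eps * e) - loss(w - eps * e)) / (2 * eps)
    for e in np.eye(3)  # perturb one coordinate at a time
])

rel_err = (np.linalg.norm(grad_numeric - grad_analytic(w))
           / np.linalg.norm(grad_analytic(w)))
print(f"relative error: {rel_err:.2e}")  # should be tiny if the math is right
```

A relative error far above roughly 1e-6 usually signals a bug in the hand-derived gradient rather than floating-point noise.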
- Scientific Machine Learning
  - Physics-informed neural networks (PINNs)
  - Wave propagation problems
  - Solving PDEs with neural networks
  - Inverse problems (parameter estimation)
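To make the PINN idea concrete in a frameworks-free setting, here is a heavily simplified sketch: a tiny network approximates the solution of the ODE u'(x) = -u(x) with u(0) = 1 (exact solution exp(-x)), the derivative u' is written out by hand via the chain rule, and the physics residual plus boundary mismatch form the loss. Training here uses finite-difference gradients purely to keep the example short; everything (architecture, collocation points, learning rate) is illustrative, not from this repo.

```python
import numpy as np

# PINN sketch for u'(x) = -u(x), u(0) = 1; exact solution is exp(-x).
rng = np.random.default_rng(0)
xs = np.linspace(0.0, 1.0, 32)   # collocation points

n_hidden = 8
# Flattened parameters: W1, b1, W2 (each n_hidden long) and scalar b2.
params = rng.normal(0, 0.5, size=3 * n_hidden + 1)

def unpack(p):
    return (p[:n_hidden], p[n_hidden:2*n_hidden],
            p[2*n_hidden:3*n_hidden], p[-1])

def u_and_du(p, x):
    W1, b1, W2, b2 = unpack(p)
    h = np.tanh(np.outer(x, W1) + b1)   # hidden activations
    u = h @ W2 + b2
    du = ((1 - h**2) * W1) @ W2         # chain rule: d/dx tanh(W1*x + b1)
    return u, du

def loss(p):
    u, du = u_and_du(p, xs)
    residual = du + u                    # physics residual of u' = -u
    u0, _ = u_and_du(p, np.array([0.0]))
    return np.mean(residual**2) + (u0[0] - 1.0)**2

# Forward-difference gradient descent (a real implementation would backprop).
lr, eps = 0.05, 1e-6
for _ in range(3000):
    base = loss(params)
    g = np.zeros_like(params)
    for i in range(len(params)):
        params[i] += eps
        g[i] = (loss(params) - base) / eps
        params[i] -= eps
    params -= lr * g

u, _ = u_and_du(params, xs)
err = np.max(np.abs(u - np.exp(-xs)))
print(f"max error vs exp(-x): {err:.3f}")
```

The same structure (network as ansatz, differential operator applied to it, residual-based loss) carries over to wave equations and inverse problems, where unknown physical parameters are simply appended to the trainable parameter vector.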
All examples are kept as simple as possible to highlight the underlying ideas.

Guiding principles:
- No PyTorch / TensorFlow / JAX
- Minimal dependencies (NumPy, SciPy)
- Explicit math over convenience
- Readable code over performance
- Educational value first
This repository is for:
- students learning neural networks from first principles
- researchers interested in Scientific Machine Learning
- anyone who wants to understand what deep learning frameworks actually do
Basic knowledge of calculus, linear algebra, and Python is assumed.
Repository layout:

.
├── data/ # toy datasets and preprocessing
├── nn/ # neural network components
├── pinns/ # physics-informed models
├── examples/ # runnable scripts
└── README.md