BIT-AETAS/DPNet

Folders and files

NameName
Last commit message
Last commit date

Latest commit

 

History

4 Commits
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 

Repository files navigation

DPNet

Official implementation of the paper:

"How to Form Brain-like Memory in Spiking Neural Networks with the Help of Frequency-Induced Mechanism"

DPNet is a spiking neural network (SNN) that uses a frequency-induced plasticity mechanism to form brain-like memory structures. The network grows sparse synaptic connections through Hebbian co-activation.


Requirements

Python dependencies

pip install -r requirements.txt
Package        Version
numpy          ≥ 1.21
matplotlib     ≥ 3.4
progressbar2   ≥ 4.0
mpi4py         ≥ 3.0
torch          ≥ 1.10

NEST Simulator (required, manual installation)

DPNet requires NEST 2.20.1, which cannot be installed via pip; you must install it manually (for example, by building from source or installing via conda-forge; see the official NEST documentation) before running any training script.

Note: NEST 3.x introduced breaking API changes. DPNet has only been tested with NEST 2.20.1 and is not compatible with NEST 3.x.

After installation, verify NEST is accessible:

python -c "import nest; print(nest.__version__)"
# Expected: 2.20.1

Project Structure

DPNet/
├── train_stage1.py          # Stage 1: unsupervised structural growth
├── train_stage2.py          # Stage 2: supervised learning
├── nest_interface/
│   └── interface_base.py    # Thin NEST wrapper
├── train/
│   ├── my_network.py        # Network assembly (input/memory/output layers)
│   └── my_train.py          # Train class: growth, learning, inference
├── network/
│   ├── layer1/              # Input layer (spike generators → iaf_psc_alpha)
│   ├── layer2/              # Memory layer (3-D Hebbian SNN grid)
│   └── layer3/              # Output layer (classification neurons + teacher)
├── converter/
│   └── image2spike.py       # Image pixel → spike time encoding
├── data/
│   └── data_import/         # MNIST and CIFAR dataset loaders (auto-download)
└── config/                  # Default configuration files

Usage

All scripts must be run from the repository root.

Stage 1 — Unsupervised Structural Growth

Grows sparse Hebbian synapses in the memory layer using FIP/FID plasticity. Saves the resulting model.

# Train on MNIST (default) and save
python train_stage1.py -s --output-dir ./output/mnist

# Train on CIFAR-10
python train_stage1.py -s --dataset cifar --output-dir ./output/cifar10

# Resume from a previously saved model
python train_stage1.py -l -p ./output/mnist/model.net -s --output-dir ./output/mnist

Stage 2 — Supervised Learning + Classifier

Loads a model from Stage 1, runs STDP-based teacher-signal learning (Phase 1), then extracts frozen memory representations and trains a classifier on top (Phase 2).

# Run both phases on MNIST
python train_stage2.py -p ./output/mnist/model.net

# Save the model after Phase 1
python train_stage2.py -p ./output/mnist/model.net -s --output-dir ./output/mnist

Key Hyperparameters

Stage 1 (train_stage1.py)

Flag                  Default    Description
--dataset             mnist      Dataset: mnist or cifar
--memory-size X Y Z   100 10 2   Memory grid shape
--train-samples N     300        Samples per class
--grow-iters N        auto       Growth epochs (3 for MNIST, 1 for CIFAR)
--fip-threshold       9          Min activations to classify as FIP
--fid-threshold       3          Max activations to classify as FID
--fip-weight          800.0      Synaptic weight for FIP connections
--hebb-time-th        5.0        Max spike-time gap for Hebbian connection (ms)
--duration            500.0      Simulation duration per sample (ms)

Stage 2 (train_stage2.py)

Flag            Default    Description
-p / --path     required   Path to a Stage 1 model file
--learn-iters   5          Teacher-signal STDP epochs
--epochs        500        Classifier training epochs
--lr            1e-4       Adam learning rate
--batch-size    100        Mini-batch size
--hidden-size   1024       Hidden layer width in classifier

Full option reference: python train_stage1.py --help / python train_stage2.py --help


How It Works

Architecture

Data flows through three layers:

  1. Input Layer — pixel values are encoded to spike times using a power-law function (configurable: power, linear, exponent, inverse).
  2. Memory Layer — a 3-D grid of leaky integrate-and-fire neurons. Synapses between co-active neurons grow through Hebbian learning; FIP neurons (high activation) are strengthened, FID neurons (low activation) are weakened and probabilistically pruned.
  3. Output Layer — one neuron per class with a long refractory period (300 ms), driven by teacher spike generators during supervised training.
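The pixel-to-spike-time step (item 1 above) can be sketched as follows. This is a hypothetical power-law encoder: it assumes brighter pixels fire earlier and uses the 500 ms window from the --duration default. The actual formula lives in converter/image2spike.py and may differ.

```python
# Hypothetical power-law pixel-to-spike-time encoder; the real implementation
# is in converter/image2spike.py and may use a different mapping.

def pixel_to_spike_time(pixel, t_max=500.0, power=2.0):
    """Map a pixel intensity in [0, 255] to a spike time in [0, t_max].

    Assumption: brighter pixels spike earlier; zero-intensity pixels are silent.
    """
    if pixel <= 0:
        return None  # silent input neuron: no spike is generated
    p = pixel / 255.0
    return t_max * (1.0 - p) ** power

# A bright pixel spikes near t = 0, a mid-gray pixel later in the window.
times = [pixel_to_spike_time(v) for v in (255, 128, 0)]
```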

Training Phases

Stage 1 — Structural Growth (train_stage1.py):
Hebbian co-activation builds a sparse, task-relevant connectivity in the memory layer. Every 10 samples, FIP/FID classification updates synaptic weights and prunes weak connections.
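The periodic FIP/FID update can be sketched roughly as below. The thresholds (9 and 3) and the FIP weight (800.0) are the documented defaults; the function names, the pruning probability, and the exact update rule are assumptions for illustration.

```python
import random

# Hypothetical sketch of the FIP/FID update run every 10 samples.
# Thresholds and the FIP weight are the README defaults; the pruning
# probability and rule shape are assumptions.

FIP_THRESHOLD = 9    # min activations in the 10-sample window to count as FIP
FID_THRESHOLD = 3    # max activations in the 10-sample window to count as FID
FIP_WEIGHT = 800.0   # weight assigned to synapses of FIP neurons

def classify_neuron(activation_count):
    """Label a neuron by how often it fired over the last 10 samples."""
    if activation_count >= FIP_THRESHOLD:
        return "FIP"   # frequency-induced potentiation: strengthen
    if activation_count <= FID_THRESHOLD:
        return "FID"   # frequency-induced depression: weaken / maybe prune
    return "neutral"

def update_synapse(weight, label, prune_prob=0.5, rng=random.random):
    """Return the new synaptic weight, or None if the synapse is pruned."""
    if label == "FIP":
        return FIP_WEIGHT
    if label == "FID" and rng() < prune_prob:
        return None  # probabilistically prune weak connections
    return weight
```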

Stage 2 — Weight Training (train_stage2.py):
Memory-to-output STDP is enabled with teacher signals. The teacher timing adapts each iteration based on the actual output spike times, until convergence or the iteration limit is reached. Finally, a three-layer fully connected network is trained on the frozen memory representations with Adam and cross-entropy loss.
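The adaptive teacher timing can be sketched as follows. The update rule below is an assumption (the real rule lives in train/my_train.py); it only illustrates the idea of nudging the teacher spike based on where the output neuron actually fired.

```python
# Hypothetical sketch of the teacher-timing update; the actual rule in
# train/my_train.py may differ. Assumption: the teacher spike time is nudged
# each iteration so the target neuron fires at a desired time.

def adapt_teacher_time(teacher_t, output_t, target_t, step=1.0):
    """Shift the teacher spike time (ms) toward making the output fire at target_t."""
    if output_t is None:
        return teacher_t - step   # output stayed silent: teach earlier
    error = output_t - target_t
    if error > 0:                 # output fired too late: teach earlier
        return teacher_t - step
    if error < 0:                 # output fired too early: teach later
        return teacher_t + step
    return teacher_t              # converged: keep the current timing
```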


Model Files

Saved models are plain text files, one synapse per line:

source_id:target_id:weight

Load with -l -p path/to/model.net in train_stage1.py, or -p path/to/model.net in train_stage2.py.
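Given the one-synapse-per-line format above, a loader can be sketched in a few lines. The function name is hypothetical (not from the repo); it only demonstrates parsing the documented source_id:target_id:weight records.

```python
# Minimal sketch of reading the saved model format: one synapse per line,
# "source_id:target_id:weight". Function name is hypothetical.

def load_synapses(path):
    """Parse a model file into a list of (source, target, weight) tuples."""
    synapses = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue  # skip blank lines
            src, tgt, w = line.split(":")
            synapses.append((int(src), int(tgt), float(w)))
    return synapses
```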


Citation

If you use this code in your research, please cite:

@article{lei2025form,
  title={How to form brain-like memory in spiking neural networks with the help of frequency-induced mechanism},
  author={Lei, Yunlin and Li, Huiqi and Li, Mingrui and Chen, Yaoyu and Zhang, Yu and Jin, Zihui and Yang, Xu},
  journal={Neurocomputing},
  volume={623},
  pages={129361},
  year={2025},
  publisher={Elsevier}
}
