# DPNet

Official implementation of the paper:

> "How to Form Brain-like Memory in Spiking Neural Networks with the Help of Frequency-Induced Mechanism"

DPNet is a spiking neural network (SNN) that uses a frequency-induced plasticity mechanism to form brain-like memory structures. The network grows sparse synaptic connections through Hebbian co-activation.
## Installation

```bash
pip install -r requirements.txt
```

| Package | Version |
|---|---|
| numpy | ≥ 1.21 |
| matplotlib | ≥ 3.4 |
| progressbar2 | ≥ 4.0 |
| mpi4py | ≥ 3.0 |
| torch | ≥ 1.10 |
### NEST simulator

DPNet requires NEST 2.20.1, which cannot be installed via pip. You must install it manually before running any training script.
Installation options:
- Recommended: follow the official build guide at https://nest-simulator.readthedocs.io/en/v2.20.1/installation/linux_install.html
Note: NEST 3.x introduced breaking API changes. DPNet has only been tested with NEST 2.20.1 and is not compatible with NEST 3.x.
After installation, verify NEST is accessible:
```bash
python -c "import nest; print(nest.__version__)"
# Expected: 2.20.1
```

## Repository structure

```
DPNet/
├── train_stage1.py        # Stage 1: unsupervised structural growth
├── train_stage2.py        # Stage 2: supervised learning
├── nest_interface/
│   └── interface_base.py  # Thin NEST wrapper
├── train/
│   ├── my_network.py      # Network assembly (input/memory/output layers)
│   └── my_train.py        # Train class: growth, learning, inference
├── network/
│   ├── layer1/            # Input layer (spike generators → iaf_psc_alpha)
│   ├── layer2/            # Memory layer (3-D Hebbian SNN grid)
│   └── layer3/            # Output layer (classification neurons + teacher)
├── converter/
│   └── image2spike.py     # Image pixel → spike time encoding
├── data/
│   └── data_import/       # MNIST and CIFAR dataset loaders (auto-download)
└── config/                # Default configuration files
```
All scripts must be run from the repository root.
## Stage 1: structural growth

Grows sparse Hebbian synapses in the memory layer using FIP/FID plasticity and saves the resulting model.
```bash
# Train on MNIST (default) and save
python train_stage1.py -s --output-dir ./output/mnist

# Train on CIFAR-10
python train_stage1.py -s --dataset cifar --output-dir ./output/cifar10

# Resume from a previously saved model
python train_stage1.py -l -p ./output/mnist/model.net -s --output-dir ./output/mnist
```

## Stage 2: supervised learning

Loads a model from Stage 1, runs STDP-based teacher-signal learning (Phase 1), then extracts frozen memory representations and trains a classifier on top (Phase 2).
```bash
# Run both phases on MNIST
python train_stage2.py -p ./output/mnist/model.net

# Save the model after Phase 1
python train_stage2.py -p ./output/mnist/model.net -s --output-dir ./output/mnist
```

### `train_stage1.py` flags

| Flag | Default | Description |
|---|---|---|
| `--dataset` | `mnist` | Dataset: `mnist` or `cifar` |
| `--memory-size X Y Z` | `100 10 2` | Memory grid shape |
| `--train-samples N` | `300` | Samples per class |
| `--grow-iters N` | auto | Growth epochs (3 for MNIST, 1 for CIFAR) |
| `--fip-threshold` | `9` | Minimum activations to classify a neuron as FIP |
| `--fid-threshold` | `3` | Maximum activations to classify a neuron as FID |
| `--fip-weight` | `800.0` | Synaptic weight for FIP connections |
| `--hebb-time-th` | `5.0` | Maximum spike-time gap for a Hebbian connection (ms) |
| `--duration` | `500.0` | Simulation duration per sample (ms) |
### `train_stage2.py` flags

| Flag | Default | Description |
|---|---|---|
| `-p` / `--path` | required | Path to a Stage 1 model file |
| `--learn-iters` | `5` | Teacher-signal STDP epochs |
| `--epochs` | `500` | Classifier training epochs |
| `--lr` | `1e-4` | Adam learning rate |
| `--batch-size` | `100` | Mini-batch size |
| `--hidden-size` | `1024` | Hidden layer width in the classifier |
Full option reference: `python train_stage1.py --help` / `python train_stage2.py --help`
## Architecture

Data flows through three layers:

- **Input Layer** — pixel values are encoded to spike times using a power-law function (configurable: `power`, `linear`, `exponent`, `inverse`).
- **Memory Layer** — a 3-D grid of leaky integrate-and-fire neurons. Synapses between co-active neurons grow through Hebbian learning; FIP neurons (high activation) are strengthened, FID neurons (low activation) are weakened and probabilistically pruned.
- **Output Layer** — one neuron per class with a long refractory period (300 ms), driven by teacher spike generators during supervised training.
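As a rough illustration of the power-law encoding option, a pixel intensity can be mapped to a spike time so that brighter pixels fire earlier. The function name, constants, and the no-spike convention below are assumptions for the sketch; the repository's actual encoder lives in `converter/image2spike.py` and may differ.

```python
import numpy as np

def encode_power_law(pixels, duration=500.0, exponent=2.0, t_min=1.0):
    """Map normalized pixel intensities in [0, 1] to spike times (ms).

    Brighter pixels fire earlier; the power-law exponent spreads
    mid-range intensities apart. Zero-intensity pixels emit no spike
    (returned as NaN). Illustrative sketch only.
    """
    pixels = np.asarray(pixels, dtype=float)
    times = np.full(pixels.shape, np.nan)
    active = pixels > 0
    # Intensity 1.0 -> t_min; intensity near 0 -> close to `duration`.
    times[active] = t_min + (duration - t_min) * (1.0 - pixels[active]) ** exponent
    return times

spike_times = encode_power_law(np.array([0.0, 0.25, 0.5, 1.0]))
```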
**Stage 1 — structural growth (`train_stage1.py`):**
Hebbian co-activation builds sparse, task-relevant connectivity in the memory layer. Every 10 samples, FIP/FID classification updates synaptic weights and prunes weak connections.
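The growth loop can be sketched on a dense weight matrix. The function names, the nascent-synapse weight, the weakening factor, and the pruning probability are placeholders; the real implementation manipulates NEST connections inside `train/my_train.py`.

```python
import numpy as np

def hebbian_grow(spike_times, counts, weights, time_th=5.0, init_weight=1.0):
    """Connect every pair of co-active neurons whose spike times differ
    by at most `time_th` ms (illustrative sketch; updates in place)."""
    fired = ~np.isnan(spike_times)
    counts[fired] += 1
    idx = np.flatnonzero(fired)
    for i in idx:
        for j in idx:
            if i != j and abs(spike_times[i] - spike_times[j]) <= time_th:
                if weights[i, j] == 0:
                    weights[i, j] = init_weight  # assumed nascent weight

def fip_fid_update(counts, weights, fip_th=9, fid_th=3,
                   fip_weight=800.0, prune_prob=0.5, rng=None):
    """FIP/FID step run periodically (every 10 samples in DPNet).
    The weakening factor and `prune_prob` are assumed placeholders."""
    rng = rng or np.random.default_rng(0)
    fip = counts >= fip_th
    fid = counts <= fid_th
    # FIP neurons: set every existing outgoing synapse to the strong weight.
    weights[fip] = np.where(weights[fip] != 0, fip_weight, 0.0)
    # FID neurons: halve outgoing synapses, then prune a random fraction.
    w_fid = weights[fid] * 0.5
    w_fid[rng.random(w_fid.shape) < prune_prob] = 0.0
    weights[fid] = w_fid
```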
**Stage 2 — weight training (`train_stage2.py`):**
Memory-to-output STDP is enabled with teacher signals. The teacher timing adapts each iteration based on the actual output spike times, until convergence or the iteration limit is reached. Finally, a three-layer fully-connected network is trained on the frozen memory representations with Adam and cross-entropy loss.
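The final classifier step amounts to a standard PyTorch training loop. This is a minimal sketch under the default flags (hidden width 1024, Adam at 1e-4, cross-entropy); the class name is hypothetical, and the input width assumes one feature per memory neuron (2000 for the default 100 x 10 x 2 grid).

```python
import torch
import torch.nn as nn

class MemoryClassifier(nn.Module):
    """Three-layer fully-connected readout over frozen memory
    representations (illustrative sketch, not the repository's class)."""
    def __init__(self, in_dim=2000, hidden=1024, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def train_classifier(model, loader, epochs=500, lr=1e-4, device="cpu"):
    """Adam + cross-entropy training loop over (features, labels) batches."""
    model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for feats, labels in loader:
            feats, labels = feats.to(device), labels.to(device)
            opt.zero_grad()
            loss = loss_fn(model(feats), labels)
            loss.backward()
            opt.step()
```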
## Model format

Saved models are plain text files, one synapse per line:

```
source_id:target_id:weight
```

Load with `-l -p path/to/model.net` in `train_stage1.py`, or `-p path/to/model.net` in `train_stage2.py`.
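Because the format is plain `source:target:weight` lines, a loader is a few lines of Python. This is a hypothetical helper for inspecting saved models, not a function from the repository:

```python
def load_synapses(path):
    """Parse a saved .net model file into (source, target, weight) tuples."""
    synapses = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue  # skip blank lines
            src, dst, w = line.split(":")
            synapses.append((int(src), int(dst), float(w)))
    return synapses
```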
## Citation

If you use this code in your research, please cite:

```bibtex
@article{lei2025form,
  title={How to form brain-like memory in spiking neural networks with the help of frequency-induced mechanism},
  author={Lei, Yunlin and Li, Huiqi and Li, Mingrui and Chen, Yaoyu and Zhang, Yu and Jin, Zihui and Yang, Xu},
  journal={Neurocomputing},
  volume={623},
  pages={129361},
  year={2025},
  publisher={Elsevier}
}
```