This project implements a parallel training pipeline for inferring thermodynamic quantities (e.g., stochastic entropy production) from trajectories using neural networks.
It uses:
- MPI (`mpi4py`) for parallelization
- PyTorch for model training
- YAML for experiment configuration
To run the minimal example:

```
mpirun -n x python fivebeads_main_parallel.py config.yaml
```

where `x` is the number of parallel training processes. For example, to run with 4 processes:

```
mpirun -n 4 python fivebeads_main_parallel.py config.yaml
```

The script requires a YAML config file `config.yaml`, which looks like:
```yaml
---
base_directory: './testing_mpi/'
simulation:
  dt: .01
  path_length: 100
  init: [0.56, -0.23, 0.14, -0.12, 0.09, 0.87, -0.15, 0.08, -0.19, 0.92, -0.21, 0.11, 0.68, -0.17, 0.79]
  kBT: [1, 2]
  mob: 1
  k: 1
  coarse: 1
  coarse_steps: [1, 2, 3]
# Training options for the model
training:
  n_epoch: 10
  epoch_s: 8_000
  n_iter: 2
  iter_s: 4_096
  n_infer: 1
  infer_s: 2_000
  lr: .0001
  wd: .00001
  patience: 5
  min_delta: 0
# Model options
u_model:
  n_input: 5
  n_hidden: 32
  n_output: 5
  num_inner: 2
dtlogf_model:
  n_input: 5
  n_hidden: 32
  n_output: 1
  num_inner: 2
```

### `simulation`

Controls simulation parameters.
- `dt`: Simulation time step size
- `path_length`: Number of simulation steps per trajectory
- `init`: Initial state vector. Here: the upper-triangular part of the covariance matrix of the initial Gaussian distribution,
  $(\Sigma_{11}, \ldots, \Sigma_{15}, \Sigma_{22}, \ldots, \Sigma_{25}, \Sigma_{33}, \ldots, \Sigma_{35}, \ldots, \Sigma_{55})$
- `kBT`: Boltzmann constant × temperature
- `mob`: Mobility coefficient (not used)
- `k`: Spring constant
- `coarse`:
- `coarse_steps`: List of coarse-graining levels
Here, for example, the following data are sent to the neural networks:

- `1` → full trajectory
- `2` → every 2 steps
- `3` → every 3 steps
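The effect of `coarse_steps` can be sketched as follows. This is a minimal illustration of the subsampling, not the project's actual data loader; the trajectory shape is assumed to match the example config (`path_length: 100`, 5-dimensional state):

```python
import numpy as np

# A toy trajectory: 100 time steps of a 5-dimensional state
# (shapes chosen to match the example config; purely illustrative).
trajectory = np.random.default_rng(0).normal(size=(100, 5))

# Each entry in coarse_steps keeps every n-th time step.
coarse_steps = [1, 2, 3]
coarsened = {n: trajectory[::n] for n in coarse_steps}

for n, traj in coarsened.items():
    print(n, traj.shape)  # 1 -> (100, 5), 2 -> (50, 5), 3 -> (34, 5)
```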
### `training`

Controls neural network training.

- `n_epoch`: Number of epochs generated
- `epoch_s`: Samples per epoch generated
- `n_iter`: Iterations per training stage
- `iter_s`: Batch size
- `n_infer`: Number of inference runs (not used)
- `infer_s`: Validation dataset size
- `lr`: Learning rate
- `wd`: Weight decay
- `patience`: Stop if no improvement after N steps
- `min_delta`: Minimum improvement threshold
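The `patience` / `min_delta` pair describes a standard early-stopping rule. A minimal sketch of that logic (an illustration of the general technique, not the repository's exact implementation):

```python
def should_stop(losses, patience=5, min_delta=0.0):
    """Return True once the loss has failed to improve by more than
    min_delta for `patience` consecutive evaluations."""
    best = float("inf")
    bad_steps = 0
    for loss in losses:
        if loss < best - min_delta:
            best = loss      # improvement: reset the counter
            bad_steps = 0
        else:
            bad_steps += 1   # no meaningful improvement
            if bad_steps >= patience:
                return True
    return False

print(should_stop([1.0, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9]))  # True
print(should_stop([1.0, 0.9, 0.8, 0.7]))                 # False
```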
### `u_model`

Neural network for local entropy production.

```yaml
u_model:
  n_input: 5
  n_hidden: 32
  n_output: 5
  num_inner: 2
```

- `n_input`: Input dimension
- `n_hidden`: Hidden size
- `n_output`: Output dimension
- `num_inner`: Number of hidden layers
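One plausible reading of these options is a plain multilayer perceptron. The sketch below is an assumption for illustration, not the repository's exact architecture (in particular, the activation function is not specified by the config):

```python
import torch
import torch.nn as nn

def build_mlp(n_input, n_hidden, n_output, num_inner):
    # num_inner hidden layers of width n_hidden; ReLU is an assumed choice.
    layers = [nn.Linear(n_input, n_hidden), nn.ReLU()]
    for _ in range(num_inner - 1):
        layers += [nn.Linear(n_hidden, n_hidden), nn.ReLU()]
    layers.append(nn.Linear(n_hidden, n_output))
    return nn.Sequential(*layers)

u_model = build_mlp(5, 32, 5, 2)       # local entropy production network
dtlogf_model = build_mlp(5, 32, 1, 2)  # temporal score network
```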
### `dtlogf_model`

Neural network for temporal score function.

```yaml
dtlogf_model:
  n_input: 5
  n_hidden: 32
  n_output: 1
  num_inner: 2
```

The parameters have the same meanings as for `u_model`.

After running the experiment, the results directory will look like:
```
results/<timestamp>/
├── 1883_rank1/
├── 1889_rank0/
├── config.yaml
├── IDs.json
├── nn_final_diss_cum_coarse_01.npz
├── nn_final_diss_cum_coarse_02.npz
├── nn_final_diss_cum_coarse_03.npz
└── theo_final_diss.npy
```

The files `nn_final_diss_cum_coarse_01.npz`, `nn_final_diss_cum_coarse_02.npz`, and `nn_final_diss_cum_coarse_03.npz` each correspond to one coarse-graining level.
Loading a file with `np.load(file)` returns arrays under the keys:

- `first_order`: cumulative stochastic entropy production (1st-order estimator)
- `second_order`: cumulative stochastic entropy production (2nd-order estimator)
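Reading the results can be sketched as follows. The file here is a stand-in written with the documented keys; the array shapes are assumed for illustration, since the real files are produced by the training run:

```python
import numpy as np

# Create a stand-in .npz file with the documented keys
# (contents and shapes are illustrative only).
path = "nn_final_diss_cum_coarse_01.npz"
np.savez(path,
         first_order=np.linspace(0.0, 1.0, 100),
         second_order=np.linspace(0.0, 1.1, 100))

data = np.load(path)
print(sorted(data.files))         # ['first_order', 'second_order']
print(data["first_order"].shape)  # (100,)
```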
More details can be found in the Jupyter notebook `nn_od_example_notebook.ipynb`.
If you use this code, please cite:
Lyu, J., Ray, K. J., & Crutchfield, J. P. (2025).
Learning Stochastic Thermodynamics Directly from Correlation and Trajectory-Fluctuation Currents.
arXiv:2504.19007 (Accepted by PRE)