This repository contains code, datasets, and results from the paper:
Kamila Zdybał, James C. Sutherland, Alessandro Parente - Optimizing progress variables for ammonia/hydrogen combustion using encoding-decoding networks, Combustion and Flame, 276:114152, 2025.
You can find the open-access article here: https://www.sciencedirect.com/science/article/pii/S0010218025001907/pdf.
BibTeX citation:
@article{zdybal2025optimizing,
  title = {Optimizing progress variables for ammonia/hydrogen combustion using encoding-decoding networks},
  author = {Kamila Zdybał and James C. Sutherland and Alessandro Parente},
  journal = {Combustion and Flame},
  volume = {276},
  pages = {114152},
  issn = {0010-2180},
  year = {2025},
  publisher = {Elsevier}
}
Data and results files will be shared separately via Google Drive, as they take up over 5 GB of space.
- Script for loading data: ammonia-Stagni-load-data.py
We used Python 3.10.13 and the following versions of all libraries:
pip install numpy==1.26.2
pip install pandas==2.1.3
pip install scipy==1.11.4
pip install scikit-learn==1.3.2
pip install tensorflow==2.15.0
pip install keras==2.15.0
You will also need our library PCAfold==2.2.0.
Other requirements are:
pip install matplotlib
pip install plotly
pip install cmcrameri
Python scripts stored in scripts/ allow you to train encoder-decoders and assess the optimized PVs.
The scripts will produce results saved as .csv and .h5 files that you can later post-process with dedicated
Jupyter notebooks stored in jupyter-notebooks/.
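Once a run finishes, the generated .csv results can be inspected with standard Python tooling before opening the notebooks. Below is a minimal, self-contained sketch; the file name and columns are hypothetical stand-ins written by the snippet itself, not the actual output schema of the scripts:

```python
# Hypothetical sketch: inspecting a results .csv file.
# 'example-results.csv' and its columns are stand-ins created here so the
# snippet runs on its own; in practice, point this at a file produced by
# RUN-PV-optimization.py.
import csv
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'example-results.csv')

# Write a tiny stand-in results file:
with open(path, 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['epoch', 'loss'])
    writer.writerows([[1, 0.52], [2, 0.31]])

# Read it back, as you would with any of the generated .csv files:
with open(path, newline='') as f:
    rows = list(csv.DictReader(f))

print(rows[-1]['loss'])  # prints 0.31
```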
First, run the PV optimization with RUN-PV-optimization.py with desired parameters.
Once you have the results files, you can run quantitative assessment of PVs with RUN-VarianceData.py.
Both scripts load the appropriate data under the hood using ammonia-Stagni-load-data.py.
You have a lot of flexibility in setting ANN hyper-parameters in those two scripts via the argparse Python library.
If you're new to argparse, check out my short video tutorials.
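For orientation, here is a hypothetical sketch of what such an argparse interface can look like. Only the flag names shown in this README (--parameterization, --data_type, --data_tag, --random_seeds_tuple, --target_variables_indices) are taken from the repository; the defaults and choices below are assumptions, not the scripts' actual definitions:

```python
# Hypothetical sketch of an argparse interface similar to the one in
# RUN-PV-optimization.py; defaults and choices are assumed, not copied
# from the script.
import argparse

def build_parser():
    parser = argparse.ArgumentParser(description='PV optimization (sketch)')
    parser.add_argument('--parameterization', type=str, default='f-PV',
                        choices=['f-PV', 'f-PV-h'])
    parser.add_argument('--data_type', type=str, default='SLF')
    parser.add_argument('--data_tag', type=str, default='NH3-H2-air-25perc')
    # Two integers, e.g. a range of random seeds:
    parser.add_argument('--random_seeds_tuple', type=int, nargs=2,
                        default=[0, 20])
    # A variable-length list of target variable indices:
    parser.add_argument('--target_variables_indices', type=int, nargs='+',
                        default=[0, 1, 3, 5, 6, 9])
    return parser

args = build_parser().parse_args(
    ['--parameterization', 'f-PV', '--random_seeds_tuple', '0', '20'])
print(args.parameterization)  # prints f-PV
```

Passing `nargs=2` and `nargs='+'` is how argparse accepts the space-separated values seen in the examples below (e.g. `--random_seeds_tuple 0 20`).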
- Master script for running PV optimization: RUN-PV-optimization.py
The above script uses one of the following under the hood, depending on which --parameterization you selected:
- QoI-aware encoder-decoder for the $(f, PV)$ optimization: QoI-aware-ED-f-PV.py
- QoI-aware encoder-decoder for the $(f, PV, \gamma)$ optimization: QoI-aware-ED-f-PV-h.py
- Master script for running quantitative PV assessment: RUN-VarianceData.py
The above script uses one of the following under the hood, depending on which --parameterization you selected:
- Assessment of $(f, PV)$ parameterizations: VarianceData-f-PV.py
- Assessment of $(f, PV, \gamma)$ parameterizations: VarianceData-f-PV-h.py
This is a minimal example of running a Python script with all hyper-parameters set as per §2.2 in the paper:

python RUN-PV-optimization.py --parameterization 'f-PV' --data_type 'SLF' --data_tag 'NH3-H2-air-25perc' --random_seeds_tuple 0 20 --target_variables_indices 0 1 3 5 6 9

Alternatively, you can change various parameters (kernel initializer, learning rate, etc.) using the appropriate argument:

python RUN-PV-optimization.py --initializer 'GlorotUniform' --init_lr 0.001 --parameterization 'f-PV' --data_type 'SLF' --data_tag 'NH3-H2-air-25perc' --random_seeds_tuple 0 20 --target_variables_indices 0 1 3 5 6 9

If you'd like to remove pure stream components from the PV definition (non-trainable pure streams preprocessing, as discussed in §3.4 in the paper), add the flag --no-pure_streams as an extra argument.
To run the $(f, PV)$ optimization, pass --parameterization 'f-PV'. To run the $(f, PV, \gamma)$ optimization, pass --parameterization 'f-PV-h'.

Note: Logging with Weights & Biases is also possible in the scripts above.
Results generated with the Python scripts described above can be post-processed and visualized
in dedicated Jupyter notebooks stored in jupyter-notebooks/.
You can access the appropriate notebook below:
→ This Jupyter notebook can be used to reproduce Fig. 2 and Fig. 3.
→ This Jupyter notebook can be used to reproduce Fig. 4.
→ This Jupyter notebook can be used to reproduce Fig. 5, Fig. 6, and Fig. 7.
→ This Jupyter notebook can be used to reproduce Fig. 8, Fig. 9, and Fig. 10.
→ This Jupyter notebook can be used to reproduce supplementary Figs. S37-S38.
