A standalone Fortran interface to Weights & Biases (wandb or W&B) experiment tracking.
Log training metrics, hyper-parameters, and hyperparameter sweeps directly from Fortran — no Python in your training loop required.
Note: This library has been developed with the assistance of LLM AI code agents and has not yet been thoroughly tested in production.
wandb-fortran embeds the Python interpreter at runtime using the Python C API,
imports wandb, and forwards every Fortran API call to Python. You get the
full wandb feature set (real-time dashboards, sweep search, artifact tracking)
with a clean Fortran interface.
```
┌──────────────────────────────┐
│ Your Fortran training code   │
│   use wf                     │
│   call wandb_log(...)        │
└──────────┬───────────────────┘
           │ iso_c_binding
┌──────────▼───────────────────┐
│ wf_wandb_c.c (C bridge)      │
│ Embeds Python interpreter    │
└──────────┬───────────────────┘
           │ Python C API
┌──────────▼───────────────────┐
│ wandb Python package         │
│   → wandb server / cloud     │
└──────────────────────────────┘
```
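The C bridge follows the standard CPython embedding pattern. The sketch below is illustrative only (the file name `embed_demo.c` is hypothetical, not the actual `wf_wandb_c.c`): start the interpreter, hand it Python code, and shut it down. The link flags come from `python3-config`, which ships with the Python development package.

```shell
# Write a minimal embedding program, analogous in spirit to wf_wandb_c.c.
cat > embed_demo.c <<'EOF'
#include <Python.h>

int main(void) {
    Py_Initialize();   /* start the embedded interpreter */
    /* A real bridge would PyImport_ImportModule("wandb") and call into it;
       here we just prove the interpreter is alive. */
    PyRun_SimpleString("import sys; print(sys.version.split()[0])");
    Py_Finalize();     /* shut the interpreter down */
    return 0;
}
EOF

# Compile and link against libpython (--embed is required from Python 3.8 on).
cc embed_demo.c $(python3-config --includes) $(python3-config --ldflags --embed) \
   -o embed_demo
```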
wandb-fortran is distributed with the following directories:
| Directory | Description |
|---|---|
| docs/ | Compilable documentation |
| example/ | A set of example programs utilising the wandb-fortran library |
| src/ | Source code |
| tools/ | Additional shell script tools for environment setup |
| test/ | A set of test programs to check functionality of the library works after compilation |
Tutorials and documentation are provided on the docs website.
Refer to the API Documentation section later in this document to see how to access the API-specific documentation.
The wandb-fortran library can be obtained from the git repository. Use the following commands to get started:
```shell
git clone https://github.com/nedtaylor/wandb-fortran.git
cd wandb-fortran
```
The library has the following dependencies:
| Requirement | Version |
|---|---|
| gfortran (or compatible) | ≥ 13 |
| fpm | ≥ 0.13 |
| Python + dev headers | ≥ 3.8 |
| wandb Python package | ≥ 0.25 |
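A quick way to confirm that the toolchain meets these requirements is to query each tool's version (assuming the tools are on your PATH):

```shell
# Report the version of each required tool, or flag it as missing.
for tool in gfortran fpm python3; do
  if command -v "$tool" >/dev/null 2>&1; then
    printf '%s: ' "$tool"
    "$tool" --version 2>/dev/null | head -n 1
  else
    echo "$tool: NOT FOUND"
  fi
done
```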
The library has been developed and tested using the following compilers:
- gfortran -- gcc 15.2.0
To utilise wandb, it must be installed and the necessary paths made available to the Fortran compiler.
To install wandb, use pip:
```shell
pip install wandb
```
Next, if you want to log data to the wandb website, you will need a wandb account and to be logged in. Note that wandb can be run in offline mode without an account; this is briefly covered later.
To create an account for wandb and set up an API key, follow the quickstart guide provided on the W&B documentation.
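If you would rather skip account setup for now, wandb's offline mode stores runs locally. `WANDB_MODE` is an environment variable honoured by the wandb package itself:

```shell
# Log runs locally without an account; they can be uploaded later.
export WANDB_MODE=offline

# ... run your training program ...

# Later, upload the locally stored runs (this step does require logging in):
# wandb sync wandb/offline-run-*
```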
Once an account has been set up, log in from the terminal and follow any necessary prompts:
```shell
wandb login
```
Next, the Python flags and paths need to be configured so that the current shell session points to the correct locations.
Source the setup_env.sh script (note, this requires using the source command, not just executing it):
```shell
# If you use conda or a virtualenv, activate it first.
conda activate my_env
source tools/setup_env.sh

# Or force a specific interpreter explicitly.
PYTHON=/opt/homebrew/Caskroom/miniconda/base/envs/my_env/bin/python \
  source tools/setup_env.sh
```
When a conda environment or virtualenv is active, setup_env.sh prefers that environment's python automatically.
Build the library with fpm:
```shell
fpm build
# or
./build_fpm.sh
```
Run the example programs with:
```shell
fpm run --example athena_logging
fpm run --example neural_fortran_logging
```
Run the test suite (offline mode avoids needing a wandb account):
```shell
WANDB_MODE=offline fpm test
```
wandb-fortran can be used as a dependency in your Fortran project using the Fortran Package Manager (fpm). Before doing so, ensure that the Python environment is set up as described in the previous section.
To use the library, simply add it as a dependency in your fpm.toml:
```toml
[dependencies]
wandb-fortran = { git = "https://github.com/nedtaylor/wandb-fortran" }
```
Then source tools/setup_env.sh before running fpm build.
The following is a minimal example of how to use the library in your Fortran code:
```fortran
program minimal_example
  use wf
  implicit none

  integer :: epoch
  real(8) :: train_loss, val_loss, lr

  ! Initialise a run
  call wandb_init(project="my-project", name="experiment-01")

  ! Log hyper-parameters (shown on the Config panel)
  call wandb_config_set("learning_rate", 0.001d0)
  call wandb_config_set("epochs", 200)
  call wandb_config_set("optimizer", "adam")

  ! Training loop
  do epoch = 1, 200
     ! ... train ...
     call wandb_log("training_loss", train_loss, step=epoch)
     call wandb_log("validation_loss", val_loss, step=epoch)
     call wandb_log("learning_rate", lr, step=epoch)
  end do

  ! Finish the run and shut down the embedded interpreter
  call wandb_finish()
  call wandb_shutdown()
end program minimal_example
```
API documentation can be generated using FORD (Fortran Documenter). Follow the installation guide on the FORD website to ensure FORD is installed, then run the following command in the root directory of the git repository:
```shell
ford ford.md
```
Please note that this project adheres to the Contributing Guide. If you want to contribute to this project, please first read through the guide. If you have any questions, please either discuss them in issues, or contact Ned Taylor.
This work is licensed under an MIT license.