This repository provides the implementation of our method for estimating and dynamically updating the parameters of the atmospheric scattering model from a sequence of foggy stereo images.
This work has been accepted by IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) in 2025.
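For reference, the atmospheric scattering model our method estimates the parameters of is commonly written as $I = J e^{-\beta d} + A (1 - e^{-\beta d})$. The sketch below is illustrative (the function name and vectorised form are ours, not part of the released code):

```python
import numpy as np

def apply_fog(J, d, beta, A):
    """Standard atmospheric scattering model:
    I = J * exp(-beta * d) + A * (1 - exp(-beta * d)),
    where J is the clear-scene radiance, d the scene distance,
    beta the scattering coefficient and A the atmospheric light."""
    t = np.exp(-beta * np.asarray(d, dtype=float))  # transmission
    return np.asarray(J, dtype=float) * t + A * (1.0 - t)
```

At zero distance the observed radiance equals the scene radiance; as distance grows it converges to the atmospheric light.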
- Representation of a local map as a bipartite graph that describes which frames observe which landmarks
- Generation of distance-radiance pairs, where distance is computed using the camera's pose and the landmark's 3D position, and radiance is computed by applying gamma expansion to the intensity value of the landmark's corresponding 2D feature point in the frame
- Parameter estimation via optimisation, where the problem is represented as a hypergraph: each vertex is an optimisation variable (or a group of variables), and each edge is an observation error.
Note that these figures are illustrative. In reality, a local map typically contains many more landmarks, with each landmark being observed in many more frames. The graphs are thus much larger in practice.
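The distance-radiance pair generation described above can be sketched as follows. The helper name and the fixed gamma of 2.2 are illustrative only; our method uses photometric parameters calibrated in the lab rather than a generic gamma curve:

```python
import numpy as np

def distance_radiance_pair(cam_centre, landmark_xyz, intensity, gamma=2.2):
    """Build one (distance, radiance) observation for a landmark
    seen in a given frame. Illustrative sketch, not the released code."""
    # Distance: Euclidean distance from the camera centre (taken from
    # the frame's pose) to the landmark's 3D position.
    d = np.linalg.norm(np.asarray(cam_centre, float)
                       - np.asarray(landmark_xyz, float))
    # Radiance: gamma expansion of the 8-bit intensity of the
    # landmark's corresponding 2D feature point.
    radiance = (intensity / 255.0) ** gamma
    return d, radiance
```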
As part of this research, we collected the Stereo Driving in Real Fog (SDIRF) dataset, which was extensively used in our experiments and evaluation.
SDIRF contains high-quality, consecutive stereo frames of real-world foggy driving scenes captured under a wide range of visibility conditions. To the best of our knowledge, SDIRF is the first dataset with the following features:
- 52 foggy sequences in total: 32 thick fog, 20 thin fog
- Over 40 minutes of footage (>34,000 frames)
- Includes camera photometric parameters calibrated in a controlled lab environment, which are essential for accurate application of the atmospheric scattering model
- Also includes clear-weather counterparts of the same routes recorded under overcast conditions, providing valuable data for companion work in image defogging and depth reconstruction.
Dataset link: [Coming soon]
Our code is built upon ORB-SLAM2.
- We use stereo ORB-SLAM2 to obtain multiple observations of the same landmark from a range of known distances. Please follow their instructions to install the required dependencies.
- We use the Ceres Solver (2.1.0) to solve the optimisation problem for estimating the fog parameters. Please follow their instructions for installation.
After installing the dependencies, make sure you are in the root directory of this project and execute:
```
chmod +x build.sh
./build.sh
```

This will create `libORB_SLAM2.so` in the `lib` folder and the following two executables in the `Examples/Stereo` folder:
- stereo_vkitti2
- stereo_sdirf
We illustrate how to organise synthetic foggy images (see Section V-A of our paper for the synthesising procedure) and real foggy images for evaluation, using the Virtual KITTI 2 dataset (VKITTI2) and SDIRF, respectively, as examples.
We provide three sample foggy sequences of VKITTI2 that you can download for quick testing.
Download and unzip the file, and place the resulting folders in `YourFoggyVKITTI2Folder`, so that the folder structure is as follows:
YourFoggyVKITTI2Folder
├── Scene01_fog_40_0.7
├── Scene01_fog_60_0.7
└── Scene01_fog_80_0.7
Download and unzip the files of the foggy sequences of SDIRF, and place the resulting folders in `YourSDIRFFolder`, so that the folder structure is as follows:
YourSDIRFFolder
├── P00_FogThick
├── ...
├── P24_FogThin
├── Q00_FogThick
├── ...
├── Q25_FogThick
└── R00_FogThin
Before running any example, create a results folder where the estimation results will be saved.
```
mkdir results
```

As an example, execute the following command to run on the foggy sequence `Scene01_fog_40_0.7` of VKITTI2:

```
./Examples/Stereo/stereo_vkitti2 ./Vocabulary/ORBvoc.txt ./Examples/Stereo/VKITTI2.yaml YourFoggyVKITTI2Folder/Images/Scene01_fog_40_0.7/frames/gray/
```

The arguments follow the same convention as in ORB-SLAM2.
As an example, execute the following command to run on the P00_FogThick sequence of SDIRF.
```
./Examples/Stereo/stereo_sdirf ./Vocabulary/ORBvoc.txt ./Examples/Stereo/SDIRF.yaml YourSDIRFFolder/P00_FogThick/images_colour/ G5E10
```

The first three arguments follow the same convention as in ORB-SLAM2. The last argument is the combination of gain and exposure of the corresponding SDIRF sequence, which can be obtained from Table III in the supplementary material.
Estimation results are appended on-the-fly to `.txt` files in the `results` folder. There are three of them (or four when running an SDIRF sequence):
| File name | Method |
|---|---|
| `OthersLi-AModeMax-PreservePositiveBetaFalse.txt` | Li's method |
| `OthersLi-AModeMedian-PreservePositiveBetaTrue.txt` | Li's modified method |
| `Ours-Stage2-Weightproductthenuniform-IntensityModeraw-Optimiserceres_tight.txt` | Our method |
| `Ours-Stage2-Weightproductthenuniform-IntensityModeraw-Optimiserceres_tight-WoGc.txt` | Our method without gamma correction, i.e., using intensity (only generated when running an SDIRF sequence) |
In each file, each row corresponds to an update of the fog parameters.
When running a VKITTI2 sequence (grayscale), each row in the files for Li's method and Li's modified method has 6 entries:
- Column 0: Frame index
- Column 1: Keyframe index (from ORB-SLAM2)
- Columns 2 - 5: grayscale channel: $\hat{\beta}$, $\hat{A}$, number of valid estimates of $\beta$ used to build its histogram, and total number of estimates of $\beta$ (for example, if there are 3 landmarks, the first observed in 4 frames, the second in 5, and the third in 6, then the total number of estimates of $\beta$ is $\binom{4}{2} + \binom{5}{2} + \binom{6}{2} = 6 + 10 + 15 = 31$)
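The pairwise count in the example above can be checked directly: each landmark observed in $n$ frames yields $\binom{n}{2}$ frame pairs, each giving one estimate of $\beta$.

```python
from math import comb

# Landmarks observed in 4, 5 and 6 frames contribute C(n, 2)
# pairwise beta estimates each.
total = sum(comb(n, 2) for n in (4, 5, 6))
print(total)  # 6 + 10 + 15 = 31
```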
For our method, each row has 7 entries:
- Column 0: Frame index
- Column 1: Keyframe index (from ORB-SLAM2)
- Columns 2 - 6: grayscale channel: $\hat{\beta}$, $\hat{A}$, number of landmarks, number of observations, and number of iterations to converge (all in the 2nd optimisation stage)
When running an SDIRF sequence (colour), each row in the files for Li's method and Li's modified method has 18 entries:
- Column 0: Frame index
- Column 1: Keyframe index (from ORB-SLAM2)
- Columns 2 - 5: blue colour channel: $\hat{\beta}$, $\hat{L}_\infty$, number of valid estimates of $\beta$ used to build its histogram, and total number of estimates of $\beta$
- Columns 6 - 9: green colour channel: same as above
- Columns 10 - 13: red colour channel: same as above
- Columns 14 - 17: grayscale: same as above
For our method, each row has 22 entries:
- Column 0: Frame index
- Column 1: Keyframe index (from ORB-SLAM2)
- Columns 2 - 6: blue colour channel: $\hat{\beta}$, $\hat{L}_\infty$ (or $\hat{A}$ when using intensity), number of landmarks, number of observations, and number of iterations to converge (all in the 2nd optimisation stage)
- Columns 7 - 11: green colour channel: same as above
- Columns 12 - 16: red colour channel: same as above
- Columns 17 - 21: grayscale: same as above
We use Python for visualisation.
Make sure you have installed the required packages pandas and matplotlib.
The Python script `visualisation/plot_beta_vs_frame_vkitti2.py` is provided to visualise the results by plotting the estimated $\hat{\beta}$ against the frame index.
For example, executing the following command on the result files generated by running the executable on the sequence Scene01_fog_40_0.7 will create ./visualisation/beta_vs_frame_VKITTI2_Scene01_fog_40_0.7.pdf, which should reproduce (may not exactly because ORB-SLAM2 is multi-threaded and each run can generate slightly different key frames and landmarks) the top of Fig. 7(a) in the supplementary material.
```
python3 ./visualisation/plot_beta_vs_frame_vkitti2.py --result_path ./results --sequence Scene01_fog_40_0.7 --output_path ./visualisation
```

The Python script `visualisation/plot_beta_vs_frame_sdirf.py` is provided to visualise the results of an SDIRF sequence by plotting the estimated $\hat{\beta}$ against the frame index.
For example, executing the following command on the result files generated by running the executable on the sequence P11_FogThin will create ./visualisation/beta_vs_frame_SDIRF_P11_FogThin.pdf, which should reproduce Fig. 12(b) in the paper.
```
python3 ./visualisation/plot_beta_vs_frame_sdirf.py --result_path ./results --sequence P11_FogThin --output_path ./visualisation
```

Besides ORB-SLAM2, we also used the following code, mainly for generating baseline results. We thank the authors for making their code open source.
We adapted this implementation to radiance images and used it to generate the atmospheric light estimates reported under the caption “Berman’s” in Fig. 13 of the paper.
We adapted this implementation to radiance images and used it to generate the atmospheric light estimates reported under the captions “Li’s” and “Li’s mod” in Fig. 13 of the paper. We also used it to generate the defogged images in the top four rows of Fig. 14 of the paper with known corresponding atmospheric light that is shown in Fig. 13.
Please consider citing our work if you find it useful for your research.
```bibtex
@article{ding2025estimating,
  title={Estimating Fog Parameters From a Sequence of Stereo Images},
  author={Ding, Yining and Mota, João F. C. and Wallace, Andrew M. and Wang, Sen},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2026},
  volume={48},
  number={3},
  pages={2154--2169},
  publisher={IEEE},
  doi={10.1109/TPAMI.2025.3626275}
}
```

This project is released under GPL-3.0.
See the LICENSE file for details.
- Add code
- Add links to the paper
- Add citation section
- Release the SDIRF dataset




