This repository contains the Python code used for creating hyperspectral preprocessing pipelines in the LWDA 2023 publication
Preprocessing Ground-Based Hyperspectral Image Data for Improving CNN-based Classification
by Andreas Schliebitz et al. For citing this work, please see Citing HIPP.
This project uses Poetry for managing Python dependencies (see pyproject.toml). As the code was only tested with Python 3.8 and the dependency versions current at the time of publication, a Dockerfile based on python:3.8 is provided as an execution environment.
This installation method is intended for users who have Python 3.8 natively installed on their systems. As time progresses, more and more users will update to newer and therefore untested Python versions. If you don't have Python 3.8 installed, you can use the Dockerfile method below.
- Install Poetry:

  ```shell
  curl -sSL https://install.python-poetry.org | python3 -
  ```

- Create and activate a virtual environment:

  ```shell
  poetry shell
  ```

- Install the requirements:

  ```shell
  poetry lock
  poetry install
  ```
If Python 3.8 is not natively installed on your system, you can use the provided Dockerfile to create and run preprocessing pipelines using HIPP in a tested environment:
- Build the Docker image:

  ```shell
  docker build -t hipp .
  ```

- Instantiate the image and run `example.py` on the example hypercube inside of `data` using a dockerized environment:

  ```shell
  docker run --rm -it --name hipp -v ./data:/workspace/data hipp
  ```
To run your own HIPP code inside Docker, change the ENTRYPOINT of the Dockerfile and mount your own datasets as Docker volumes.
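As a minimal sketch of such a customization, the Dockerfile changes might look like the following. Note that `my_pipeline.py` and the `/workspace` target directory are hypothetical names chosen for illustration, not part of this repository:

```dockerfile
# Hypothetical example: copy your own pipeline script into the image
# and replace the default entrypoint with it.
COPY my_pipeline.py /workspace/my_pipeline.py
ENTRYPOINT ["python", "/workspace/my_pipeline.py"]
```

After rebuilding the image, your own dataset can then be mounted at runtime in the same way as the example data, e.g. with `docker run --rm -it -v ./my_data:/workspace/data hipp` (where `my_data` is a placeholder for your dataset directory).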
The code in this repository is released under the Creative Commons CC BY 4.0 License. See LICENSE for additional details.
## Citing HIPP

If you find this repository useful, please consider citing it in your work:
```bibtex
@inproceedings{Schliebitz2023,
  title={Preprocessing Ground-Based Hyperspectral Image Data for Improving CNN-based Classification},
  author={Andreas Schliebitz and Heiko Tapken and Martin Atzm{\"u}ller},
  booktitle={Lernen, Wissen, Daten, Analysen},
  year={2023},
  url={https://ceur-ws.org/Vol-3630/LWDA2023-paper35.pdf}
}
```