
GELAN-ViT-Models

GELAN-ViT-Models is a repository of GELAN-ViT models tailored for satellite object detection tasks. The GELAN-ViT code is based on the YOLOv9 repository. This repository also includes the dataset handler for running KD-YOLOX-ViT on the Satellite Object Detection (SOD) dataset.

Research Paper Citation:

The models are discussed in our research paper:

  • Wenxuan Zhang and Peng Hu, "Sensing for Space Safety and Sustainability: A Deep Learning Approach With Vision Transformers," in the 12th Annual IEEE International Conference on Wireless for Space and Extreme Environments (WiSEE 2024), 16-18 December 2024, Daytona Beach, FL, USA.

GELAN-ViT

Installation

Follow the steps below to install the required dependencies:

git clone git@github.com:AEL-Lab/GELAN-ViT-Models.git
cd GELAN-ViT-Models/GELAN-ViT
pip install -r requirements.txt

Training

To train a GELAN-ViT model on your machine or custom dataset, use the following command:

python train.py --workers 8 --device 0 --batch 32 --data path/to/data.yaml --img 640 --cfg models/detect/GELAN-ViT.yaml --weights '' --name gelan-vit --hyp hyp.scratch-adj.yaml --epochs 500
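The --data argument points to a YOLO-style dataset config file. As a minimal sketch, a data.yaml in the YOLOv9 family typically looks like the following; all paths and class names below are placeholders, not values from the SOD dataset:

```yaml
# Hypothetical dataset config; adjust paths and classes to your own dataset.
train: ../datasets/sod/images/train   # directory of training images
val: ../datasets/sod/images/val       # directory of validation images

nc: 1                                 # number of classes
names: ['satellite']                  # class names, one per class index
```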

Inference

To perform object detection using a trained GELAN-ViT model, use the following command:

# Run inference with a trained GELAN-ViT model
python detect.py --source './data/images/horses.jpg' --img 640 --device 0 --weights path/to/weights.pt --name gelan_vit_640_detect

Validation

To evaluate the performance of your trained model, use the following command:

python val.py --batch 8 --weights path/to/weights.pt --data path/to/data.yaml  --workers 6 --save-json
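The --save-json flag writes detections as COCO-style records (one entry per detection with an image ID, category ID, bounding box, and confidence score). A short Python sketch of post-processing such output is shown below; the sample records and the summarize helper are illustrative, not part of this repository:

```python
from collections import Counter

# Hypothetical sample of COCO-style detection records, mimicking the
# structure produced by val.py --save-json; real files contain many entries.
sample = [
    {"image_id": 1, "category_id": 0, "bbox": [10, 20, 30, 40], "score": 0.91},
    {"image_id": 1, "category_id": 1, "bbox": [5, 5, 15, 25], "score": 0.47},
    {"image_id": 2, "category_id": 0, "bbox": [0, 0, 50, 60], "score": 0.88},
]

def summarize(preds, min_score=0.5):
    """Count detections per category, keeping only confident ones."""
    kept = [p for p in preds if p["score"] >= min_score]
    return Counter(p["category_id"] for p in kept)

print(dict(summarize(sample)))  # prints {0: 2}
```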

Model Structures

  • Primary Models: located in GELAN-ViT/models/detect, these models integrate ViT into the head of GELAN.
  • Alternate Models: located in GELAN-ViT/models/detect/alternate, these provide an alternative implementation in which ViT is appended to GELAN's head. Both versions offer identical performance.

Dataset

The SOD dataset is available in a separate repository: https://github.com/AEL-Lab/satellite-object-detection-dataset.git.

YOLOX

We also provide dataset handlers for running KD-YOLOX-ViT on the SOD dataset.

Installation

Follow the steps below to install the required dependencies:

cd GELAN-ViT-Models/KD-YOLOX-ViT
pip install -r requirements.txt
pip install -v -e .

Ensure you update the following paths in the experiment configuration files before running the training:

  • Replace path/to/dataset.yaml with the actual path to your dataset configuration file.
  • Replace path/to/train/dataset with the actual path to your training dataset.
  • Replace path/to/validation/dataset with the actual path to your validation dataset.
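In the KD-YOLOX-ViT codebase, experiment settings live in a Python config file (here, exps/example/sodd/yolox_s_vit_sod.py). A heavily hedged sketch of what the edited placeholders might look like is shown below; the class and attribute names are illustrative only, so match them against the actual file:

```python
# Illustrative sketch only: attribute names in the real experiment
# config (exps/example/sodd/yolox_s_vit_sod.py) may differ.
class Exp:
    def __init__(self):
        # Replace the README's placeholders with your local dataset paths.
        self.dataset_yaml = "/data/sod/dataset.yaml"   # was path/to/dataset.yaml
        self.train_dir = "/data/sod/images/train"      # was path/to/train/dataset
        self.val_dir = "/data/sod/images/val"          # was path/to/validation/dataset
```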

Training

To train a KD-YOLOX-ViT model on the SOD dataset, use the following command:

python tools/train.py -f exps/example/sodd/yolox_s_vit_sod.py -b 8 --fp16

Validation

To evaluate the performance of your trained model, use the following command:

python tools/eval.py --speed -f exps/example/sodd/yolox_s_vit_sod.py -b 8 --fp16

Licensing

This repository contains code under two different licenses. Please refer to the LICENSE file in each folder for the detailed terms.
