Official PyTorch Implementation of MetaUAS: Universal Anomaly Segmentation with One-Prompt Meta-Learning, NeurIPS 2024.
- [2026-02-05]: Added 1-shot MetaUAS results on Real-IAD and Real-IAD-Variety datasets.
All metrics are reported in % as mean ± standard deviation; the I- prefix denotes image-level metrics and the P- prefix pixel-level metrics.

| Datasets | Methods | I-AUROC | I-AP | I-F1 | P-AUROC | P-AP | P-F1 | P-AUPRO |
|---|---|---|---|---|---|---|---|---|
| Real-IAD | MetaUAS (1-shot) | 80.0 ± 0.4 | 77.9 ± 0.4 | 72.4 ± 0.4 | 95.6 ± 0.2 | 36.6 ± 1.1 | 39.7 ± 1.0 | 83.5 ± 0.7 |
| Real-IAD-Variety | MetaUAS (1-shot) | 81.9 ± 0.1 | 96.3 ± 0.1 | 94.1 ± 0.0 | 92.0 ± 0.1 | 48.2 ± 0.4 | 48.3 ± 0.3 | 76.5 ± 0.1 |
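
For reference, the image- and pixel-level AUROC/AP/F1 scores above follow their standard definitions. Below is a minimal sketch (not the repository's evaluation code) of how they can be computed with scikit-learn; the input arrays are hypothetical placeholders, and P-AUPRO requires a per-region overlap computation that is omitted here.

```python
# A minimal sketch of the standard image-/pixel-level metrics, NOT the
# repository's evaluation code. Inputs below are random placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score, f1_score

# Hypothetical inputs: per-image anomaly scores/labels and per-pixel maps.
image_scores = np.random.rand(100)                        # predicted image-level scores
image_labels = np.random.randint(0, 2, 100)               # 1 = anomalous image
pixel_scores = np.random.rand(100, 256, 256)              # predicted anomaly maps
pixel_labels = np.random.randint(0, 2, (100, 256, 256))   # ground-truth masks

i_auroc = roc_auc_score(image_labels, image_scores)
i_ap = average_precision_score(image_labels, image_scores)
p_auroc = roc_auc_score(pixel_labels.ravel(), pixel_scores.ravel())
p_ap = average_precision_score(pixel_labels.ravel(), pixel_scores.ravel())

# F1 is usually reported at the optimal threshold; a fixed 0.5 is used here
# only for brevity.
i_f1 = f1_score(image_labels, image_scores > 0.5)
p_f1 = f1_score(pixel_labels.ravel(), pixel_scores.ravel() > 0.5)
print(i_auroc, i_ap, i_f1, p_auroc, p_ap, p_f1)
```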
MetaUAS unifies anomaly segmentation as change segmentation and provides a purely visual foundation model: given a single normal image as a prompt and no additional training, it segments any visual anomaly effectively and efficiently. MetaUAS significantly outperforms most zero-shot and few-shot anomaly segmentation methods, and is competitive with even full-shot methods.
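
The one-prompt inference flow can be sketched as follows. This is an illustrative sketch, not the repository's actual API: the checkpoint path, scripted-module loading, and forward signature are assumptions.

```python
# A minimal one-prompt inference sketch, NOT the repository's actual API.
# The checkpoint path, model format, and (query, prompt) forward signature
# are assumptions for illustration only.
import torch
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((256, 256)),  # assumed input resolution
    transforms.ToTensor(),
])

# Hypothetical: load a released checkpoint as a TorchScript module.
model = torch.jit.load("metauas.pt").eval()

prompt = preprocess(Image.open("prompt_normal.png").convert("RGB"))[None]
query = preprocess(Image.open("query.png").convert("RGB"))[None]

with torch.no_grad():
    anomaly_map = model(query, prompt)  # assumed (query, prompt) signature
mask = (anomaly_map > 0.5).float()      # threshold is illustrative
```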
For a quick start, you can test your own data with our Online Demo. Note that the online demo currently runs on CPU. You can also deploy the demo application on your local CPU/GPU server with the following commands:
```bash
pip install -r requirements.txt
python app.py
```
To run testing with the pre-trained models, use:

```bash
bash test.sh
```
- Release pre-trained MetaUAS models
- Deploy the online MetaUAS demo
- Release testing code
- Release training code
If you find this code useful in your research, please consider citing our paper:
```bibtex
@inproceedings{gao2024metauas,
  title     = {MetaUAS: Universal Anomaly Segmentation with One-Prompt Meta-Learning},
  author    = {Gao, Bin-Bin},
  booktitle = {Advances in Neural Information Processing Systems},
  pages     = {39812--39836},
  year      = {2024}
}
```