From a41d83f254ea84ec4e99e7f0030f256f427c2ad9 Mon Sep 17 00:00:00 2001
From: Christoforos Spartalis <52541427+cspartalis@users.noreply.github.com>
Date: Fri, 20 Mar 2026 18:58:47 +0200
Subject: [PATCH] Add reference to Spartalis et al. papers in README

Added a CVPR 2025 main track paper and an ICML 2025 Workshop paper.
---
 README.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/README.md b/README.md
index 24f48ff..7ca1e9d 100644
--- a/README.md
+++ b/README.md
@@ -58,6 +58,7 @@
 | Srivatsan et al. | [STEREO: A Two-Stage Framework for Adversarially Robust Concept Erasing from Text-to-Image Diffusion Models](https://openaccess.thecvf.com/content/CVPR2025/html/Srivatsan_STEREO_A_Two-Stage_Framework_for_Adversarially_Robust_Concept_Erasing_from_CVPR_2025_paper.html)|CVPR|
 | Lee et al. | [Localized Concept Erasure for Text-to-Image Diffusion Models Using Training-Free Gated Low-Rank Adaptation](https://openaccess.thecvf.com/content/CVPR2025/html/Lee_Localized_Concept_Erasure_for_Text-to-Image_Diffusion_Models_Using_Training-Free_Gated_CVPR_2025_paper.html) | CVPR |
 |Shirkavand et al. | [Efficient Fine-Tuning and Concept Suppression for Pruned Diffusion Models](https://openaccess.thecvf.com/content/CVPR2025/html/Shirkavand_Efficient_Fine-Tuning_and_Concept_Suppression_for_Pruned_Diffusion_Models_CVPR_2025_paper.html)|CVPR|
+| Spartalis et al. | [LoTUS: Large-Scale Machine Unlearning with a Taste of Uncertainty](https://cvpr.thecvf.com/virtual/2025/poster/33292) | CVPR |
 | Pan et al. | [Multi-Objective Large Language Model Unlearning](https://ieeexplore.ieee.org/abstract/document/10889776) | ICASSP |
 | Wang et al. | [Large Scale Knowledge Washing](https://openreview.net/forum?id=dXCpPgjTtd) | ICLR |
 |Koulischer et al. |[Dynamic Negative Guidance of Diffusion Models](https://openreview.net/forum?id=6p74UyAdLa)|ICLR|
@@ -114,6 +115,7 @@
 | Sanga et al. | [Train Once, Forget Precisely: Anchored Optimization for Efficient Post-Hoc Unlearning](https://arxiv.org/abs/2506.14515) | ICML Workshop |
 | Wu et al. | [Evaluating Deep Unlearning in Large Language Models](https://openreview.net/forum?id=376xPmmHoV) | ICML Workshop |
 | Spohn et al. | [Align-then-Unlearn: Embedding Alignment for LLM Unlearning](https://arxiv.org/abs/2506.13181) | ICML Workshop |
+| Spartalis et al. | [Unleashing Uncertainty: Efficient Machine Unlearning for Generative AI](https://openreview.net/forum?id=rWc530VQE6&noteId=rWc530VQE6) | ICML Workshop |
 | Dosajh et al. | [Unlearning Factual Knowledge from LLMs Using Adaptive RMU](https://arxiv.org/abs/2506.16548) | SemEval |
 | Xu et al. | [Unlearning via Model Merging](https://arxiv.org/abs/2503.21088) | SemEval |
 | Bronec et al. | [Low-Rank Negative Preference Optimization](https://arxiv.org/abs/2503.13690) | SemEval |