2 changes: 2 additions & 0 deletions README.md
@@ -58,6 +58,7 @@
| Srivatsan et al. | [STEREO: A Two-Stage Framework for Adversarially Robust Concept Erasing from Text-to-Image Diffusion Models](https://openaccess.thecvf.com/content/CVPR2025/html/Srivatsan_STEREO_A_Two-Stage_Framework_for_Adversarially_Robust_Concept_Erasing_from_CVPR_2025_paper.html) | CVPR |
| Lee et al. | [Localized Concept Erasure for Text-to-Image Diffusion Models Using Training-Free Gated Low-Rank Adaptation](https://openaccess.thecvf.com/content/CVPR2025/html/Lee_Localized_Concept_Erasure_for_Text-to-Image_Diffusion_Models_Using_Training-Free_Gated_CVPR_2025_paper.html) | CVPR |
| Shirkavand et al. | [Efficient Fine-Tuning and Concept Suppression for Pruned Diffusion Models](https://openaccess.thecvf.com/content/CVPR2025/html/Shirkavand_Efficient_Fine-Tuning_and_Concept_Suppression_for_Pruned_Diffusion_Models_CVPR_2025_paper.html) | CVPR |
| Spartalis et al. | [LoTUS: Large-Scale Machine Unlearning with a Taste of Uncertainty](https://cvpr.thecvf.com/virtual/2025/poster/33292) | CVPR |
| Pan et al. | [Multi-Objective Large Language Model Unlearning](https://ieeexplore.ieee.org/abstract/document/10889776) | ICASSP |
| Wang et al. | [Large Scale Knowledge Washing](https://openreview.net/forum?id=dXCpPgjTtd) | ICLR |
| Koulischer et al. | [Dynamic Negative Guidance of Diffusion Models](https://openreview.net/forum?id=6p74UyAdLa) | ICLR |
@@ -114,6 +115,7 @@
| Sanga et al. | [Train Once, Forget Precisely: Anchored Optimization for Efficient Post-Hoc Unlearning](https://arxiv.org/abs/2506.14515) | ICML Workshop |
| Wu et al. | [Evaluating Deep Unlearning in Large Language Models](https://openreview.net/forum?id=376xPmmHoV) | ICML Workshop |
| Spohn et al. | [Align-then-Unlearn: Embedding Alignment for LLM Unlearning](https://arxiv.org/abs/2506.13181) | ICML Workshop |
| Spartalis et al. | [Unleashing Uncertainty: Efficient Machine Unlearning for Generative AI](https://openreview.net/forum?id=rWc530VQE6&noteId=rWc530VQE6) | ICML Workshop |
| Dosajh et al. | [Unlearning Factual Knowledge from LLMs Using Adaptive RMU](https://arxiv.org/abs/2506.16548) | SemEval |
| Xu et al. | [Unlearning via Model Merging](https://arxiv.org/abs/2503.21088) | SemEval |
| Bronec et al. | [Low-Rank Negative Preference Optimization](https://arxiv.org/abs/2503.13690) | SemEval |