LoReUn: Data Itself Implicitly Provides Cues to Improve Machine Unlearning
- URL: http://arxiv.org/abs/2507.22499v1
- Date: Wed, 30 Jul 2025 09:12:25 GMT
- Title: LoReUn: Data Itself Implicitly Provides Cues to Improve Machine Unlearning
- Authors: Xiang Li, Qianli Shen, Haonan Wang, Kenji Kawaguchi
- Abstract summary: Loss-based Reweighting Unlearning (LoReUn) is a plug-and-play strategy that dynamically reweights data during the unlearning process with minimal additional computational overhead. Our approach significantly reduces the gap between existing MU methods and exact unlearning in both image classification and generation tasks.
- Score: 33.62466543549043
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent generative models face significant risks of producing harmful content, which has underscored the importance of machine unlearning (MU) as a critical technique for eliminating the influence of undesired data. However, existing MU methods typically assign the same weight to all data to be forgotten, which makes it difficult to effectively forget certain data that is harder to unlearn than others. In this paper, we empirically demonstrate that the loss of data itself can implicitly reflect its varying difficulty. Building on this insight, we introduce Loss-based Reweighting Unlearning (LoReUn), a simple yet effective plug-and-play strategy that dynamically reweights data during the unlearning process with minimal additional computational overhead. Our approach significantly reduces the gap between existing MU methods and exact unlearning in both image classification and generation tasks, effectively enhancing the prevention of harmful content generation in text-to-image diffusion models.
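The core idea above, assigning larger weight to forget-set samples the model has not yet forgotten, can be illustrated with a minimal sketch. This is not the paper's exact formula: the softmax-over-negative-losses weighting and the `temperature` parameter are illustrative assumptions; the sketch only shows how per-sample losses can drive dynamic reweighting.

```python
import math

def loss_based_weights(per_sample_losses, temperature=1.0):
    """Map per-sample forget losses to normalized weights.

    Samples whose loss is still LOW (the model still fits them,
    i.e. they are harder to forget) receive HIGHER weight, so the
    unlearning objective focuses effort on them. The softmax form
    here is an illustrative choice, not the paper's exact scheme.
    """
    scaled = [-loss / temperature for loss in per_sample_losses]
    m = max(scaled)                        # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def reweighted_forget_loss(per_sample_losses, temperature=1.0):
    """Weighted forgetting objective: sum_i w_i * loss_i."""
    w = loss_based_weights(per_sample_losses, temperature)
    return sum(wi * li for wi, li in zip(w, per_sample_losses))

# A sample with low forget loss (not yet forgotten) dominates the weights:
weights = loss_based_weights([0.1, 2.0, 5.0])
```

Because the weights are recomputed from the current losses at each step, the emphasis shifts automatically as individual samples are forgotten, which is what makes the strategy plug-and-play on top of an existing unlearning loss.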
Related papers
- IMU: Influence-guided Machine Unlearning [12.87795856802456]
Machine unlearning (MU) enables models to selectively forget specific data points upon request. Most existing MU algorithms require partial or full fine-tuning on the retain set. We propose Influence-guided Machine Unlearning (IMU), a simple yet effective method that conducts MU using only the forget set.
arXiv Detail & Related papers (2025-08-03T07:00:28Z)
- Efficient Machine Unlearning via Influence Approximation [75.31015485113993]
Influence-based unlearning has emerged as a prominent approach to estimate the impact of individual training samples on model parameters without retraining. This paper establishes a theoretical link between memorizing (incremental learning) and forgetting (unlearning). We introduce the Influence Approximation Unlearning algorithm for efficient machine unlearning from the incremental perspective.
arXiv Detail & Related papers (2025-07-31T05:34:27Z)
- UNO: Unlearning via Orthogonalization in Generative models [0.0]
We show that our algorithms are able to forget data while maintaining the fidelity of the original model. Using MNIST and CelebA data, we demonstrate that our algorithms achieve orders of magnitude faster unlearning times than their predecessors.
arXiv Detail & Related papers (2025-06-05T07:37:02Z)
- Adversarial Mixup Unlearning [16.89710766008491]
We introduce a novel approach that regularizes the unlearning process by utilizing synthesized mixup samples. At the core of our approach is a generator-unlearner framework, MixUnlearn. We show that our method significantly outperforms state-of-the-art approaches.
arXiv Detail & Related papers (2025-02-14T16:50:33Z)
- RESTOR: Knowledge Recovery in Machine Unlearning [71.75834077528305]
Large language models trained on web-scale corpora can contain private or sensitive information. Several machine unlearning algorithms have been proposed to eliminate the effect of such datapoints. We propose the RESTOR framework for machine unlearning evaluation.
arXiv Detail & Related papers (2024-10-31T20:54:35Z)
- Learn while Unlearn: An Iterative Unlearning Framework for Generative Language Models [52.03511469562013]
We introduce the Iterative Contrastive Unlearning (ICU) framework, which consists of three core components. A Knowledge Unlearning Induction module targets specific knowledge for removal using an unlearning loss. A Contrastive Learning Enhancement module preserves the model's expressive capabilities against the pure unlearning goal. An Iterative Unlearning Refinement module dynamically adjusts the unlearning process through ongoing evaluation and updates.
arXiv Detail & Related papers (2024-07-25T07:09:35Z)
- The Frontier of Data Erasure: Machine Unlearning for Large Language Models [56.26002631481726]
Large Language Models (LLMs) are foundational to AI advancements.
LLMs pose risks by potentially memorizing and disseminating sensitive, biased, or copyrighted information.
Machine unlearning emerges as a cutting-edge solution to mitigate these concerns.
arXiv Detail & Related papers (2024-03-23T09:26:15Z)
- Unlearn What You Want to Forget: Efficient Unlearning for LLMs [92.51670143929056]
Large language models (LLMs) have achieved significant progress from pre-training on and memorizing a wide range of textual data.
This process might suffer from privacy issues and violations of data protection regulations.
We propose an efficient unlearning framework that could efficiently update LLMs without having to retrain the whole model after data removals.
arXiv Detail & Related papers (2023-10-31T03:35:59Z)
- SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation [30.168665935074166]
We introduce the concept of 'weight saliency' for machine unlearning, drawing parallels with input saliency in model explanation.
The resultant method that we call saliency unlearning (SalUn) narrows the performance gap with 'exact' unlearning.
SalUn is the first principled MU approach that can effectively erase the influence of forgetting data, classes, or concepts in both image classification and generation tasks.
arXiv Detail & Related papers (2023-10-19T06:17:17Z)
- Generative Adversarial Networks Unlearning [13.342749941357152]
Machine unlearning has emerged as a solution to erase training data from trained machine learning models.
Research on Generative Adversarial Networks (GANs) is limited due to their unique architecture, including a generator and a discriminator.
We propose a cascaded unlearning approach for both item and class unlearning within GAN models, in which the unlearning and learning processes run in a cascaded manner.
arXiv Detail & Related papers (2023-08-19T02:21:21Z)
- Machine Unlearning of Features and Labels [72.81914952849334]
We propose the first method for unlearning features and labels in machine learning models.
Our approach builds on the concept of influence functions and realizes unlearning through closed-form updates of model parameters.
arXiv Detail & Related papers (2021-08-26T04:42:24Z)
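The closed-form, influence-function-style update described above can be sketched for the simplest case, ridge regression, where removing a training point admits an exact one-step parameter update. This is a minimal illustration under that quadratic-loss assumption, not the paper's general method; for non-quadratic losses the same update form serves only as a first-order approximation.

```python
import numpy as np

def fit_ridge(X, y, lam=1e-2):
    """Closed-form ridge regression: theta = (X^T X + lam*I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def unlearn_point(X, y, theta, i, lam=1e-2):
    """Remove training point i from a fitted ridge model in closed form.

    theta_new = theta + H_{-i}^{-1} x_i r_i, where H_{-i} is the loss
    Hessian with point i removed and r_i is the point's residual.
    For this quadratic loss the update is exact; influence-function
    unlearning uses the same form as an approximation more generally.
    """
    d = X.shape[1]
    x_i = X[i]
    r_i = x_i @ theta - y[i]
    H_minus_i = X.T @ X + lam * np.eye(d) - np.outer(x_i, x_i)
    return theta + np.linalg.solve(H_minus_i, x_i * r_i)

# The unlearned parameters match retraining from scratch without point i:
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = rng.normal(size=50)
theta = fit_ridge(X, y)
theta_unlearned = unlearn_point(X, y, theta, i=7)
theta_retrained = fit_ridge(np.delete(X, 7, axis=0), np.delete(y, 7))
```

The appeal of such closed-form updates is that no gradient-descent fine-tuning is needed: a single linear solve per removed point replaces retraining on the retain set.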
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.