ACU: Analytic Continual Unlearning for Efficient and Exact Forgetting with Privacy Preservation
- URL: http://arxiv.org/abs/2505.12239v1
- Date: Sun, 18 May 2025 05:28:18 GMT
- Title: ACU: Analytic Continual Unlearning for Efficient and Exact Forgetting with Privacy Preservation
- Authors: Jianheng Tang, Huiping Zhuang, Di Fang, Jiaxu Li, Feijiang Han, Yajiang Huang, Kejia Fan, Leye Wang, Zhanxing Zhu, Shanghang Zhang, Houbing Herbert Song, Yunhuai Liu
- Abstract summary: Continual Unlearning (CU) aims to sequentially forget particular knowledge acquired during the Continual Learning phase. Most existing unlearning methods require access to the retained dataset for re-training or fine-tuning. We propose a novel gradient-free method for CU, named Analytic Continual Unlearning (ACU), for efficient and exact forgetting with historical data privacy preservation.
- Score: 39.0731790601695
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The development of artificial intelligence demands that models incrementally update knowledge by Continual Learning (CL) to adapt to open-world environments. To meet privacy and security requirements, Continual Unlearning (CU) emerges as an important problem, aiming to sequentially forget particular knowledge acquired during the CL phase. However, existing unlearning methods primarily focus on single-shot joint forgetting and face significant limitations when applied to CU. First, most existing methods require access to the retained dataset for re-training or fine-tuning, violating the inherent constraint in CL that historical data cannot be revisited. Second, these methods often suffer from a poor trade-off between system efficiency and model fidelity, making them vulnerable to being overwhelmed or degraded by adversaries through deliberately frequent requests. In this paper, we identify that the limitations of existing unlearning methods stem fundamentally from their reliance on gradient-based updates. To bridge the research gap at its root, we propose a novel gradient-free method for CU, named Analytic Continual Unlearning (ACU), for efficient and exact forgetting with historical data privacy preservation. In response to each unlearning request, our ACU recursively derives an analytical (i.e., closed-form) solution in an interpretable manner using the least squares method. Theoretical and experimental evaluations validate the superiority of our ACU on unlearning effectiveness, model fidelity, and system efficiency.
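To make the gradient-free, closed-form update concrete, below is a minimal sketch of recursive least-squares learning and exact batch removal for a linear classification head trained by ridge regression on frozen features. It illustrates the general analytic-learning idea under those assumptions rather than the authors' actual recursion; the class and method names are hypothetical.

```python
import numpy as np

class RecursiveLeastSquaresUnlearner:
    """Sketch of closed-form (least-squares) incremental learning and exact unlearning.

    Assumes a linear head W = (X^T X + reg*I)^-1 X^T Y over fixed features;
    names are illustrative, not from the ACU paper.
    """

    def __init__(self, feature_dim, num_classes, reg=1e-3):
        # Inverse of the regularized autocorrelation matrix (X^T X + reg*I)^-1
        self.A_inv = np.eye(feature_dim) / reg
        # Accumulated cross-correlation X^T Y
        self.Q = np.zeros((feature_dim, num_classes))

    def learn(self, X, Y):
        """Absorb a new batch (Woodbury update of the inverse)."""
        K = np.linalg.inv(np.eye(len(X)) + X @ self.A_inv @ X.T)
        self.A_inv -= self.A_inv @ X.T @ K @ X @ self.A_inv
        self.Q += X.T @ Y
        return self.weights

    def unlearn(self, X_u, Y_u):
        """Exactly remove a previously absorbed batch (Woodbury down-date)."""
        K = np.linalg.inv(np.eye(len(X_u)) - X_u @ self.A_inv @ X_u.T)
        self.A_inv += self.A_inv @ X_u.T @ K @ X_u @ self.A_inv
        self.Q -= X_u.T @ Y_u
        return self.weights

    @property
    def weights(self):
        # Closed-form ridge solution over all currently retained data
        return self.A_inv @ self.Q
```

Both `learn` and `unlearn` here cost only a small matrix update per request and never revisit retained samples, which mirrors the efficiency and privacy properties the abstract emphasizes.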
Related papers
- Certified Unlearning for Neural Networks [21.312771223437966]
We address the problem of machine unlearning, where the goal is to remove the influence of specific training data from a model upon request. Existing methods rely on restrictive assumptions or lack formal guarantees. We propose a novel method for certified machine unlearning, leveraging the connection between unlearning and privacy amplification by post-processing.
arXiv Detail & Related papers (2025-06-08T03:55:28Z) - AnalyticKWS: Towards Exemplar-Free Analytic Class Incremental Learning for Small-footprint Keyword Spotting [29.303650401396997]
Keyword spotting (KWS) offers a vital mechanism to identify spoken commands in voice-enabled systems. A major problem is catastrophic forgetting, where models lose their ability to recognize earlier keywords. We propose an exemplar-free Analytic Continual Learning (AnalyticKWS) method that updates model parameters without revisiting earlier data.
arXiv Detail & Related papers (2025-05-17T03:55:28Z) - Privacy-Aware Lifelong Learning [14.83033354320841]
The field of machine unlearning focuses on explicitly forgetting certain previous knowledge from pretrained models when requested. We propose a solution, privacy-aware lifelong learning (PALL), involving optimization of task-specific sparse subnetworks with parameter sharing within a single architecture. We empirically demonstrate the scalability of PALL across various architectures in image classification, and provide a state-of-the-art solution.
arXiv Detail & Related papers (2025-05-16T07:27:00Z) - Privacy-Preserved Automated Scoring using Federated Learning for Educational Research [1.2556373621040728]
We propose a federated learning (FL) framework for automated scoring of educational assessments. We benchmark our model against two state-of-the-art FL methods and a centralized learning baseline. Results show that our model achieves the highest accuracy (94.5%) among FL approaches.
arXiv Detail & Related papers (2025-03-12T19:06:25Z) - Temporal-Difference Variational Continual Learning [89.32940051152782]
We propose new learning objectives that integrate the regularization effects of multiple previous posterior estimations. Our approach effectively mitigates Catastrophic Forgetting, outperforming strong Variational CL methods.
arXiv Detail & Related papers (2024-10-10T10:58:41Z) - Learn while Unlearn: An Iterative Unlearning Framework for Generative Language Models [52.03511469562013]
We introduce the Iterative Contrastive Unlearning (ICU) framework, which consists of three core components. A Knowledge Unlearning Induction module targets specific knowledge for removal using an unlearning loss. A Contrastive Learning Enhancement module preserves the model's expressive capabilities against the pure unlearning goal. An Iterative Unlearning Refinement module dynamically adjusts the unlearning process through ongoing evaluation and updates.
arXiv Detail & Related papers (2024-07-25T07:09:35Z) - Adaptive Retention & Correction: Test-Time Training for Continual Learning [114.5656325514408]
A common problem in continual learning is the classification layer's bias towards the most recent task. We name our approach Adaptive Retention & Correction (ARC). ARC achieves an average performance increase of 2.7% and 2.6% on the CIFAR-100 and ImageNet-R datasets.
arXiv Detail & Related papers (2024-05-23T08:43:09Z) - Towards Robust Continual Learning with Bayesian Adaptive Moment Regularization [51.34904967046097]
Continual learning seeks to overcome the challenge of catastrophic forgetting, where a model forgets previously learnt information.
We introduce BAdam, a novel prior-based method that better constrains parameter growth, reducing catastrophic forgetting.
Results show that BAdam achieves state-of-the-art performance for prior-based methods on challenging single-headed class-incremental experiments.
arXiv Detail & Related papers (2023-09-15T17:10:51Z) - Unsupervised Continual Learning via Self-Adaptive Deep Clustering Approach [20.628084936538055]
Knowledge Retention in Self-Adaptive Deep Continual Learner (KIERA) is proposed in this paper.
KIERA is developed from the notion of a flexible deep clustering approach possessing an elastic network structure to cope with changing environments in a timely manner.
arXiv Detail & Related papers (2021-06-28T10:37:14Z) - Conservative Q-Learning for Offline Reinforcement Learning [106.05582605650932]
We show that CQL substantially outperforms existing offline RL methods, often learning policies that attain 2-5 times higher final return.
We theoretically show that CQL produces a lower bound on the value of the current policy and that it can be incorporated into a policy learning procedure with theoretical improvement guarantees (an illustrative sketch of the conservative penalty follows this entry).
arXiv Detail & Related papers (2020-06-08T17:53:42Z)
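The Conservative Q-Learning entry above refers to a lower bound obtained by penalizing Q-values on out-of-distribution actions. As a hedged illustration, the standard discrete-action form of that penalty (a logsumexp over actions minus the dataset-action Q-values, added to the usual Bellman error) can be sketched as follows; this is the textbook CQL(H) objective rather than code from the paper, and the function and argument names are assumptions.

```python
import torch
import torch.nn.functional as F

def cql_loss(q_net, target_q_net, batch, gamma=0.99, alpha=1.0):
    """Sketch of the discrete-action Conservative Q-Learning objective.

    The conservative term pushes Q-values down on all actions (via logsumexp)
    and up on actions actually present in the offline dataset, which yields a
    lower bound on the learned policy's value.
    """
    s, a, r, s_next, done = batch
    q_values = q_net(s)                                    # [B, num_actions]
    q_taken = q_values.gather(1, a.unsqueeze(1)).squeeze(1)

    with torch.no_grad():
        target = r + gamma * (1 - done) * target_q_net(s_next).max(dim=1).values

    bellman_error = F.mse_loss(q_taken, target)
    # Conservative penalty: logsumexp over actions minus dataset-action Q-values
    conservative_penalty = (torch.logsumexp(q_values, dim=1) - q_taken).mean()
    return bellman_error + alpha * conservative_penalty
```

Increasing `alpha` strengthens the conservatism, trading Bellman accuracy for a tighter lower bound on the value of the learned policy.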