Lethe: Adapter-Augmented Dual-Stream Update for Persistent Knowledge Erasure in Federated Unlearning
- URL: http://arxiv.org/abs/2601.22601v1
- Date: Fri, 30 Jan 2026 05:50:35 GMT
- Title: Lethe: Adapter-Augmented Dual-Stream Update for Persistent Knowledge Erasure in Federated Unlearning
- Authors: Hanwei Tan, Wentai Hu, Ligang He, Yijun Quan
- Abstract summary: Federated unlearning (FU) aims to erase designated client-level, class-level, or sample-level knowledge from a global model. We identify a critical failure mode, termed knowledge resurfacing, by revealing that continued training can re-activate unlearned knowledge. We propose Lethe, a novel federated unlearning method that de-correlates knowledge to be unlearned from knowledge to be retained.
- Score: 6.171968410497911
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Federated unlearning (FU) aims to erase designated client-level, class-level, or sample-level knowledge from a global model. Existing studies commonly assume that the collaboration ends with the unlearning operation, overlooking the follow-up situation where federated training continues over the remaining data. We identify a critical failure mode, termed knowledge resurfacing, by revealing that continued training can re-activate unlearned knowledge and cause the removed influence to resurface in the global model. To address this, we propose Lethe, a novel federated unlearning method that de-correlates the knowledge to be unlearned from the knowledge to be retained, ensuring persistent erasure during continued training. Lethe follows a Reshape--Rectify--Restore pipeline: a temporary adapter is first trained with gradient ascent on the unlearning data to obtain magnified updates, which are then used as corrective signals to drive divergent layer-wise rectification of the remaining updates in two streams. Finally, the adapter is removed and a short recovery stage is performed on the retained data. Our experiments show that Lethe supports unlearning in the federated system at all levels in a unified manner and maintains superior persistence (Resurfacing Rate <1% in most cases) even after numerous rounds of follow-up training.
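Below is a minimal, self-contained sketch of how such a Reshape--Rectify--Restore pipeline could look in PyTorch, assembled from the abstract alone. The parallel adapter, the orthogonal-projection rectification rule, and all learning rates are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Linear(8, 4)     # stand-in for the global model
adapter = nn.Linear(8, 4)   # temporary parallel adapter (an assumption)
nn.init.zeros_(adapter.weight)
nn.init.zeros_(adapter.bias)
loss_fn = nn.CrossEntropyLoss()

forget_x, forget_y = torch.randn(32, 8), torch.randint(0, 4, (32,))
retain_x, retain_y = torch.randn(32, 8), torch.randint(0, 4, (32,))

# 1) Reshape: gradient ASCENT on the unlearning data trains the adapter to
#    carry magnified "forget" updates while the base model stays frozen.
opt_a = torch.optim.SGD(adapter.parameters(), lr=0.1)
for _ in range(20):
    opt_a.zero_grad()
    (-loss_fn(model(forget_x) + adapter(forget_x), forget_y)).backward()
    opt_a.step()

# 2) Rectify: one plausible layer-wise rule -- split the retain-data gradient
#    into the component aligned with the adapter's direction (suppressed) and
#    the orthogonal remainder (applied). Lethe's dual-stream rule may differ.
model.zero_grad()
loss_fn(model(retain_x), retain_y).backward()
with torch.no_grad():
    for p, a in zip(model.parameters(), adapter.parameters()):
        d = a / (a.norm() + 1e-12)        # unlearning direction for this layer
        aligned = (p.grad * d).sum() * d  # stream 1: aligned with forgetting
        p -= 0.1 * (p.grad - aligned)     # stream 2: keep only the remainder

# 3) Restore: drop the adapter and run a short recovery pass on retained data.
opt_m = torch.optim.SGD(model.parameters(), lr=0.05)
for _ in range(10):
    opt_m.zero_grad()
    loss_fn(model(retain_x), retain_y).backward()
    opt_m.step()
```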
Related papers
- Forget Less, Retain More: A Lightweight Regularizer for Rehearsal-Based Continual Learning [51.07663354001582]
Deep neural networks suffer from catastrophic forgetting, where performance on previous tasks degrades after training on a new task. We present a novel approach to address this challenge, focusing on the intersection of memory-based methods and regularization approaches. We formulate a regularization strategy, termed the Information Maximization (IM) regularizer, for memory-based continual learning methods.
arXiv Detail & Related papers (2025-12-01T15:56:00Z)
- Retrofit: Continual Learning with Bounded Forgetting for Security Applications [25.185616916987158]
We propose RETROFIT, a data-retrospective-free continual learning method that achieves bounded forgetting for effective knowledge transfer. To mitigate interference, we apply low-rank and sparse updates that confine parameter changes to independent subspaces. In malware detection under temporal drift, it substantially improves the retention score, from 20.2% to 38.6%, over CL baselines and exceeds the oracle upper bound on new data.
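A generic sketch of the low-rank-plus-sparse update idea mentioned above: a dense weight update is projected onto a truncated-SVD subspace and only a small sparse residual is kept. The rank, sparsity level, and function name are illustrative assumptions, not RETROFIT's actual procedure.

```python
import torch

def low_rank_plus_sparse_update(delta: torch.Tensor, rank: int = 2,
                                sparsity: float = 0.01) -> torch.Tensor:
    """Confine a dense update to a low-rank subspace plus a sparse residual
    (a generic stand-in for the updates the RETROFIT summary mentions)."""
    # Low-rank part: truncated SVD of the proposed update.
    U, S, Vh = torch.linalg.svd(delta, full_matrices=False)
    low_rank = U[:, :rank] @ torch.diag(S[:rank]) @ Vh[:rank, :]

    # Sparse part: keep only the largest-magnitude residual entries.
    residual = delta - low_rank
    k = max(1, int(sparsity * residual.numel()))
    thresh = residual.abs().flatten().topk(k).values.min()
    sparse = residual * (residual.abs() >= thresh)
    return low_rank + sparse

delta = torch.randn(64, 64) * 1e-2             # a dense gradient step
confined = low_rank_plus_sparse_update(delta)  # change stays in a small subspace
```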
arXiv Detail & Related papers (2025-11-14T16:07:03Z)
- Weight Factorization and Centralization for Continual Learning in Speech Recognition [55.63455095283984]
Continually training models in a rehearsal-free, multilingual, and language-agnostic condition likely leads to catastrophic forgetting. Inspired by the ability of human brains to learn and consolidate knowledge through the waking-sleeping cycle, we propose a continual learning approach.
arXiv Detail & Related papers (2025-06-19T19:59:24Z)
- Unlearning through Knowledge Overwriting: Reversible Federated Unlearning via Selective Sparse Adapter [35.65566527544619]
Federated learning is a promising paradigm for privacy-preserving collaborative model training. We propose FUSED, which first identifies critical layers by analyzing each layer's sensitivity to the knowledge to be unlearned. Adapters are then trained without altering the original parameters, overwriting the unlearning knowledge with the remaining knowledge.
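One way to make the critical-layer selection concrete is to rank layers by gradient norm on the data to be unlearned and attach an adapter only there. The sketch below assumes that proxy; FUSED's actual sensitivity criterion and adapter design may differ.

```python
import torch
import torch.nn as nn

def layer_sensitivity(model: nn.Module, x, y, loss_fn) -> dict:
    """Rank layers by gradient norm on the unlearning data -- one common
    sensitivity proxy, assumed here for illustration."""
    model.zero_grad()
    loss_fn(model(x), y).backward()
    return {name: p.grad.norm().item()
            for name, p in model.named_parameters() if p.grad is not None}

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
x, y = torch.randn(32, 8), torch.randint(0, 4, (32,))
scores = layer_sensitivity(model, x, y, nn.CrossEntropyLoss())

# Attach a trainable adapter only to the most sensitive layer, leaving the
# frozen base weights intact; training it on retained data then overwrites
# the unlearned knowledge, as the summary describes.
critical = max(scores, key=scores.get)
print(f"adapter goes on: {critical}")
```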
arXiv Detail & Related papers (2025-02-28T04:35:26Z)
- AdaER: An Adaptive Experience Replay Approach for Continual Lifelong Learning [16.457330925212606]
We present adaptive experience replay (AdaER) to address the challenge of continual lifelong learning.
AdaER consists of two stages: memory replay and memory update.
Results show that AdaER outperforms existing continual lifelong learning baselines.
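For orientation, a generic rehearsal buffer with the two stages named above, replay and update, is sketched below; reservoir sampling stands in for AdaER's adaptive rules, which the summary does not specify.

```python
import random
import torch

class ReplayBuffer:
    """Generic rehearsal buffer with separate update and replay stages.
    Reservoir sampling is a common default, not AdaER's actual policy."""
    def __init__(self, capacity: int):
        self.capacity, self.data, self.seen = capacity, [], 0

    def update(self, example):
        # Memory update stage: decide whether the new example is stored.
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        elif random.random() < self.capacity / self.seen:
            self.data[random.randrange(self.capacity)] = example

    def replay(self, k: int):
        # Memory replay stage: draw old examples to mix into the batch.
        return random.sample(self.data, min(k, len(self.data)))

buf = ReplayBuffer(capacity=100)
for i in range(1000):
    buf.update((torch.randn(8), i % 4))
old = buf.replay(16)
```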
arXiv Detail & Related papers (2023-08-07T01:25:45Z)
- Adaptively Integrated Knowledge Distillation and Prediction Uncertainty for Continual Learning [71.43841235954453]
Current deep learning models often suffer from catastrophic forgetting of old knowledge when continually learning new knowledge.
Existing strategies to alleviate this issue often fix the trade-off between keeping old knowledge (stability) and learning new knowledge (plasticity).
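One plausible reading of combining distillation with prediction uncertainty is to down-weight the distillation loss on samples where the old model is uncertain. The sketch below implements that reading; the entropy-based weighting is an assumption, not the paper's exact loss.

```python
import math
import torch
import torch.nn.functional as F

def uncertainty_weighted_kd(student_logits, teacher_logits, T: float = 2.0):
    """Distillation loss down-weighted where the old (teacher) model is
    uncertain -- an illustrative reading of the summary above."""
    p_t = F.softmax(teacher_logits / T, dim=-1)
    log_p_s = F.log_softmax(student_logits / T, dim=-1)
    kd = -(p_t * log_p_s).sum(dim=-1)                      # per-sample KD term
    entropy = -(p_t * p_t.clamp_min(1e-12).log()).sum(-1)  # teacher uncertainty
    weight = 1.0 - entropy / math.log(p_t.shape[-1])       # confident -> ~1
    return (weight.detach() * kd).mean()

loss = uncertainty_weighted_kd(torch.randn(32, 10), torch.randn(32, 10))
```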
arXiv Detail & Related papers (2023-01-18T05:36:06Z)
- Learning with Recoverable Forgetting [77.56338597012927]
Learning wIth Recoverable Forgetting explicitly handles task- or sample-specific knowledge removal and recovery.
Specifically, LIRF brings in two innovative schemes, namely knowledge deposit and withdrawal.
We conduct experiments on several datasets, and demonstrate that the proposed LIRF strategy yields encouraging results with gratifying generalization capability.
arXiv Detail & Related papers (2022-07-17T16:42:31Z)
- Relational Experience Replay: Continual Learning by Adaptively Tuning Task-wise Relationship [54.73817402934303]
We propose Experience Continual Replay (ERR), a bi-level learning framework that adaptively tunes task-wise relationships to achieve a better 'stability-plasticity' tradeoff.
ERR can consistently improve the performance of all baselines and surpass current state-of-the-art methods.
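As a point of reference, a generic bi-level loop of the kind such frameworks use is sketched below: inner updates fit the model under per-task weights, and an outer step tunes those weights by differentiating through the inner update. The toy objectives and step sizes are illustrative only, not ERR's algorithm.

```python
import torch

w = torch.zeros(2, requires_grad=True)      # task-wise relationship weights
theta = torch.randn(4, requires_grad=True)  # model parameters

def task_loss(theta, t):                    # toy per-task objectives
    target = torch.ones(4) if t == 0 else -torch.ones(4)
    return ((theta - target) ** 2).mean()

for _ in range(50):
    # Inner step: weighted sum of task losses (softmax keeps weights valid).
    alpha = torch.softmax(w, dim=0)
    inner = alpha[0] * task_loss(theta, 0) + alpha[1] * task_loss(theta, 1)
    g_theta, = torch.autograd.grad(inner, theta, create_graph=True)
    theta_new = theta - 0.1 * g_theta

    # Outer step: evaluate the updated model and back-prop into the weights.
    outer = task_loss(theta_new, 0) + task_loss(theta_new, 1)
    g_w, = torch.autograd.grad(outer, w)
    with torch.no_grad():
        w -= 0.1 * g_w
    theta = theta_new.detach().requires_grad_(True)
```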
arXiv Detail & Related papers (2021-12-31T12:05:22Z)
- Bilevel Continual Learning [76.50127663309604]
We present a novel framework of continual learning named "Bilevel Continual Learning" (BCL).
Our experiments on continual learning benchmarks demonstrate the efficacy of the proposed BCL compared to many state-of-the-art methods.
arXiv Detail & Related papers (2020-07-30T16:00:23Z)