Efficient Federated Unlearning with Adaptive Differential Privacy Preservation
- URL: http://arxiv.org/abs/2411.11044v1
- Date: Sun, 17 Nov 2024 11:45:15 GMT
- Title: Efficient Federated Unlearning with Adaptive Differential Privacy Preservation
- Authors: Yu Jiang, Xindi Tong, Ziyao Liu, Huanyi Ye, Chee Wei Tan, Kwok-Yan Lam
- Abstract summary: Federated unlearning (FU) offers a promising solution to erase the impact of specific clients' data on the global model in federated learning (FL).
Current state-of-the-art FU methods extend traditional FL frameworks by leveraging stored historical updates.
We propose FedADP, a method designed to achieve both efficiency and privacy preservation in FU.
- Score: 15.8083997286637
- Abstract: Federated unlearning (FU) offers a promising solution to effectively address the need to erase the impact of specific clients' data on the global model in federated learning (FL), thereby granting individuals the ``Right to be Forgotten". The most straightforward approach to achieve unlearning is to train the model from scratch, excluding clients who request data removal, but it is resource-intensive. Current state-of-the-art FU methods extend traditional FL frameworks by leveraging stored historical updates, enabling more efficient unlearning than training from scratch. However, the use of stored updates introduces significant privacy risks. Adversaries with access to these updates can potentially reconstruct clients' local data, a well-known vulnerability in the privacy domain. While privacy-enhanced techniques exist, their applications to FU scenarios that balance unlearning efficiency with privacy protection remain underexplored. To address this gap, we propose FedADP, a method designed to achieve both efficiency and privacy preservation in FU. Our approach incorporates an adaptive differential privacy (DP) mechanism, carefully balancing privacy and unlearning performance through a novel budget allocation strategy tailored for FU. FedADP also employs a dual-layered selection process, focusing on global models with significant changes and client updates closely aligned with the global model, reducing storage and communication costs. Additionally, a novel calibration method is introduced to facilitate effective unlearning. Extensive experimental results demonstrate that FedADP effectively manages the trade-off between unlearning efficiency and privacy protection.
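The abstract names an adaptive DP mechanism with an FU-tailored budget allocation but does not spell out the rule, so the sketch below only illustrates the general pattern: clip each client update, add Gaussian noise calibrated to a per-round budget, and draw the per-round budgets from a total budget via a schedule. The schedule (`allocate_budget`), the function names, and all parameter values are hypothetical and not taken from the paper.

```python
import numpy as np

def gaussian_sigma(epsilon, delta, clip_norm):
    """Noise scale for the classic Gaussian mechanism (valid for epsilon <= 1);
    used here purely for illustration."""
    return clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon

def privatize_update(update, epsilon, delta, clip_norm):
    """Clip a client update to `clip_norm` and add calibrated Gaussian noise."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    sigma = gaussian_sigma(epsilon, delta, clip_norm)
    return clipped + np.random.normal(0.0, sigma, size=update.shape)

def allocate_budget(total_epsilon, num_rounds, decay=0.9):
    """Hypothetical geometric schedule: later rounds receive more budget
    (hence less noise). FedADP's actual FU-tailored allocation is not
    reproduced here."""
    weights = np.array([decay ** (num_rounds - 1 - t) for t in range(num_rounds)])
    return total_epsilon * weights / weights.sum()

# Example: privatize one client's (flattened) update across ten rounds.
update = np.random.randn(1000)
per_round_eps = allocate_budget(total_epsilon=4.0, num_rounds=10)
noisy_updates = [privatize_update(update, eps, delta=1e-5, clip_norm=1.0)
                 for eps in per_round_eps]
```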
Related papers
- FedUHB: Accelerating Federated Unlearning via Polyak Heavy Ball Method [17.720414283108727]
Federated unlearning (FU) has been developed to efficiently eliminate the influence of specific data from the model.
We propose FedUHB, a novel exact unlearning approach that leverages the Polyak heavy ball optimization technique.
Our experiments show that FedUHB not only enhances unlearning efficiency but also preserves robust model performance after unlearning (a generic heavy-ball update sketch follows the related-papers list below).
arXiv Detail & Related papers (2024-11-17T11:08:49Z)
- Pseudo-Probability Unlearning: Towards Efficient and Privacy-Preserving Machine Unlearning [59.29849532966454]
We propose Pseudo-Probability Unlearning (PPU), a novel method that enables models to forget data in a privacy-preserving manner.
Our method achieves over 20% improvements in forgetting error compared to the state-of-the-art.
arXiv Detail & Related papers (2024-11-04T21:27:06Z)
- Efficient and Robust Regularized Federated Recommendation [52.24782464815489]
Federated recommender systems address both user preference and privacy concerns.
We propose a novel method that incorporates non-uniform gradient descent to improve communication efficiency.
Experiments demonstrate RFRecF's superior robustness compared to diverse baselines.
arXiv Detail & Related papers (2024-11-03T12:10:20Z)
- Enhancing Security Using Random Binary Weights in Privacy-Preserving Federated Learning [5.311735227179715]
We propose a novel method for enhancing security in privacy-preserving federated learning using the Vision Transformer.
In federated learning, training proceeds by collecting updated model information from each client rather than raw data.
The effectiveness of the proposed method is confirmed in terms of model performance and resistance to the APRIL (Attention PRIvacy Leakage) restoration attack.
arXiv Detail & Related papers (2024-09-30T06:28:49Z)
- Ungeneralizable Examples [70.76487163068109]
Current approaches to creating unlearnable data involve incorporating small, specially designed noises.
We extend the concept of unlearnable data to conditional data learnability and introduce UnGeneralizable Examples (UGEs).
UGEs exhibit learnability for authorized users while maintaining unlearnability for potential hackers.
arXiv Detail & Related papers (2024-04-22T09:29:14Z)
- Privacy-Preserving Federated Unlearning with Certified Client Removal [18.36632825624581]
State-of-the-art methods for unlearning use historical data from FL clients, such as gradients or locally trained models.
We propose Starfish, a privacy-preserving federated unlearning scheme using Two-Party Computation (2PC) techniques and shared historical client data between two non-colluding servers.
arXiv Detail & Related papers (2024-04-15T12:27:07Z)
- FewFedPIT: Towards Privacy-preserving and Few-shot Federated Instruction Tuning [54.26614091429253]
Federated instruction tuning (FedIT) is a promising solution that consolidates collaborative training across multiple data owners.
FedIT encounters limitations such as scarcity of instructional data and risk of exposure to training data extraction attacks.
We propose FewFedPIT, designed to simultaneously enhance privacy protection and model performance of federated few-shot learning.
arXiv Detail & Related papers (2024-03-10T08:41:22Z)
- Blockchain-enabled Trustworthy Federated Unlearning [50.01101423318312]
Federated unlearning is a promising paradigm for protecting the data ownership of distributed clients.
Existing works require central servers to retain the historical model parameters from distributed clients.
This paper proposes a new blockchain-enabled trustworthy federated unlearning framework.
arXiv Detail & Related papers (2024-01-29T07:04:48Z)
- FRAMU: Attention-based Machine Unlearning using Federated Reinforcement Learning [16.86560475992975]
We introduce Attention-based Machine Unlearning using Federated Reinforcement Learning (FRAMU).
FRAMU incorporates adaptive learning mechanisms, privacy preservation techniques, and optimization strategies.
Our experiments, conducted on both single-modality and multi-modality datasets, revealed that FRAMU significantly outperformed baseline models.
arXiv Detail & Related papers (2023-09-19T03:13:17Z)
- FedPDD: A Privacy-preserving Double Distillation Framework for Cross-silo Federated Recommendation [4.467445574103374]
Cross-platform recommendation aims to improve recommendation accuracy by gathering heterogeneous features from different platforms.
Such cross-silo collaborations between platforms are restricted by increasingly stringent privacy protection regulations.
We propose a novel privacy-preserving double distillation framework named FedPDD for cross-silo federated recommendation.
arXiv Detail & Related papers (2023-05-09T16:17:04Z)
- Selective Knowledge Sharing for Privacy-Preserving Federated Distillation without A Good Teacher [52.2926020848095]
Federated learning is vulnerable to white-box attacks and struggles to adapt to heterogeneous clients.
This paper proposes a selective knowledge sharing mechanism for FD, termed Selective-FD.
arXiv Detail & Related papers (2023-04-04T12:04:19Z)
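As referenced in the FedUHB entry above, that method builds on Polyak's heavy-ball optimizer. The snippet below is a generic sketch of the classic two-term update (a gradient step plus a momentum term over the previous iterate), not FedUHB's unlearning procedure; `heavy_ball_step` and its parameter values are illustrative.

```python
import numpy as np

def heavy_ball_step(w, w_prev, grad_fn, lr=0.1, beta=0.9):
    """One Polyak heavy-ball update:
    w_next = w - lr * grad(w) + beta * (w - w_prev)."""
    return w - lr * grad_fn(w) + beta * (w - w_prev), w

# Example: minimize the quadratic f(w) = 0.5 * ||w||^2, whose gradient is w.
grad_fn = lambda w: w
w, w_prev = np.ones(5), np.ones(5)
for _ in range(200):
    w, w_prev = heavy_ball_step(w, w_prev, grad_fn)
print(np.linalg.norm(w))  # the iterate norm shrinks toward zero
```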