Forgettable Federated Linear Learning with Certified Data Removal
- URL: http://arxiv.org/abs/2306.02216v1
- Date: Sat, 3 Jun 2023 23:53:57 GMT
- Title: Forgettable Federated Linear Learning with Certified Data Removal
- Authors: Ruinan Jin, Minghui Chen, Qiong Zhang, Xiaoxiao Li
- Abstract summary: Federated learning (FL) is a trending distributed learning framework that enables collaborative model training without data sharing.
In this study, we focus on the FL paradigm that grants clients the "right to be forgotten".
- Score: 31.9726861621826
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) is a trending distributed learning framework that
enables collaborative model training without data sharing. Machine learning
models can expose private information about their training data, revealing
details about individual data records. In this
study, we focus on the FL paradigm that grants clients the ``right to be
forgotten''. The forgettable FL framework should bleach its global model
weights as if it had never seen that client's data, so that the model reveals
no information about that client. To this end, we propose the Forgettable Federated
Linear Learning (2F2L) framework featured with novel training and data removal
strategies. The training pipeline, named Federated linear training, employs a
linear approximation in the model parameter space so that the 2F2L framework
works for deep neural networks while achieving results comparable to canonical
neural network training. We also introduce FedRemoval, an efficient and
effective removal strategy that tackles the computational challenges in FL by
approximating the Hessian matrix using public server data from the pretrained
model. Unlike previous uncertified and heuristic machine unlearning methods
in FL, we provide theoretical guarantees by bounding the difference between
the model weights obtained by FedRemoval and those obtained by retraining from
scratch. Experimental
results on MNIST and Fashion-MNIST datasets demonstrate the effectiveness of
our method in achieving a balance between model accuracy and information
removal, outperforming baseline strategies and approaching retraining from
scratch.
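
At its core, a removal step of this kind can be read as a single Newton-style correction: subtract H^{-1}g from the global weights, where g is the gradient of the retained objective and H is a Hessian estimated from public server data. The minimal sketch below illustrates that mechanic on a ridge-regression model (a stand-in for the paper's linearized network); the synthetic data, the public proxy matrix, and all names are our illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d, lam = 5, 1e-1

# Synthetic federated data: two clients' feature/label matrices.
X1, X2 = rng.normal(size=(40, d)), rng.normal(size=(30, d))
w_true = rng.normal(size=d)
y1, y2 = X1 @ w_true, X2 @ w_true

def fit(X, y):
    """Ridge regression: minimize (1/2n)||Xw - y||^2 + (lam/2)||w||^2."""
    n = len(y)
    return np.linalg.solve(X.T @ X / n + lam * np.eye(d), X.T @ y / n)

# Global model trained on the union of both clients' data.
w_global = fit(np.vstack([X1, X2]), np.concatenate([y1, y2]))

# Unlearn client 2 with one Newton step, w <- w - H^{-1} g: g is the gradient
# of the retained (client 1) objective at w_global, and H is estimated from a
# public proxy dataset, standing in for FedRemoval's public server data.
public_X = rng.normal(size=(200, d))
H_hat = public_X.T @ public_X / 200 + lam * np.eye(d)
g = X1.T @ (X1 @ w_global - y1) / len(y1) + lam * w_global
w_removed = w_global - np.linalg.solve(H_hat, g)

# Compare against the gold standard: retraining from scratch on client 1 only.
w_retrain = fit(X1, y1)
print("||w_removed - w_retrain|| =", np.linalg.norm(w_removed - w_retrain))
```

On this quadratic objective, a Newton step with the exact retained-data Hessian would match retraining exactly; the gap printed above comes entirely from the public-data Hessian approximation, which is the kind of discrepancy the paper's theoretical bound controls.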
Related papers
- Update Selective Parameters: Federated Machine Unlearning Based on Model Explanation [46.86767774669831]
We propose a more effective and efficient federated unlearning scheme based on the concept of model explanation.
We select the most influential channels within an already-trained model for the data that need to be unlearned.
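The channel-selection idea can be pictured as: score each channel by how strongly it responds to the data to be forgotten, then modify only the top-scoring channels. The toy sketch below uses a plain gradient-norm score on a single linear layer; the scoring rule and all names are our simplification, not the paper's explanation-based criterion.

```python
import numpy as np

rng = np.random.default_rng(1)
n_channels, k = 8, 2

# One linear layer; treat each output row as a "channel" (toy CNN stand-in).
W = rng.normal(size=(n_channels, 4))
x_forget = rng.normal(size=4)                 # sample to be unlearned

# Channel-wise influence score: per-row gradient norm of a loss that pulls
# the model's output on the forget sample toward a neutral target of 0.
grad = np.outer(W @ x_forget, x_forget)       # d/dW of 0.5 * ||W x||^2
scores = np.linalg.norm(grad, axis=1)
top = np.argsort(scores)[-k:]                 # k most influential channels

# Unlearning step: update only the selected channels, leaving the rest intact.
W_unlearned = W.copy()
W_unlearned[top] -= 0.5 * grad[top]
print("updated channels:", sorted(top.tolist()))
```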
arXiv Detail & Related papers (2024-06-18T11:43:20Z)
- Fast-FedUL: A Training-Free Federated Unlearning with Provable Skew Resilience [26.647028483763137]
We introduce Fast-FedUL, a tailored unlearning method for Federated Learning (FL).
We develop an algorithm to systematically remove the impact of the target client from the trained model.
Experimental results indicate that Fast-FedUL effectively removes almost all traces of the target client, while retaining the knowledge of untargeted clients.
arXiv Detail & Related papers (2024-05-28T10:51:38Z)
- Blockchain-enabled Trustworthy Federated Unlearning [50.01101423318312]
Federated unlearning is a promising paradigm for protecting the data ownership of distributed clients.
Existing works require central servers to retain the historical model parameters from distributed clients.
This paper proposes a new blockchain-enabled trustworthy federated unlearning framework.
arXiv Detail & Related papers (2024-01-29T07:04:48Z)
- A Survey on Efficient Federated Learning Methods for Foundation Model Training [62.473245910234304]
Federated Learning (FL) has become an established technique to facilitate privacy-preserving collaborative training across a multitude of clients.
In the wake of Foundation Models (FMs), however, the situation is different for many deep learning applications.
We discuss the benefits and drawbacks of parameter-efficient fine-tuning (PEFT) for FL applications.
arXiv Detail & Related papers (2024-01-09T10:22:23Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
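As a rough picture of what "each client adjusts its learning rate" means here: every client runs its own AMSGrad-style local steps, so its effective step size is auto-tuned by its own gradient history. The sketch below shows one such local update; the hyperparameters and the exact scheduling rule are placeholders, not FedLALR's specification.

```python
import numpy as np

def amsgrad_step(w, g, state, base_lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One AMSGrad update; `state` is kept per client, so each client's
    effective learning rate adapts to its own gradient history."""
    state["m"] = b1 * state["m"] + (1 - b1) * g
    state["v"] = b2 * state["v"] + (1 - b2) * g * g
    state["v_hat"] = np.maximum(state["v_hat"], state["v"])
    return w - base_lr * state["m"] / (np.sqrt(state["v_hat"]) + eps)

# Local training on one client: the client owns its optimizer state, so a
# client with noisier gradients ends up with a smaller effective step size.
rng = np.random.default_rng(2)
d = 3
w = np.zeros(d)
state = {"m": np.zeros(d), "v": np.zeros(d), "v_hat": np.zeros(d)}
for _ in range(5):
    g = rng.normal(size=d)                    # stand-in for a local gradient
    w = amsgrad_step(w, g, state)
print(w)
```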
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- Learn, Unlearn and Relearn: An Online Learning Paradigm for Deep Neural Networks [12.525959293825318]
We introduce Learn, Unlearn, and Relearn (LURE), an online learning paradigm for deep neural networks (DNNs).
LURE interchanges between the unlearning phase, which selectively forgets the undesirable information in the model, and the relearning phase, which emphasizes learning on generalizable features.
We show that our training paradigm provides consistent performance gains across datasets in both classification and few-shot settings.
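The alternation LURE describes can be sketched as a loop that interleaves a forgetting update with an ordinary learning update. Below is a schematic with a linear model and squared loss; how "undesirable information" is identified, and both update rules, are our placeholders rather than the paper's method.

```python
import numpy as np

rng = np.random.default_rng(3)
w = rng.normal(size=4)

def grad_loss(w, x, y):
    """Gradient of the squared loss 0.5 * (w.x - y)^2 with respect to w."""
    return (w @ x - y) * x

x_bad = rng.normal(size=4)                    # carries undesirable information
x_good, y_good = rng.normal(size=4), 1.0      # carries generalizable signal

for _ in range(10):
    # Unlearning phase: pull the output on the undesirable example toward a
    # neutral target (0), selectively erasing that association.
    w -= 0.1 * grad_loss(w, x_bad, 0.0)
    # Relearning phase: ordinary descent on the desirable example.
    w -= 0.1 * grad_loss(w, x_good, y_good)
print(w)
```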
arXiv Detail & Related papers (2023-03-18T16:45:54Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Federated Unlearning with Knowledge Distillation [9.666514931140707]
Federated Learning (FL) is designed to protect the data privacy of each client during the training process.
With recent legislation on the right to be forgotten, it is essential for the FL model to possess the ability to forget what it has learned from each client.
We propose a novel federated unlearning method to eliminate a client's contribution by subtracting the accumulated historical updates from the model.
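The subtraction mechanism in that summary is simple enough to show end to end: if the server logs every client's updates, that client's total contribution can later be subtracted back out. A toy sketch with plain additive aggregation follows; the paper's knowledge-distillation step that then restores model utility is omitted here.

```python
import numpy as np

rng = np.random.default_rng(4)
d, n_rounds = 6, 3
w = np.zeros(d)
history = {cid: np.zeros(d) for cid in ("client_a", "client_b")}

for _ in range(n_rounds):
    for cid in history:
        update = rng.normal(scale=0.1, size=d)  # stand-in local update
        history[cid] += update                  # server logs each contribution
        w += update                             # simple additive aggregation

# Forget client_b: subtract everything it ever contributed.
w_unlearned = w - history["client_b"]
print(np.allclose(w_unlearned, history["client_a"]))  # True in this toy setup
```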
arXiv Detail & Related papers (2022-01-24T03:56:20Z)
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
- Federated Unlearning [24.60965999954735]
Federated learning (FL) has emerged as a promising distributed machine learning (ML) paradigm.
Practical needs of the "right to be forgotten" and countering data poisoning attacks call for efficient techniques that can remove, or unlearn, specific training data from the trained FL model.
We present FedEraser, the first federated unlearning methodology that can eliminate the influence of a federated client's data on the global FL model.
arXiv Detail & Related papers (2020-12-27T08:54:37Z)
- Deep Learning for Ultra-Reliable and Low-Latency Communications in 6G Networks [84.2155885234293]
We first summarize how to apply data-driven supervised deep learning and deep reinforcement learning in URLLC.
To address these open problems, we develop a multi-level architecture that enables device intelligence, edge intelligence, and cloud intelligence for URLLC.
arXiv Detail & Related papers (2020-02-22T14:38:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.