FedQUIT: On-Device Federated Unlearning via a Quasi-Competent Virtual Teacher
- URL: http://arxiv.org/abs/2408.07587v2
- Date: Sun, 06 Apr 2025 14:53:01 GMT
- Title: FedQUIT: On-Device Federated Unlearning via a Quasi-Competent Virtual Teacher
- Authors: Alessio Mora, Lorenzo Valerio, Paolo Bellavista, Andrea Passarella
- Abstract summary: Federated Learning (FL) systems enable the collaborative training of machine learning models without requiring centralized collection of individual data. We propose FedQUIT, a novel algorithm that uses knowledge distillation to scrub the contribution of the data to forget from an FL global model.
- Score: 4.291269657919828
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) systems enable the collaborative training of machine learning models without requiring centralized collection of individual data. FL participants should have the ability to exercise their right to be forgotten, ensuring that their past contributions can be removed from the learned model upon request. In this paper, we propose FedQUIT, a novel algorithm that uses knowledge distillation to scrub the contribution of the data to forget from an FL global model while preserving its generalization ability. FedQUIT works directly on the client devices that request to leave the federation and leverages a teacher-student framework: the FL global model acts as the teacher, and the local model acts as the student. To induce forgetting, FedQUIT tailors the teacher's output on the local data (the data to forget) by penalizing the prediction score of the true class. Unlike previous work, our method does not rely on assumptions that are hardly viable in cross-device settings, such as storing historical updates of participants or having access to proxy datasets. Experimental results on various datasets and model architectures demonstrate that FedQUIT (i) outperforms state-of-the-art competitors in forgetting data, (ii) has the same computational requirements as a regular FedAvg round, and (iii) reduces the cumulative communication costs by up to 117.6$\times$ compared to retraining from scratch to restore the initial generalization performance after unlearning.
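The tailoring of the teacher's output described in the abstract can be pictured with a short sketch. The following is a minimal, hypothetical PyTorch illustration, not the paper's implementation: how the true-class penalization is actually performed, the distillation temperature, and the loss weighting are all assumptions here (zeroing the true-class probability and renormalizing is just one plausible choice).
```python
# Hypothetical sketch of FedQUIT-style unlearning on a leaving client.
# The true-class penalization rule below (zero out the true class and
# renormalize) is an assumption; the paper defines its own tailoring.
import torch
import torch.nn.functional as F

def virtual_teacher_probs(global_logits: torch.Tensor,
                          targets: torch.Tensor) -> torch.Tensor:
    """Teacher distribution with the true-class prediction score suppressed."""
    probs = F.softmax(global_logits, dim=1)
    probs = probs.scatter(1, targets.unsqueeze(1), 0.0)   # penalize true class
    return probs / probs.sum(dim=1, keepdim=True).clamp_min(1e-12)

def unlearning_step(student, global_model, batch, optimizer):
    """One on-device distillation step on the data to forget."""
    inputs, targets = batch
    with torch.no_grad():
        teacher_probs = virtual_teacher_probs(global_model(inputs), targets)
    student_log_probs = F.log_softmax(student(inputs), dim=1)
    loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```
Because such a step runs entirely on the leaving client and only needs the received global model, it is consistent with point (ii) above: the per-round cost stays comparable to a regular FedAvg round.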
Related papers
- Federated Unlearning Made Practical: Seamless Integration via Negated Pseudo-Gradients [3.12131298354022]
This paper introduces a novel method that leverages negated Pseudo-gradient Updates for Federated unlearning (PUF).
Our approach uses only the standard client model updates that are already produced during regular FL rounds and interprets them as pseudo-gradients.
Unlike state-of-the-art mechanisms, PUF seamlessly integrates with FL, incurs no additional computational or communication overhead beyond standard FL rounds, and supports concurrent unlearning requests.
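A compact, illustrative sketch of how a negated pseudo-gradient could be applied on the server; the unlearning rate and the exact way the reversed update is folded into aggregation are assumptions here, not PUF's specification.
```python
# Illustrative sketch of unlearning via a negated pseudo-gradient.
# global_state / client_state are PyTorch-style state dicts of tensors.
from collections import OrderedDict

def pseudo_gradient(global_state, client_state):
    """Interpret a client's model delta as a pseudo-gradient (server view)."""
    return OrderedDict((k, global_state[k] - client_state[k]) for k in global_state)

def unlearn_client(global_state, client_state, unlearning_rate=1.0):
    """Reverse the client's contribution: step the global model along the
    negation of the client's update, i.e. away from the client's model."""
    g = pseudo_gradient(global_state, client_state)
    return OrderedDict((k, global_state[k] + unlearning_rate * g[k])
                       for k in global_state)
```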
arXiv Detail & Related papers (2025-04-08T09:05:33Z) - Federated Learning with Workload Reduction through Partial Training of Client Models and Entropy-Based Data Selection [3.9981390090442694]
We propose FedFT-EDS, a novel approach that combines Fine-Tuning of partial client models with Entropy-based Data Selection to reduce training workloads on edge devices.
Our experiments show that FedFT-EDS uses only 50% of the user data while improving global model performance compared to the baseline methods, FedAvg and FedProx.
FedFT-EDS improves client learning efficiency by up to 3 times, using one third of the training time on clients to achieve performance equivalent to the baselines.
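A rough sketch of what entropy-based data selection on a client could look like; the 50% cut-off and the scoring rule are assumptions made here for illustration and may differ from FedFT-EDS.
```python
# Rough sketch: score each local sample by the prediction entropy of the
# current global model and keep only the most uncertain fraction.
import torch
import torch.nn.functional as F
from torch.utils.data import Subset

@torch.no_grad()
def select_by_entropy(model, dataset, keep_fraction=0.5):
    """Return the subset of `dataset` with the highest prediction entropy.
    Assumes `dataset[i]` yields an (input_tensor, label) pair."""
    model.eval()
    entropies = []
    for x, _ in dataset:
        probs = F.softmax(model(x.unsqueeze(0)), dim=1)
        entropies.append(-(probs * probs.clamp_min(1e-12).log()).sum().item())
    k = max(1, int(keep_fraction * len(dataset)))
    keep = sorted(range(len(dataset)), key=entropies.__getitem__, reverse=True)[:k]
    return Subset(dataset, keep)
```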
arXiv Detail & Related papers (2024-12-30T22:47:32Z) - One-shot Federated Learning via Synthetic Distiller-Distillate Communication [63.89557765137003]
One-shot Federated learning (FL) is a powerful technology facilitating collaborative training of machine learning models in a single round of communication.
We propose FedSD2C, a novel and practical one-shot FL framework designed to address these challenges.
arXiv Detail & Related papers (2024-12-06T17:05:34Z) - FedMAP: Unlocking Potential in Personalized Federated Learning through Bi-Level MAP Optimization [11.040916982022978]
Federated Learning (FL) enables collaborative training of machine learning models on decentralized data.
Data across clients often differs significantly due to class imbalance, feature distribution skew, sample size imbalance, and other phenomena.
We propose a novel Bayesian PFL framework using bi-level optimization to tackle the data heterogeneity challenges.
arXiv Detail & Related papers (2024-05-29T11:28:06Z) - An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z) - MultiConfederated Learning: Inclusive Non-IID Data handling with Decentralized Federated Learning [1.2726316791083532]
Federated Learning (FL) has emerged as a prominent privacy-preserving technique for enabling use cases like confidential clinical machine learning.
FL operates by aggregating models trained by the remote devices that own the data.
We propose MultiConfederated Learning: a decentralized FL framework which is designed to handle non-IID data.
arXiv Detail & Related papers (2024-04-20T16:38:26Z) - Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling this data heterogeneity issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z) - Tunable Soft Prompts are Messengers in Federated Learning [55.924749085481544]
Federated learning (FL) enables multiple participants to collaboratively train machine learning models using decentralized data sources.
The lack of model privacy protection in FL has become a non-negligible challenge.
We propose a novel FL training approach that accomplishes information exchange among participants via tunable soft prompts.
arXiv Detail & Related papers (2023-11-12T11:01:10Z) - Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part, pruned and shared across all devices to learn data representations, and a personalized part that is fine-tuned for each specific device.
arXiv Detail & Related papers (2023-09-04T21:10:45Z) - Fed-FSNet: Mitigating Non-I.I.D. Federated Learning via Fuzzy Synthesizing Network [19.23943687834319]
Federated learning (FL) has emerged as a promising privacy-preserving distributed machine learning framework.
We propose a novel FL training framework, dubbed Fed-FSNet, which uses a properly designed Fuzzy Synthesizing Network (FSNet) to mitigate the non-I.I.D. issue at its source.
arXiv Detail & Related papers (2022-08-21T18:40:51Z) - Heterogeneous Ensemble Knowledge Transfer for Training Large Models in Federated Learning [22.310090483499035]
Federated learning (FL) enables edge-devices to collaboratively learn a model without disclosing their private data to a central aggregating server.
Most existing FL algorithms require models of identical architecture to be deployed across the clients and server.
We propose a novel ensemble knowledge transfer method named Fed-ET in which small models are trained on clients, and used to train a larger model at the server.
arXiv Detail & Related papers (2022-04-27T05:18:32Z) - Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning [86.59588262014456]
Federated Learning (FL) is an emerging distributed learning paradigm under privacy constraint.
We propose a data-free knowledge distillation method, FedFTG, to fine-tune the global model on the server.
Our FedFTG significantly outperforms the state-of-the-art (SOTA) FL algorithms and can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD.
arXiv Detail & Related papers (2022-03-17T11:18:17Z) - The Right to be Forgotten in Federated Learning: An Efficient Realization with Rapid Retraining [22.16510303054159]
We propose a rapid retraining approach to fully erase data samples from a trained FL model.
Our formal convergence and complexity analysis demonstrates that our design can preserve model utility with high efficiency.
arXiv Detail & Related papers (2022-03-14T17:22:40Z) - Private Cross-Silo Federated Learning for Extracting Vaccine Adverse Event Mentions [0.7349727826230862]
Federated Learning (FL) is a go-to distributed training paradigm for users to jointly train a global model without physically sharing their data.
We present a comprehensive empirical analysis of various dimensions of benefits gained with FL based training.
We show that local DP can severely cripple the global model's prediction accuracy, thus disincentivizing users from participating in the federation.
arXiv Detail & Related papers (2021-03-12T19:20:33Z) - Decentralized Federated Learning Preserves Model and Data Privacy [77.454688257702]
We propose a fully decentralized approach, which allows to share knowledge between trained models.
Students are trained on the output of their teachers via synthetically generated input data.
The results show that an initially untrained student model, trained on the teacher's outputs, reaches F1-scores comparable to those of the teacher.
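A minimal sketch of this data-free student-teacher setup; Gaussian-noise inputs and a KL distillation loss are assumptions for illustration, since the summary does not specify how the synthetic inputs are generated.
```python
# Minimal sketch: the student never sees real data, only the teacher's
# outputs on synthetic (here, Gaussian-noise) inputs.
import torch
import torch.nn.functional as F

def distill_on_synthetic(student, teacher, input_shape, optimizer,
                         steps=100, batch_size=32):
    teacher.eval()
    for _ in range(steps):
        x = torch.randn(batch_size, *input_shape)           # synthetic inputs
        with torch.no_grad():
            teacher_probs = F.softmax(teacher(x), dim=1)     # teacher soft labels
        loss = F.kl_div(F.log_softmax(student(x), dim=1),
                        teacher_probs, reduction="batchmean")
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```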
arXiv Detail & Related papers (2021-02-01T14:38:54Z) - Federated Unlearning [24.60965999954735]
Federated learning (FL) has emerged as a promising distributed machine learning (ML) paradigm.
Practical needs of the "right to be forgotten" and countering data poisoning attacks call for efficient techniques that can remove, or unlearn, specific training data from the trained FL model.
We present FedEraser, the first federated unlearning methodology that can eliminate the influence of a federated client's data on the global FL model.
arXiv Detail & Related papers (2020-12-27T08:54:37Z) - Toward Understanding the Influence of Individual Clients in Federated Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, quantify this influence over parameters, and propose an effective and efficient model to estimate this metric.
arXiv Detail & Related papers (2020-12-20T14:34:36Z) - A Principled Approach to Data Valuation for Federated Learning [73.19984041333599]
Federated learning (FL) is a popular technique to train machine learning (ML) models on decentralized data sources.
The Shapley value (SV) defines a unique payoff scheme that satisfies many desiderata for a data value notion.
This paper proposes a variant of the SV amenable to FL, which we call the federated Shapley value.
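A hypothetical sketch of a per-round, Shapley-style client valuation; the `utility` callable (e.g., validation accuracy of the aggregate of a client subset) and the per-round decomposition are illustrative assumptions, not the paper's exact definition of the federated Shapley value.
```python
# Sketch: classic Shapley value over the clients of one FL round.
# utility(S) is assumed to score the model aggregated from subset S.
from itertools import combinations
from math import factorial

def round_shapley(clients, utility):
    """clients: list of client ids; utility: maps a frozenset of ids to a float."""
    n = len(clients)
    values = {c: 0.0 for c in clients}
    for c in clients:
        others = [o for o in clients if o != c]
        for k in range(n):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                s = frozenset(subset)
                values[c] += weight * (utility(s | {c}) - utility(s))
    return values
```
Restricting the enumeration to the clients actually sampled in a given round keeps the exponential cost of the subset sum manageable.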
arXiv Detail & Related papers (2020-09-14T04:37:54Z) - Federated Learning With Quantized Global Model Updates [84.55126371346452]
We study federated learning, which enables mobile devices to utilize their local datasets to train a global model.
We introduce a lossy FL (LFL) algorithm, in which both the global model and the local model updates are quantized before being transmitted.
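A toy sketch of the quantize-before-transmission idea; uniform quantization with min-max scaling is an assumption made here, and LFL defines its own quantization scheme.
```python
# Toy sketch: parameters are uniformly quantized to `bits` bits before
# transmission and dequantized on arrival.
import torch

def quantize(t: torch.Tensor, bits: int = 8):
    lo, hi = t.min(), t.max()
    scale = (hi - lo) / (2 ** bits - 1) if hi > lo else torch.tensor(1.0)
    codes = torch.round((t - lo) / scale)      # integer codes to transmit
    return codes, lo, scale                    # lo/scale sent as side information

def dequantize(codes, lo, scale):
    return codes * scale + lo                  # receiver-side reconstruction
```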
arXiv Detail & Related papers (2020-06-18T16:55:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.