SIFU: Sequential Informed Federated Unlearning for Efficient and Provable Client Unlearning in Federated Optimization
- URL: http://arxiv.org/abs/2211.11656v5
- Date: Sat, 16 Mar 2024 00:47:57 GMT
- Title: SIFU: Sequential Informed Federated Unlearning for Efficient and Provable Client Unlearning in Federated Optimization
- Authors: Yann Fraboni, Martin Van Waerebeke, Kevin Scaman, Richard Vidal, Laetitia Kameni, Marco Lorenzi
- Abstract summary: Machine Unlearning (MU) aims at removing the contribution of a given data point from a training procedure.
While several Federated Unlearning (FU) methods have been proposed, we propose SIFU (Sequential Informed Federated Unlearning) as a new method.
- Score: 23.064896326146386
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine Unlearning (MU) is an increasingly important topic in machine learning safety, aiming at removing the contribution of a given data point from a training procedure. Federated Unlearning (FU) consists in extending MU to unlearn a given client's contribution from a federated training routine. While several FU methods have been proposed, we currently lack a general approach providing formal unlearning guarantees to the FedAvg routine, while ensuring scalability and generalization beyond the convex assumption on the clients' loss functions. We aim at filling this gap by proposing SIFU (Sequential Informed Federated Unlearning), a new FU method applying to both convex and non-convex optimization regimes. SIFU naturally applies to FedAvg without additional computational cost for the clients and provides formal guarantees on the quality of the unlearning task. We provide a theoretical analysis of the unlearning properties of SIFU, and practically demonstrate its effectiveness as compared to a panel of unlearning methods from the state-of-the-art.
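Since the abstract states that SIFU applies directly to the standard FedAvg routine, the following is a minimal, hypothetical FedAvg sketch in Python; the `local_update` placeholder, the `clients` structure, and the per-round `history` bookkeeping are illustrative assumptions and do not reproduce SIFU's actual unlearning mechanism.

```python
import numpy as np

def local_update(global_weights, client_data, lr=0.1, epochs=1):
    """Placeholder for a client's local training pass (assumed interface)."""
    w = global_weights.copy()
    for _ in range(epochs):
        grad = np.zeros_like(w)  # stand-in for the gradient of the client loss
        w -= lr * grad
    return w

def fedavg(global_weights, clients, rounds=10):
    """clients: assumed list of (client_data, n_samples) pairs."""
    history = []  # per-round global models; a server-side unlearning method
                  # needs some record of training it can later roll back to
    for _ in range(rounds):
        local_models, sizes = [], []
        for client_data, n_samples in clients:
            local_models.append(local_update(global_weights, client_data))
            sizes.append(n_samples)
        sizes = np.asarray(sizes, dtype=float)
        # FedAvg aggregation: dataset-size-weighted average of client models
        global_weights = np.average(np.stack(local_models), axis=0,
                                    weights=sizes / sizes.sum())
        history.append(global_weights.copy())
    return global_weights, history

# Example usage with two hypothetical clients holding 100 and 50 samples:
final_model, history = fedavg(np.zeros(10), [(None, 100), (None, 50)], rounds=3)
```

The sketch only shows where a server could record per-round state during FedAvg; how SIFU selects the rollback point and certifies the unlearning guarantee is detailed in the paper itself.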
Related papers
- FPPL: An Efficient and Non-IID Robust Federated Continual Learning Framework [6.446904116575293]
Federated continual learning (FCL) aims to learn from sequential data streams in the decentralized federated learning setting.
Existing FCL methods usually employ typical rehearsal mechanisms, which could result in privacy violations or additional onerous storage and computational burdens.
In this work, an efficient and non-IID robust federated continual learning framework, called Federated Prototype-Augmented Prompt Learning (FPPL), is proposed.
arXiv Detail & Related papers (2024-11-04T09:15:21Z) - Efficient Federated Learning against Heterogeneous and Non-stationary Client Unavailability [23.466997173249034]
FedAPM includes novel structures that (i) compensate for missed computations due to unavailability with only $O(1)$ additional memory and computation with respect to standard FedAvg.
We show that FedAPM converges to a stationary point even though the client availability dynamics are non-stationary.
arXiv Detail & Related papers (2024-09-26T00:38:18Z) - Mind the Interference: Retaining Pre-trained Knowledge in Parameter Efficient Continual Learning of Vision-Language Models [79.28821338925947]
Domain-Class Incremental Learning is a realistic but challenging continual learning scenario.
To handle these diverse tasks, pre-trained Vision-Language Models (VLMs) are introduced for their strong generalizability.
This incurs a new problem: the knowledge encoded in the pre-trained VLMs may be disturbed when adapting to new tasks, compromising their inherent zero-shot ability.
Existing methods tackle it by tuning VLMs with knowledge distillation on extra datasets, which demands heavy overhead.
We propose the Distribution-aware Interference-free Knowledge Integration (DIKI) framework, retaining the pre-trained knowledge of VLMs.
arXiv Detail & Related papers (2024-07-07T12:19:37Z) - Unlearning during Learning: An Efficient Federated Machine Unlearning Method [20.82138206063572]
Federated Learning (FL) has garnered significant attention as a distributed machine learning paradigm.
To facilitate the implementation of the right to be forgotten, the concept of federated machine unlearning (FMU) has also emerged.
We introduce FedAU, an innovative and efficient FMU framework aimed at overcoming the limitations of existing approaches.
arXiv Detail & Related papers (2024-05-24T11:53:13Z) - FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific, auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
arXiv Detail & Related papers (2023-09-18T12:35:05Z) - Uncertainty Estimation by Fisher Information-based Evidential Deep Learning [61.94125052118442]
Uncertainty estimation is a key factor that makes deep learning reliable in practical applications.
We propose a novel method, Fisher Information-based Evidential Deep Learning ($\mathcal{I}$-EDL).
In particular, we introduce Fisher Information Matrix (FIM) to measure the informativeness of evidence carried by each sample, according to which we can dynamically reweight the objective loss terms to make the network more focused on the representation learning of uncertain classes.
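The snippet below is only a generic sketch of such per-sample loss reweighting; the `informativeness` argument is a stand-in for whatever FIM-based score the paper actually derives, and the normalization is an assumption rather than the exact $\mathcal{I}$-EDL objective.

```python
import numpy as np

def reweighted_loss(per_sample_losses, informativeness):
    """Scale per-sample losses by a normalized informativeness score (illustrative)."""
    losses = np.asarray(per_sample_losses, dtype=float)
    scores = np.asarray(informativeness, dtype=float)
    weights = scores / scores.mean()  # mean weight is 1, preserving the loss scale
    return float(np.mean(weights * losses))
```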
arXiv Detail & Related papers (2023-03-03T16:12:59Z) - When Do Curricula Work in Federated Learning? [56.88941905240137]
We find that curriculum learning largely alleviates non-IIDness.
The more disparate the data distributions across clients, the more they benefit from curriculum learning.
We also propose a novel client selection technique that benefits from the real-world disparity among clients.
arXiv Detail & Related papers (2022-12-24T11:02:35Z) - Balanced Self-Paced Learning for AUC Maximization [88.53174245457268]
Existing self-paced methods are limited to pointwise AUC.
Our algorithm converges to a stationary point on the basis of closed-form solutions.
arXiv Detail & Related papers (2022-07-08T02:09:32Z) - FedU: A Unified Framework for Federated Multi-Task Learning with
Laplacian Regularization [15.238123204624003]
Federated multi-task learning (FMTL) has emerged as a natural choice to capture the statistical diversity among the clients in federated learning.
To unleash the potential of FMTL beyond statistical diversity, we formulate a new FMTL problem, FedU, using Laplacian regularization.
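For context, a typical Laplacian-regularized FMTL objective of the kind referred to above can be written as follows; the notation is assumed ($w_k$ is client $k$'s model, $F_k$ its local empirical loss, $a_{kl} \ge 0$ graph weights encoding client relationships, and $\eta$ the regularization strength), and the exact FedU formulation is given in the paper.

```latex
\min_{w_1,\dots,w_N} \; \sum_{k=1}^{N} F_k(w_k)
  \;+\; \frac{\eta}{2} \sum_{k=1}^{N} \sum_{l=1}^{N} a_{kl} \,\lVert w_k - w_l \rVert_2^2
```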
arXiv Detail & Related papers (2021-02-14T13:19:43Z) - FedAUX: Leveraging Unlabeled Auxiliary Data in Federated Learning [14.10627556244287]
Federated Distillation (FD) is a popular novel algorithmic paradigm for Federated Learning.
We propose FedAUX, which drastically improves performance by deriving maximum utility from the unlabeled auxiliary data.
Experiments on large-scale convolutional neural networks and transformer models demonstrate that the training performance of FedAUX exceeds SOTA FL baseline methods.
arXiv Detail & Related papers (2021-02-04T09:53:53Z) - FedDANE: A Federated Newton-Type Method [49.9423212899788]
Federated learning aims to jointly learn statistical models over massively distributed datasets.
We propose FedDANE, an optimization method that we adapt from DANE to handle federated learning.
arXiv Detail & Related papers (2020-01-07T07:44:41Z)