Communication Efficient and Provable Federated Unlearning
- URL: http://arxiv.org/abs/2401.11018v1
- Date: Fri, 19 Jan 2024 20:35:02 GMT
- Title: Communication Efficient and Provable Federated Unlearning
- Authors: Youming Tao, Cheng-Long Wang, Miao Pan, Dongxiao Yu, Xiuzhen Cheng, Di Wang
- Abstract summary: We study federated unlearning, a novel problem to eliminate the impact of specific clients or data points on the global model learned via federated learning (FL).
This problem is driven by the right to be forgotten and the privacy challenges in FL.
We introduce a new framework for exact federated unlearning that meets two essential criteria: communication efficiency and exact unlearning provability.
- Score: 43.178460522012934
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We study federated unlearning, a novel problem to eliminate the impact of
specific clients or data points on the global model learned via federated
learning (FL). This problem is driven by the right to be forgotten and the
privacy challenges in FL. We introduce a new framework for exact federated
unlearning that meets two essential criteria: \textit{communication efficiency}
and \textit{exact unlearning provability}. To our knowledge, this is the first
work to tackle both aspects coherently. We start by giving a rigorous
definition of \textit{exact} federated unlearning, which guarantees that the
unlearned model is statistically indistinguishable from the one trained without
the deleted data. We then pinpoint the key property that enables fast exact
federated unlearning: total variation (TV) stability, which measures the
sensitivity of the model parameters to slight changes in the dataset.
Leveraging this insight, we develop a TV-stable FL algorithm called
\texttt{FATS}, which modifies the classical
\texttt{\underline{F}ed\underline{A}vg} algorithm for \underline{T}V
\underline{S}tability and employs local SGD with periodic averaging to reduce
the number of communication rounds. We also design efficient unlearning algorithms for
\texttt{FATS} under two settings: client-level and sample-level unlearning. We
provide theoretical guarantees for our learning and unlearning algorithms,
proving that they achieve exact federated unlearning with reasonable
convergence rates for both the original and unlearned models. We empirically
validate our framework on 6 benchmark datasets, and show its superiority over
state-of-the-art methods in terms of accuracy, communication cost, computation
cost, and unlearning efficacy.
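
The abstract invokes TV stability and exact unlearning without stating them formally. A standard formalization from the TV-stability literature, offered here as a reading aid rather than as the paper's exact definitions, is:

\[
\mathrm{TV}(P, Q) \;=\; \sup_{E} \,\lvert P(E) - Q(E) \rvert ,
\]

a randomized learning algorithm $\mathcal{A}$ is $\rho$-TV-stable if, for any two datasets $D$ and $D'$ differing in a single sample (or a single client),
\[
\mathrm{TV}\big(\mathcal{A}(D),\, \mathcal{A}(D')\big) \;\le\; \rho ,
\]
and an unlearning procedure $\mathcal{U}$ is exact if the unlearned model is distributed identically to full retraining without the deleted data, i.e. $\mathrm{TV}\big(\mathcal{U}(\mathcal{A}(D), z),\, \mathcal{A}(D \setminus \{z\})\big) = 0$ for every deleted point (or client) $z$.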
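The abstract describes FATS only as FedAvg modified for TV stability, using local SGD with periodic averaging. The sketch below shows that communication pattern in a minimal, self-contained form; the function name, the linear least-squares model, and the client sub-sampling knob are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def local_sgd_with_periodic_averaging(
    client_data,             # list of (X, y) pairs, one per client
    rounds=10,               # communication rounds (one averaging per round)
    local_steps=5,           # SGD steps each client runs between averagings
    lr=0.1,                  # local learning rate
    clients_per_round=None,  # sub-sample clients; randomized selection is the
                             # kind of knob a TV-stable variant would control
    seed=0,
):
    """FedAvg-style training of a linear least-squares model: each sampled
    client runs `local_steps` of local SGD from the current global weights,
    then the server averages the resulting client models."""
    rng = np.random.default_rng(seed)
    dim = client_data[0][0].shape[1]
    w = np.zeros(dim)                              # global model weights
    m = clients_per_round or len(client_data)
    for _ in range(rounds):
        sampled = rng.choice(len(client_data), size=m, replace=False)
        client_models = []
        for c in sampled:
            X, y = client_data[c]
            w_c = w.copy()
            for _ in range(local_steps):           # local SGD steps
                i = rng.integers(len(y))
                grad = (X[i] @ w_c - y[i]) * X[i]  # squared-loss gradient
                w_c -= lr * grad
            client_models.append(w_c)
        w = np.mean(client_models, axis=0)         # periodic averaging
    return w

# Toy usage: three clients holding synthetic linear data.
rng = np.random.default_rng(1)
w_true = rng.normal(size=5)
clients = []
for _ in range(3):
    X = rng.normal(size=(40, 5))
    clients.append((X, X @ w_true + 0.01 * rng.normal(size=40)))
w_hat = local_sgd_with_periodic_averaging(clients, rounds=20, local_steps=5, lr=0.05)
print(np.linalg.norm(w_hat - w_true))              # should be small
```

In this pattern the only cross-client communication is the once-per-round averaging, which is what keeps the round count low. The common recipe in the TV-stability unlearning literature, which FATS plausibly follows but which the abstract does not spell out, is to control the randomness of client and sample selection so that deleting one client or sample leaves the distribution of intermediate states nearly unchanged, allowing unlearning to retrain only from the first round the deleted data actually influenced.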
Related papers
- Federated Unlearning via Active Forgetting [24.060724751342047]
We propose a novel federated unlearning framework based on incremental learning.
Our framework differs from existing federated unlearning methods that rely on approximate retraining or data influence estimation.
arXiv Detail & Related papers (2023-07-07T03:07:26Z)
- Byzantine-Resilient Federated Learning at Edge [20.742023657098525]
We present a Byzantine-resilient descent algorithm that can handle heavy-tailed data.
We also propose an algorithm that incorporates costs during the learning process.
arXiv Detail & Related papers (2023-03-18T15:14:16Z)
- Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on a momentum-based variance reduction technique in cross-silo FL.
arXiv Detail & Related papers (2022-12-02T05:07:50Z)
- sqSGD: Locally Private and Communication Efficient Federated Learning [14.60645909629309]
Federated learning (FL) is a technique that trains machine learning models from decentralized data sources.
We develop a gradient-based learning algorithm called sqSGD that addresses communication efficiency and high-dimensional compatibility.
Experiment results show sqSGD successfully learns large models like LeNet and ResNet with local privacy constraints.
arXiv Detail & Related papers (2022-06-21T17:45:35Z)
- Communication-Efficient Robust Federated Learning with Noisy Labels [144.31995882209932]
Federated learning (FL) is a promising privacy-preserving machine learning paradigm over distributed data.
We propose a learning-based reweighting approach to mitigate the effect of noisy labels in FL.
Our approach has shown superior performance on several real-world datasets compared to various baselines.
arXiv Detail & Related papers (2022-06-11T16:21:17Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- Dynamic Attention-based Communication-Efficient Federated Learning [85.18941440826309]
Federated learning (FL) offers a solution to train a global machine learning model.
FL suffers performance degradation when client data distribution is non-IID.
We propose a new adaptive training algorithm, AdaFL, to combat this degradation.
arXiv Detail & Related papers (2021-08-12T14:18:05Z)
- FedSemi: An Adaptive Federated Semi-Supervised Learning Framework [23.90642104477983]
Federated learning (FL) has emerged as an effective technique for co-training machine learning models without sharing data or leaking privacy.
Most existing FL methods focus on the supervised setting and ignore the utilization of unlabeled data.
We propose FedSemi, a novel, adaptive, and general framework, which is the first to introduce consistency regularization into FL via a teacher-student model.
arXiv Detail & Related papers (2020-12-06T15:46:04Z)
- FedPD: A Federated Learning Framework with Optimal Rates and Adaptivity to Non-IID Data [59.50904660420082]
Federated Learning (FL) has become a popular paradigm for learning from distributed data.
To effectively utilize data at different devices without moving them to the cloud, algorithms such as the Federated Averaging (FedAvg) have adopted a "computation then aggregation" (CTA) model.
arXiv Detail & Related papers (2020-05-22T23:07:42Z)