Subspace based Federated Unlearning
- URL: http://arxiv.org/abs/2302.12448v1
- Date: Fri, 24 Feb 2023 04:29:44 GMT
- Title: Subspace based Federated Unlearning
- Authors: Guanghao Li, Li Shen, Yan Sun, Yue Hu, Han Hu, Dacheng Tao
- Abstract summary: Federated unlearning aims to remove a specified target client's contribution in federated learning (FL) to satisfy the user's right to be forgotten.
Most existing federated unlearning algorithms require the server to store the history of the parameter updates.
We propose a simple-yet-effective subspace based federated unlearning method, dubbed SFU, that lets the global model perform gradient ascent in the orthogonal space of the input gradient spaces formed by the other clients.
- Score: 75.90552823500633
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) enables multiple clients to train a machine learning
model collaboratively without exchanging their local data. Federated unlearning
is an inverse FL process that aims to remove a specified target client's
contribution in FL to satisfy the user's right to be forgotten. Most existing
federated unlearning algorithms require the server to store the history of the
parameter updates, which is not applicable in scenarios where the server
storage resource is constrained. In this paper, we propose a
simple-yet-effective subspace based federated unlearning method, dubbed SFU,
that lets the global model perform gradient ascent in the orthogonal space of
input gradient spaces formed by other clients to eliminate the target client's
contribution without requiring additional storage. Specifically, the server
first collects the gradients generated from the target client after performing
gradient ascent, and the input representation matrix is computed locally by the
remaining clients. We also design a differential privacy method to protect the
privacy of the representation matrix. Then the server merges those
representation matrices to get the input gradient subspace and updates the
global model in the orthogonal subspace of the input gradient subspace to
complete the forgetting task with minimal model performance degradation.
Experiments on MNIST, CIFAR10, and CIFAR100 show that SFU outperforms several
state-of-the-art (SOTA) federated unlearning algorithms by a large margin in
various settings.
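A minimal sketch of the unlearning step described in the abstract, written for a single linear layer with hypothetical names (`sfu_unlearning_step`, `energy`, `dp_sigma`); the actual SFU procedure operates layer-wise and its exact subspace extraction and privacy mechanism are specified in the paper:

```python
import numpy as np

def sfu_unlearning_step(ascent_grad, rep_matrices, energy=0.95, dp_sigma=0.0):
    """Sketch of an SFU-style unlearning step for one linear layer (names hypothetical).

    ascent_grad  : (out_dim, in_dim) gradient sent by the target client after
                   performing gradient ascent on its local data
    rep_matrices : list of (in_dim, n_samples) input representation matrices
                   computed locally by the remaining clients
    energy       : fraction of spectral energy kept when extracting the subspace
    dp_sigma     : std. dev. of Gaussian noise added for differential privacy (0 = off)
    """
    # Remaining clients may perturb their representation matrices before sharing them.
    noisy = [R + dp_sigma * np.random.randn(*R.shape) for R in rep_matrices]

    # Server merges the representations and extracts the input gradient subspace via
    # an SVD, keeping the leading directions that cover `energy` of the spectrum.
    merged = np.concatenate(noisy, axis=1)
    U, S, _ = np.linalg.svd(merged, full_matrices=False)
    k = int(np.searchsorted(np.cumsum(S ** 2) / np.sum(S ** 2), energy)) + 1
    basis = U[:, :k]  # (in_dim, k), spans the other clients' input space

    # Remove the component of the ascent gradient that lies in that subspace, so the
    # forgetting step barely disturbs what the remaining clients have learned.
    return ascent_grad - (ascent_grad @ basis) @ basis.T
```

The server would then apply the projected direction as an ascent update to the global model, e.g. `W = W + lr * sfu_unlearning_step(...)` per layer.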
Related papers
- Personalized federated learning based on feature fusion [2.943623084019036]
Federated learning enables distributed clients to collaborate on training while storing their data locally to protect client privacy.
We propose a personalized federated learning approach called pFedPM.
In our process, we replace traditional gradient uploading with feature uploading, which helps reduce communication costs and allows for heterogeneous client models.
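The summary does not give pFedPM's exact fusion rule; the sketch below only illustrates the general idea of uploading compact feature statistics instead of gradients (the names and the count-weighted fusion are my assumptions):

```python
import numpy as np

def client_feature_message(features, labels, num_classes):
    """Client side: summarize local data as per-class mean features (assumed form)."""
    dim = features.shape[1]
    means = np.zeros((num_classes, dim))
    counts = np.zeros(num_classes)
    for c in range(num_classes):
        mask = labels == c
        counts[c] = mask.sum()
        if counts[c] > 0:
            means[c] = features[mask].mean(axis=0)
    return means, counts

def server_fuse(messages):
    """Server side: fuse uploaded per-class means, weighted by per-class sample counts."""
    means = np.stack([m for m, _ in messages])    # (clients, classes, dim)
    counts = np.stack([c for _, c in messages])   # (clients, classes)
    weights = counts / np.clip(counts.sum(axis=0), 1e-12, None)
    return np.einsum('kc,kcd->cd', weights, means)  # fused per-class features
```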
arXiv Detail & Related papers (2024-06-24T12:16:51Z)
- Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating local training.
In this paper, we present a novel FL algorithm, i.e., FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that our FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement against the top-performing method with less than 15% communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z)
- GIFD: A Generative Gradient Inversion Method with Feature Domain Optimization [52.55628139825667]
Federated Learning (FL) has emerged as a promising distributed machine learning framework to preserve clients' privacy.
Recent studies find that an attacker can invert the shared gradients and recover sensitive data against an FL system by leveraging pre-trained generative adversarial networks (GAN) as prior knowledge.
We propose Gradient Inversion over Feature Domains (GIFD), which disassembles the GAN model and searches the feature domains of the intermediate layers.
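A rough sketch of the gradient-matching objective underlying GAN-based gradient inversion; the generator interface and the feature-offset variables are assumptions for illustration, not GIFD's exact procedure:

```python
import torch

def gradient_matching_loss(model, criterion, generator, z, feat_offsets, target_grads, labels):
    """Attacker objective (sketch): make the gradients of a synthesized input match the
    gradients shared by the victim client. `generator(z, feat_offsets)` is a hypothetical
    interface for a GAN whose intermediate features can be perturbed during the search."""
    x = generator(z, feat_offsets)                   # candidate reconstruction
    dummy_loss = criterion(model(x), labels)
    dummy_grads = torch.autograd.grad(dummy_loss, tuple(model.parameters()), create_graph=True)
    return sum(((g - t) ** 2).sum() for g, t in zip(dummy_grads, target_grads))
```

Minimizing this loss over `z` (and, in GIFD's spirit, over feature-domain variables of intermediate layers) drives the reconstruction toward the client's private input.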
arXiv Detail & Related papers (2023-08-09T04:34:21Z)
- Federated Learning for Semantic Parsing: Task Formulation, Evaluation Setup, New Algorithms [29.636944156801327]
Multiple clients collaboratively train one global model without sharing their semantic parsing data.
Lorar adjusts each client's contribution to the global model update based on its training loss reduction during each round.
Clients with smaller datasets enjoy larger performance gains.
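The exact Lorar weighting is not given in the summary; the sketch below only illustrates aggregation weighted by each client's per-round training-loss reduction (names and the normalization are assumptions):

```python
def loss_reduction_aggregate(global_params, client_updates, loss_before, loss_after):
    """Weight each client's update by how much its training loss dropped this round."""
    reductions = [max(b - a, 0.0) for b, a in zip(loss_before, loss_after)]
    total = sum(reductions) or 1.0                      # avoid division by zero
    weights = [r / total for r in reductions]
    return {
        name: global_params[name]
        + sum(w * update[name] for w, update in zip(weights, client_updates))
        for name in global_params
    }
```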
arXiv Detail & Related papers (2023-05-26T19:25:49Z)
- FedHP: Heterogeneous Federated Learning with Privacy-preserving [0.0]
Federated learning is a distributed machine learning paradigm in which clients complete collaborative training by exchanging only model parameters, without sharing private data.
We propose a novel federated learning method, which consists of the pre-trained model as the backbone and fully connected layers as the head.
By sharing the embedding vector of classes, instead of parameters based on gradient space, clients can better adapt to private data, and it is more efficient in the communication between the server and clients.
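A minimal sketch of the class-embedding exchange described above, assuming a frozen pre-trained backbone and PyTorch-style data loaders (all names hypothetical):

```python
import torch

def local_class_embeddings(backbone, loader, num_classes, dim):
    """Client side: compute per-class embedding vectors from the frozen backbone;
    these vectors, not gradients, are what gets shared with the server."""
    sums = torch.zeros(num_classes, dim)
    counts = torch.zeros(num_classes)
    backbone.eval()
    with torch.no_grad():
        for x, y in loader:
            feats = backbone(x)                          # (batch, dim)
            sums.index_add_(0, y, feats)
            counts.index_add_(0, y, torch.ones(y.size(0)))
    return sums / counts.clamp(min=1).unsqueeze(1)       # (num_classes, dim)
```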
arXiv Detail & Related papers (2023-01-27T13:32:17Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and back propagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
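One plausible form of such a contrastive distillation objective, an InfoNCE-style loss; the paper's exact formulation may differ:

```python
import torch
import torch.nn.functional as F

def contrastive_distillation_loss(local_repr, shared_repr, temperature=0.1):
    """Pull each sample's local representation toward the representation shared for
    the same sample and push it away from other samples' shared representations."""
    z1 = F.normalize(local_repr, dim=1)
    z2 = F.normalize(shared_repr, dim=1)
    logits = z1 @ z2.t() / temperature                      # (batch, batch) similarities
    targets = torch.arange(z1.size(0), device=z1.device)    # positives on the diagonal
    return F.cross_entropy(logits, targets)
```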
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- Learning Across Domains and Devices: Style-Driven Source-Free Domain Adaptation in Clustered Federated Learning [32.098954477227046]
We propose a novel task in which the clients' data is unlabeled and the server accesses a source labeled dataset for pre-training only.
Our experiments show that our algorithm is able to efficiently tackle the new task outperforming existing approaches.
arXiv Detail & Related papers (2022-10-05T15:23:52Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- An Expectation-Maximization Perspective on Federated Learning [75.67515842938299]
Federated learning describes the distributed training of models across multiple clients while keeping the data private on-device.
In this work, we view the server-orchestrated federated learning process as a hierarchical latent variable model where the server provides the parameters of a prior distribution over the client-specific model parameters.
We show that with simple Gaussian priors and a hard version of the well known Expectation-Maximization (EM) algorithm, learning in such a model corresponds to FedAvg, the most popular algorithm for the federated learning setting.
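A compact sketch (my notation, uniform client weighting assumed) of why hard EM with a shared Gaussian prior recovers FedAvg-style averaging:

```latex
\begin{align*}
&\text{Prior over client parameters:} \quad
  p(\theta_k \mid \mu) = \mathcal{N}(\theta_k \mid \mu, \sigma^2 I) \\
&\text{Hard E-step (local training as MAP):} \quad
  \theta_k^\star = \arg\max_{\theta_k} \Big[ \log p(D_k \mid \theta_k)
  - \tfrac{1}{2\sigma^2} \lVert \theta_k - \mu \rVert^2 \Big] \\
&\text{M-step (server update of the prior mean):} \quad
  \mu^{\text{new}} = \arg\max_{\mu} \sum_{k=1}^{K} \log \mathcal{N}(\theta_k^\star \mid \mu, \sigma^2 I)
  = \frac{1}{K} \sum_{k=1}^{K} \theta_k^\star
\end{align*}
```

The M-step is exactly the parameter averaging performed by FedAvg, while the Gaussian prior in the E-step acts as a proximal term pulling local training toward the server mean; as the prior variance grows, local training reduces to plain optimization on each client's data.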
arXiv Detail & Related papers (2021-11-19T12:58:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.