Vertical Federated Unlearning via Backdoor Certification
- URL: http://arxiv.org/abs/2412.11476v1
- Date: Mon, 16 Dec 2024 06:40:25 GMT
- Title: Vertical Federated Unlearning via Backdoor Certification
- Authors: Mengde Han, Tianqing Zhu, Lefeng Zhang, Huan Huo, Wanlei Zhou
- Abstract summary: VFL offers a novel paradigm in machine learning, enabling distinct entities to train models cooperatively while maintaining data privacy.
Recent privacy regulations emphasize an individual's \emph{right to be forgotten}, which necessitates the ability for models to unlearn specific training data.
We introduce an innovative modification to traditional VFL by employing a mechanism that inverts the typical learning trajectory with the objective of extracting specific data contributions.
- Score: 15.042986414487922
- License:
- Abstract: Vertical Federated Learning (VFL) offers a novel paradigm in machine learning, enabling distinct entities to train models cooperatively while maintaining data privacy. This method is particularly pertinent when entities possess datasets with identical sample identifiers but diverse attributes. Recent privacy regulations emphasize an individual's \emph{right to be forgotten}, which necessitates the ability for models to unlearn specific training data. The primary challenge is to develop a mechanism to eliminate the influence of a specific client from a model without erasing all relevant data from other clients. Our research investigates the removal of a single client's contribution within the VFL framework. We introduce an innovative modification to traditional VFL by employing a mechanism that inverts the typical learning trajectory with the objective of extracting specific data contributions. This approach seeks to optimize model performance using gradient ascent, guided by a pre-defined constrained model. We also introduce a backdoor mechanism to verify the effectiveness of the unlearning procedure. Our method avoids fully accessing the initial training data and avoids storing parameter updates. Empirical evidence shows that the results align closely with those achieved by retraining from scratch. Utilizing gradient ascent, our unlearning approach addresses key challenges in VFL, laying the groundwork for future advancements in this domain. All the code and implementations related to this paper are publicly available at https://github.com/mengde-han/VFL-unlearn.
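As a rough illustration of the mechanism the abstract describes (gradient ascent bounded by a pre-defined constrained model, with a planted backdoor used to certify forgetting), the following minimal PyTorch sketch is one plausible rendering. The model and loader names and the radius `delta` are assumptions for illustration, not the authors' released code; see the linked repository for the actual implementation.

```python
# Hedged sketch: gradient-ascent unlearning with an L2 constraint around a
# reference model, plus a backdoor-based certification check. Names and
# hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

def unlearn_by_ascent(model, forget_loader, reference, delta=1.0,
                      lr=0.01, steps=50):
    """Ascend the loss on the forget set while keeping parameters within
    an L2 ball of radius `delta` around a pre-defined reference model."""
    ref = [p.detach().clone() for p in reference.parameters()]
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    it = iter(forget_loader)
    for _ in range(steps):
        try:
            x, y = next(it)
        except StopIteration:
            it = iter(forget_loader)
            x, y = next(it)
        opt.zero_grad()
        (-loss_fn(model(x), y)).backward()   # negate: gradient ascent
        opt.step()
        with torch.no_grad():                # project back into the ball
            for p, r in zip(model.parameters(), ref):
                diff = p - r
                norm = diff.norm()
                if norm > delta:
                    p.copy_(r + diff * (delta / norm))
    return model

def backdoor_success_rate(model, triggered_loader, target_label):
    """If unlearning succeeded, the planted trigger should no longer
    flip predictions to `target_label`."""
    model.eval()
    hits, total = 0, 0
    with torch.no_grad():
        for x, _ in triggered_loader:
            pred = model(x).argmax(dim=1)
            hits += (pred == target_label).sum().item()
            total += x.size(0)
    return hits / max(total, 1)
```

After `unlearn_by_ascent`, a backdoor success rate near chance on the triggered set is the certification signal the abstract describes.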
Related papers
- Federated Unlearning with Gradient Descent and Conflict Mitigation [11.263010875673492]
Federated Unlearning (FU) has been considered a promising way to remove data without full retraining.
We propose Federated Unlearning with Orthogonal Steepest Descent (FedOSD).
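A minimal NumPy sketch of the conflict-mitigation idea behind orthogonal steepest descent, assuming gradients flattened to vectors; this illustrates the projection only, not FedOSD itself:

```python
# Hedged sketch: project the unlearning update to be orthogonal to each
# retained client's gradient, so forgetting one client does not undo the
# others' progress. Gram-Schmidt-style removal; an illustration only.
import numpy as np

def orthogonalize(unlearn_grad, retained_grads):
    g = unlearn_grad.copy()
    for h in retained_grads:
        denom = h @ h
        if denom > 1e-12:
            g = g - (g @ h) / denom * h   # remove the conflicting component
    return g
```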
arXiv Detail & Related papers (2024-12-28T16:23:10Z)
- Identify Backdoored Model in Federated Learning via Individual Unlearning [7.200910949076064]
Backdoor attacks present a significant threat to the robustness of Federated Learning (FL)
We propose MASA, a method that utilizes individual unlearning on local models to identify malicious models in FL.
To the best of our knowledge, this is the first work to leverage machine unlearning to identify malicious models in FL.
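One hedged reading of screening-by-unlearning: briefly unlearn each submitted local model on a small proxy batch and flag loss trajectories that are statistical outliers. The proxy data and the MAD threshold below are assumptions for illustration, not MASA's exact procedure:

```python
# Toy sketch of identifying suspicious models via individual unlearning.
# The outlier rule (median absolute deviation) is an assumption.
import copy
import torch
import torch.nn as nn

def unlearning_loss_gain(model, proxy_loader, lr=0.01, steps=5):
    m = copy.deepcopy(model)
    opt = torch.optim.SGD(m.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    x, y = next(iter(proxy_loader))
    start = loss_fn(m(x), y).item()
    for _ in range(steps):
        opt.zero_grad()
        (-loss_fn(m(x), y)).backward()   # short gradient-ascent pass
        opt.step()
    return loss_fn(m(x), y).item() - start

def flag_outliers(models, proxy_loader, k=3.0):
    gains = torch.tensor([unlearning_loss_gain(m, proxy_loader)
                          for m in models])
    med = gains.median()
    mad = (gains - med).abs().median() + 1e-8
    return [i for i, g in enumerate(gains) if abs(g - med) / mad > k]
```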
arXiv Detail & Related papers (2024-11-01T21:19:47Z)
- Update Selective Parameters: Federated Machine Unlearning Based on Model Explanation [46.86767774669831]
We propose a more effective and efficient federated unlearning scheme based on the concept of model explanation.
We select the most influential channels within an already-trained model for the data that need to be unlearned.
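A hedged sketch of channel selection: score each output channel of a convolutional layer by the norm of its weight gradient on the forget data, then restrict updates to the top-k channels. The saliency rule is a common heuristic and may differ from the paper's exact criterion:

```python
# Hedged sketch: pick influential conv channels for the forget data and
# mask gradients so only those channels are updated during unlearning.
import torch
import torch.nn as nn

def top_influential_channels(model, layer: nn.Conv2d, forget_batch, k=8):
    x, y = forget_batch
    model.zero_grad()
    nn.CrossEntropyLoss()(model(x), y).backward()
    # Per-output-channel gradient norm: shape (out_channels,)
    scores = layer.weight.grad.flatten(1).norm(dim=1)
    return scores.topk(k).indices

def mask_to_selected(layer: nn.Conv2d, idx):
    """Zero gradients of all channels except the selected ones, so an
    optimizer step touches only the influential channels."""
    mask = torch.zeros(layer.weight.shape[0], dtype=torch.bool)
    mask[idx] = True
    layer.weight.grad[~mask] = 0
    if layer.bias is not None and layer.bias.grad is not None:
        layer.bias.grad[~mask] = 0
```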
arXiv Detail & Related papers (2024-06-18T11:43:20Z)
- Blockchain-enabled Trustworthy Federated Unlearning [50.01101423318312]
Federated unlearning is a promising paradigm for protecting the data ownership of distributed clients.
Existing works require central servers to retain the historical model parameters from distributed clients.
This paper proposes a new blockchain-enabled trustworthy federated unlearning framework.
arXiv Detail & Related papers (2024-01-29T07:04:48Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the data issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- Learn to Unlearn for Deep Neural Networks: Minimizing Unlearning Interference with Gradient Projection [56.292071534857946]
Recent data-privacy laws have sparked interest in machine unlearning.
The challenge is to discard information about the "forget" data without altering knowledge about the remaining dataset.
We adopt a projected-gradient-based unlearning method, named Projected-Gradient Unlearning (PGU).
We provide empirical evidence that our unlearning method produces models that behave similarly to models retrained from scratch across various metrics, even when the training dataset is no longer accessible.
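A minimal NumPy sketch of the projection idea, assuming per-sample gradients flattened to vectors: build a low-rank basis of retained-data gradients and take unlearning steps in its orthogonal complement. The rank and the use of plain SVD are illustrative choices:

```python
# Hedged sketch of projected-gradient unlearning: steps that avoid the
# subspace spanned by retained-data gradients.
import numpy as np

def retain_basis(retain_grads, r=10):
    G = np.stack(retain_grads, axis=1)        # (n_params, n_samples)
    U, _, _ = np.linalg.svd(G, full_matrices=False)
    return U[:, :r]                           # top-r retained directions

def project_out(step, U):
    return step - U @ (U.T @ step)            # orthogonal complement

# Usage: theta_new = theta + lr * project_out(ascent_grad, U)
```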
arXiv Detail & Related papers (2023-12-07T07:17:24Z)
- Don't Memorize; Mimic The Past: Federated Class Incremental Learning Without Episodic Memory [36.4406505365313]
This paper presents a framework for federated class incremental learning that utilizes a generative model to synthesize samples from past distributions instead of storing part of past data.
The generative model is trained on the server using data-free methods at the end of each task without requesting data from clients.
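A hedged PyTorch sketch of data-free generator training by model inversion: the frozen global model scores generator samples, and the generator is trained so those samples are classified confidently as chosen past-class labels. The architecture, the flat 784-dimensional input assumption, and the loss are illustrative:

```python
# Hedged sketch: data-free replay via inversion of a frozen classifier.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=64, out_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, out_dim), nn.Tanh())
    def forward(self, z):
        return self.net(z)

def train_generator(global_model, n_classes, z_dim=64, steps=200):
    for p in global_model.parameters():       # freeze the teacher
        p.requires_grad_(False)
    global_model.eval()
    gen = Generator(z_dim)
    opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        z = torch.randn(32, z_dim)
        y = torch.randint(0, n_classes, (32,))
        loss = loss_fn(global_model(gen(z)), y)  # class-consistency loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return gen
```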
arXiv Detail & Related papers (2023-07-02T07:06:45Z)
- Subspace based Federated Unlearning [75.90552823500633]
Federated unlearning aims to remove a specified target client's contribution from a federated learning (FL) model to satisfy the user's right to be forgotten.
Most existing federated unlearning algorithms require the server to store the history of the parameter updates.
We propose a simple-yet-effective subspace based federated unlearning method, dubbed SFU, that lets the global model perform gradient ascent.
arXiv Detail & Related papers (2023-02-24T04:29:44Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
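A minimal sketch of contrastive distillation over shared representations, using a standard InfoNCE loss as a stand-in for the paper's exact objective:

```python
# Hedged sketch: align a client's representation of each sample with a
# peer's representation of the same sample (positive) against the other
# samples in the batch (negatives).
import torch
import torch.nn.functional as F

def contrastive_distill_loss(z_client, z_peer, tau=0.1):
    z1 = F.normalize(z_client, dim=1)
    z2 = F.normalize(z_peer, dim=1)
    logits = z1 @ z2.t() / tau           # (B, B) similarity matrix
    targets = torch.arange(z1.size(0))   # positives on the diagonal
    return F.cross_entropy(logits, targets)
```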
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- Federated Unlearning with Knowledge Distillation [9.666514931140707]
Federated Learning (FL) is designed to protect the data privacy of each client during the training process.
With recent legislation on the right to be forgotten, it is essential for the FL model to be able to forget what it has learned from each client.
We propose a novel federated unlearning method to eliminate a client's contribution by subtracting the accumulated historical updates from the model.
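A hedged sketch of the two stages: subtract the client's accumulated updates via state-dict arithmetic, then distill from the old global model on unlabeled proxy data to recover accuracy. The KL-based repair loss is an illustrative choice:

```python
# Hedged sketch: remove a client by update subtraction, then repair by
# distillation from the pre-unlearning global model.
import torch
import torch.nn.functional as F

def subtract_client(global_state, client_update_sum):
    return {k: v - client_update_sum[k] for k, v in global_state.items()}

def distill_repair(student, teacher, proxy_loader, lr=1e-3, T=2.0, epochs=1):
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    teacher.eval()
    for _ in range(epochs):
        for x, _ in proxy_loader:       # labels unused: unlabeled proxy data
            with torch.no_grad():
                t = F.log_softmax(teacher(x) / T, dim=1)
            s = F.log_softmax(student(x) / T, dim=1)
            loss = F.kl_div(s, t, log_target=True, reduction="batchmean")
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student
```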
arXiv Detail & Related papers (2022-01-24T03:56:20Z)
- Machine Unlearning of Features and Labels [72.81914952849334]
We propose the first scenarios for unlearning features and labels in machine learning models.
Our approach builds on the concept of influence functions and realizes unlearning through closed-form updates of model parameters.
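A minimal NumPy sketch of the closed-form influence update for removing one of n training points, feasible when the Hessian of the training loss can be formed explicitly; for large models the influence-function literature uses Hessian-vector approximations instead:

```python
# Hedged sketch: one Newton-style closed-form update that approximates
# retraining without a single training point z.
import numpy as np

def unlearn_point(theta, hessian, grad_z, n):
    """theta' ~= theta + H^{-1} grad_z / n removes z's influence, where
    `hessian` is the Hessian of the average training loss at theta and
    `grad_z` is the loss gradient at the removed point."""
    return theta + np.linalg.solve(hessian, grad_z) / n
```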
arXiv Detail & Related papers (2021-08-26T04:42:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.