Heterogeneous Decentralized Machine Unlearning with Seed Model
Distillation
- URL: http://arxiv.org/abs/2308.13269v2
- Date: Mon, 28 Aug 2023 08:09:52 GMT
- Title: Heterogeneous Decentralized Machine Unlearning with Seed Model
Distillation
- Authors: Guanhua Ye, Tong Chen, Quoc Viet Hung Nguyen, Hongzhi Yin
- Abstract summary: Information security legislation endowed users with unconditional rights to be forgotten by trained machine learning models.
We design a decentralized unlearning framework called HDUS, which uses distilled seed models to construct erasable ensembles for all clients.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As recent information security legislation grants users the
unconditional right to be forgotten by any trained machine learning model,
personalized IoT service providers must take unlearning functionality into
consideration. The most straightforward way to unlearn a user's contribution
is to retrain the model from its initial state, which is unrealistic in
high-throughput applications with frequent unlearning requests. Although some
machine unlearning frameworks have been proposed to speed up retraining, they
do not fit decentralized learning scenarios. In this paper, we design a
decentralized unlearning framework called HDUS, which uses distilled seed
models to construct erasable ensembles for all clients. Moreover, the
framework is compatible with heterogeneous on-device models, offering
stronger scalability in real-world applications. Extensive experiments on
three real-world datasets show that HDUS achieves state-of-the-art
performance.
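The abstract's key idea is that each client keeps lightweight "seed" models distilled from its neighbors and ensembles them with its own model, so forgetting a client only requires discarding its seed model rather than retraining. A minimal sketch of that erasable-ensemble idea might look like the following; the class and method names are illustrative assumptions, not the authors' implementation:

```python
# Illustrative sketch of an erasable ensemble in the spirit of HDUS:
# each client holds its own full model plus distilled seed models from
# neighbors; unlearning a neighbor = dropping its seed, no retraining.
# All names here are hypothetical, not the paper's code.

class Client:
    def __init__(self, client_id, local_model):
        self.client_id = client_id
        self.local_model = local_model   # full on-device model (any callable)
        self.seed_models = {}            # neighbor_id -> distilled seed model

    def receive_seed(self, neighbor_id, seed_model):
        # Store a lightweight distilled stand-in for a neighbor's model.
        self.seed_models[neighbor_id] = seed_model

    def forget(self, neighbor_id):
        # Unlearning a neighbor's contribution: discard its seed model.
        self.seed_models.pop(neighbor_id, None)

    def predict(self, x):
        # Ensemble the local model with all retained neighbor seeds.
        outputs = [self.local_model(x)]
        outputs += [seed(x) for seed in self.seed_models.values()]
        return sum(outputs) / len(outputs)

# Toy usage with scalar "models":
a = Client("A", lambda x: 1.0 * x)
a.receive_seed("B", lambda x: 3.0 * x)
print(a.predict(2.0))   # averages 2.0 and 6.0 -> 4.0
a.forget("B")
print(a.predict(2.0))   # only the local model remains -> 2.0
```

The point of the structure is that no client ever shares its full model, and an unlearning request touches only a dictionary entry on each neighbor, which is why no retraining pass is needed.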
Related papers
- Update Selective Parameters: Federated Machine Unlearning Based on Model Explanation [46.86767774669831]
We propose a more effective and efficient federated unlearning scheme based on the concept of model explanation.
We select the most influential channels within an already-trained model for the data that need to be unlearned.
arXiv Detail & Related papers (2024-06-18T11:43:20Z)
- Boosting Continual Learning of Vision-Language Models via Mixture-of-Experts Adapters [65.15700861265432]
We present a parameter-efficient continual learning framework to alleviate long-term forgetting in incremental learning with vision-language models.
Our approach involves the dynamic expansion of a pre-trained CLIP model, through the integration of Mixture-of-Experts (MoE) adapters.
To preserve the zero-shot recognition capability of vision-language models, we introduce a Distribution Discriminative Auto-Selector.
arXiv Detail & Related papers (2024-03-18T08:00:23Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the data issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- Exploiting Features and Logits in Heterogeneous Federated Learning [0.2538209532048866]
Federated learning (FL) facilitates the management of edge devices to collaboratively train a shared model.
We propose a novel data-free FL method that supports heterogeneous client models by managing features and logits.
In an extended variant, Velo, the server additionally hosts a conditional VAE, which is used to train mid-level features and generate synthetic features conditioned on the labels.
arXiv Detail & Related papers (2022-10-27T15:11:46Z)
- Continual-Learning-as-a-Service (CLaaS): On-Demand Efficient Adaptation of Predictive Models [17.83007940710455]
Two main future trends for companies that want to build machine learning-based applications are real-time inference and continual updating.
This paper defines a novel software service and model delivery infrastructure termed Continual-Learning-as-a-Service (CLaaS) to address these issues.
It supports model updating and validation tools for data scientists without requiring an on-premises solution, in an efficient, stateful, and easy-to-use manner.
arXiv Detail & Related papers (2022-06-14T16:22:54Z)
- SOLIS -- The MLOps journey from data acquisition to actionable insights [62.997667081978825]
Existing approaches, however, do not supply the procedures and pipelines needed to actually deploy machine learning capabilities in real production-grade systems.
In this paper we present a unified deployment pipeline and freedom-to-operate approach that supports all requirements while using basic cross-platform tensor frameworks and script language engines.
arXiv Detail & Related papers (2021-12-22T14:45:37Z)
- Federated Action Recognition on Heterogeneous Embedded Devices [16.88104153104136]
In this work, we enable clients with limited computing power to perform action recognition, a computationally heavy task.
We first perform model compression at the central server through knowledge distillation on a large dataset.
Fine-tuning is required because the limited data in smaller datasets is not adequate for action recognition models to learn complex temporal features.
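The compression step in this entry relies on knowledge distillation at the server. As a reminder of the mechanism only, a minimal soft-target distillation loss in the standard Hinton style might look like the sketch below; this is a generic illustration, not the paper's code, and the temperature value is an assumed default:

```python
# Generic soft-target knowledge-distillation loss (illustrative sketch,
# not taken from the paper): cross-entropy between temperature-softened
# teacher and student output distributions.
import math

def softmax(logits, temperature=1.0):
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Softening with temperature > 1 exposes the teacher's "dark
    # knowledge" in the relative probabilities of wrong classes.
    teacher_p = softmax(teacher_logits, temperature)
    student_p = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(teacher_p, student_p))
```

For a fixed teacher, this loss is minimized when the student's softened distribution matches the teacher's, which is what lets a small on-device model inherit behavior from a large server-side model.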
arXiv Detail & Related papers (2021-07-18T02:33:24Z)
- Self-Damaging Contrastive Learning [92.34124578823977]
Unlabeled data in reality is commonly imbalanced and shows a long-tail distribution.
This paper proposes a principled framework called Self-Damaging Contrastive Learning to automatically balance the representation learning without knowing the classes.
Our experiments show that SDCLR significantly improves not only overall accuracies but also balancedness.
arXiv Detail & Related papers (2021-06-06T00:04:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.