Federated Unlearning via Class-Discriminative Pruning
- URL: http://arxiv.org/abs/2110.11794v1
- Date: Fri, 22 Oct 2021 14:01:42 GMT
- Title: Federated Unlearning via Class-Discriminative Pruning
- Authors: Junxiao Wang, Song Guo, Xin Xie, Heng Qi
- Abstract summary: We propose a method for scrubbing the model clean of information about particular categories.
The method does not require retraining from scratch, nor global access to the data used for training.
Channel pruning is followed by a fine-tuning process to recover the performance of the pruned model.
- Score: 16.657364988432317
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We explore the problem of selectively forgetting categories from trained CNN
classification models in federated learning (FL). Given that the data used
for training cannot be accessed globally in FL, our insights probe deep into
the internal influence of each channel. Through the visualization of feature
maps activated by different channels, we observe that individual channels contribute
unequally to different categories in image classification. Inspired
by this, we propose a method for scrubbing the model clean of information about
particular categories. The method does not require retraining from scratch, nor
global access to the data used for training. Instead, we introduce the concept
of Term Frequency Inverse Document Frequency (TF-IDF) to quantify how
class-discriminative each channel is. Channels with high TF-IDF scores are more
discriminative of the target categories and therefore need to be pruned to unlearn them.
The channel pruning is followed by a fine-tuning process to recover the
performance of the pruned model. Evaluated on CIFAR10 dataset, our method
accelerates the speed of unlearning by 8.9x for the ResNet model, and 7.9x for
the VGG model with no degradation in accuracy, compared to retraining from
scratch. For CIFAR100 dataset, the speedups are 9.9x and 8.4x, respectively. We
envision this work as a complementary block for FL towards compliance with
legal and ethical criteria.
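To make the TF-IDF analogy concrete, here is a minimal Python sketch (an illustration under assumptions, not the authors' implementation). It assumes a matrix of per-class mean channel activations for one convolutional layer, which in the FL setting would be collected locally on each client, and scores every channel with a TF-IDF-style product so that the channels scoring highest for the target class become pruning candidates. All function and variable names are hypothetical.

```python
import numpy as np

def tfidf_channel_scores(act, target_class, eps=1e-8):
    """Illustrative TF-IDF-style scores for the channels of one conv layer.

    act: array of shape [num_classes, num_channels] holding the mean
         feature-map activation of each channel for each class (assumed to
         be gathered on clients during local forward passes).
    Returns one score per channel; higher means more discriminative for the
    target class and hence a stronger candidate for pruning during unlearning.
    """
    num_classes, _ = act.shape

    # "Term frequency": how strongly each channel fires for the target class,
    # relative to the other channels in the same layer.
    tf = act[target_class] / (act[target_class].sum() + eps)

    # "Document frequency": in how many classes a channel is strongly
    # activated (here: above that class's mean activation across channels).
    df = (act > act.mean(axis=1, keepdims=True)).sum(axis=0)

    # "Inverse document frequency": channels firing for many classes are
    # generic; channels firing for few classes are class-specific.
    idf = np.log(num_classes / (1.0 + df))

    return tf * idf

def channels_to_prune(scores, ratio=0.1):
    """Select the top fraction of channels by TF-IDF score."""
    k = max(1, int(len(scores) * ratio))
    return np.argsort(scores)[::-1][:k]
```

Pruning would then zero out or structurally remove the selected filters in the shared model, after which the brief fine-tuning step mentioned in the abstract recovers accuracy on the remaining classes.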
Related papers
- Convolutional Channel-wise Competitive Learning for the Forward-Forward Algorithm [5.1246638322893245]
The Forward-Forward (FF) algorithm has been proposed to alleviate the issues of backpropagation (BP), which is commonly used to train deep neural networks.
We take the main ideas of FF and improve them by leveraging channel-wise competitive learning in the context of convolutional neural networks for image classification tasks.
Our method outperforms recent FF-based models on image classification tasks, achieving testing errors of 0.58%, 7.69%, 21.89%, and 48.77% on MNIST, Fashion-MNIST, CIFAR-10 and CIFAR-100 respectively.
arXiv Detail & Related papers (2023-12-19T23:48:43Z)
- No One Left Behind: Real-World Federated Class-Incremental Learning [111.77681016996202]
The Local-Global Anti-forgetting (LGA) model addresses local and global catastrophic forgetting.
We develop a category-balanced gradient-adaptive compensation loss and a category gradient-induced semantic distillation loss.
It augments perturbed prototype images of new categories collected from local clients via self-supervised prototype augmentation.
arXiv Detail & Related papers (2023-02-02T06:41:02Z)
- No Fear of Heterogeneity: Classifier Calibration for Federated Learning with Non-IID Data [78.69828864672978]
A central challenge in training classification models in real-world federated systems is learning with non-IID data.
We propose a novel and simple algorithm called Classifier Calibration with Virtual Representations (CCVR), which adjusts the classifier using virtual representations sampled from an approximated Gaussian mixture model.
Experimental results demonstrate that CCVR achieves state-of-the-art performance on popular federated learning benchmarks including CIFAR-10, CIFAR-100, and CINIC-10.
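As a rough illustration of the virtual-representation idea summarized above (a sketch under assumptions, not the CCVR implementation), per-class Gaussian statistics of penultimate-layer features could be aggregated on the server and used to draw synthetic features on which only the classifier head is recalibrated; all names below are hypothetical.

```python
import numpy as np

def sample_virtual_features(class_stats, per_class=100, seed=0):
    """Draw virtual penultimate-layer features from per-class Gaussians.

    class_stats: dict mapping class id -> (mean vector, covariance matrix),
    assumed to be estimated from clients' local feature statistics.
    Returns (features, labels) for recalibrating only the classifier head
    while the feature extractor stays frozen.
    """
    rng = np.random.default_rng(seed)
    feats, labels = [], []
    for cls, (mu, cov) in class_stats.items():
        feats.append(rng.multivariate_normal(mu, cov, size=per_class))
        labels.append(np.full(per_class, cls))
    return np.concatenate(feats), np.concatenate(labels)
```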
arXiv Detail & Related papers (2021-06-09T12:02:29Z)
- Carrying out CNN Channel Pruning in a White Box [121.97098626458886]
We conduct channel pruning in a white box.
To model the contribution of each channel to differentiating categories, we develop a class-wise mask for each channel.
This is the first time that CNN interpretability theory has been considered to guide channel pruning.
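A minimal PyTorch-style sketch of what such a class-wise channel mask could look like (illustrative assumptions only, not the paper's code): each class keeps a learnable scale per channel, and channels whose masks stay small for every class are natural pruning candidates.

```python
import torch
import torch.nn as nn

class ClassWiseChannelMask(nn.Module):
    """Illustrative class-wise mask: one learnable scale per (class, channel)."""

    def __init__(self, num_classes, num_channels):
        super().__init__()
        self.mask = nn.Parameter(torch.ones(num_classes, num_channels))

    def forward(self, feature_maps, labels):
        # feature_maps: [batch, channels, H, W]; labels: [batch]
        # Re-weight each sample's channels with the mask row of its class.
        scale = torch.sigmoid(self.mask[labels])  # [batch, channels]
        return feature_maps * scale[:, :, None, None]
```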
arXiv Detail & Related papers (2021-04-24T04:59:03Z)
- Adaptive Class Suppression Loss for Long-Tail Object Detection [49.7273558444966]
We devise a novel Adaptive Class Suppression Loss (ACSL) to improve the detection performance of tail categories.
Our ACSL achieves 5.18% and 5.2% improvements with ResNet50-FPN, and sets a new state of the art.
arXiv Detail & Related papers (2021-04-02T05:12:31Z)
- CC-Loss: Channel Correlation Loss For Image Classification [35.43152123975516]
The channel correlation loss (CC-Loss) is able to constrain the specific relations between classes and channels.
Two different backbone models trained with the proposed CC-Loss outperform the state-of-the-art loss functions on three image classification datasets.
arXiv Detail & Related papers (2020-10-12T05:59:06Z)
- Learning and Exploiting Interclass Visual Correlations for Medical Image Classification [30.88175218665726]
We present the Class-Correlation Learning Network (CCL-Net) to learn interclass visual correlations from given training data.
Instead of letting the network directly learn the desired correlations, we propose to learn them implicitly via distance metric learning of class-specific embeddings.
An intuitive loss based on a geometrical explanation of correlation is designed for bolstering learning of the interclass correlations.
arXiv Detail & Related papers (2020-07-13T13:31:38Z)
- Fine-Grained Visual Classification with Efficient End-to-end Localization [49.9887676289364]
We present an efficient localization module that can be fused with a classification network in an end-to-end setup.
We evaluate the new model on the three benchmark datasets CUB200-2011, Stanford Cars and FGVC-Aircraft.
arXiv Detail & Related papers (2020-05-11T14:07:06Z) - The Devil is in the Channels: Mutual-Channel Loss for Fine-Grained Image
Classification [67.79883226015824]
The key to solving fine-grained image categorization is finding discriminative local regions that correspond to subtle visual traits.
In this paper, we show it is possible to cultivate subtle details without the need for overly complicated network designs or training mechanisms.
The proposed loss function, termed as mutual-channel loss (MC-Loss), consists of two channel-specific components.
arXiv Detail & Related papers (2020-02-11T09:12:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.