Attaining Class-level Forgetting in Pretrained Model using Few Samples
- URL: http://arxiv.org/abs/2210.10670v1
- Date: Wed, 19 Oct 2022 15:36:01 GMT
- Title: Attaining Class-level Forgetting in Pretrained Model using Few Samples
- Authors: Pravendra Singh, Pratik Mazumder, Mohammed Asad Karim
- Abstract summary: In the future, some classes may become restricted due to privacy/ethical concerns.
We propose a novel approach to address this problem without affecting the model's prediction power for the remaining classes.
- Score: 18.251805180282346
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In order to address real-world problems, deep learning models are jointly
trained on many classes. However, in the future, some classes may become
restricted due to privacy/ethical concerns, and the restricted class knowledge
has to be removed from the models that have been trained on them. The available
data may also be limited due to privacy/ethical concerns, and re-training the
model will not be possible. We propose a novel approach to address this problem
without affecting the model's prediction power for the remaining classes. Our
approach identifies the model parameters that are highly relevant to the
restricted classes and removes the knowledge regarding the restricted classes
from them using the limited available training data. Our approach is
significantly faster and performs comparably to the model re-trained on the
complete data of the remaining classes.
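A minimal sketch of this kind of selective forgetting, assuming a PyTorch classifier, is given below. The importance criterion (accumulated gradient magnitude on the few available restricted-class samples), the per-tensor pruning fraction, and the short recovery fine-tuning loop are illustrative assumptions, not the authors' exact procedure.

```python
import torch
import torch.nn.functional as F

def forget_restricted_classes(model, restricted_loader, retained_loader,
                              prune_fraction=0.05, finetune_steps=100, lr=1e-4):
    """Illustrative sketch: locate parameters most relevant to the restricted
    classes (here, by gradient magnitude on the few available restricted-class
    samples), dampen them, then briefly fine-tune on the limited retained data."""
    model.train()
    # 1. Accumulate gradient magnitudes w.r.t. the restricted-class samples.
    importance = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, y in restricted_loader:
        model.zero_grad()
        F.cross_entropy(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                importance[n] += p.grad.abs()
    # 2. Zero out the top `prune_fraction` most relevant weights in each tensor.
    with torch.no_grad():
        for n, p in model.named_parameters():
            k = max(1, int(prune_fraction * p.numel()))
            threshold = importance[n].flatten().topk(k).values.min()
            if threshold > 0:  # skip tensors that received no restricted-class gradient
                p.masked_fill_(importance[n] >= threshold, 0.0)
    # 3. Brief fine-tuning on the limited retained-class data to recover accuracy.
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for step, (x, y) in enumerate(retained_loader):
        if step >= finetune_steps:
            break
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()
    return model
```

In practice, prune_fraction and finetune_steps would be tuned so that accuracy on the remaining classes is preserved while performance on the restricted classes is removed.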
Related papers
- Fine-Tuning is Fine, if Calibrated [33.42198023647517]
Fine-tuning a pre-trained model is shown to drastically degrade the model's accuracy in the other classes it had previously learned.
This paper systematically dissects the issue, aiming to answer the fundamental question, "What has been damaged in the fine-tuned model?"
We find that the fine-tuned model neither forgets the relationship among the other classes nor degrades the features to recognize these classes.
arXiv Detail & Related papers (2024-09-24T16:35:16Z)
- Class Machine Unlearning for Complex Data via Concepts Inference and Data Poisoning [15.364125432212711]
In the current AI era, users may request AI companies to delete their data from the training dataset due to privacy concerns.
Machine unlearning is a newly emerged technology that allows the model owner to delete requested training data or an entire class with little effect on the model's performance.
In this paper, we apply the definition of Concept, rather than an image feature or a token of text data, to represent the semantic information of the unlearning class.
arXiv Detail & Related papers (2024-05-24T15:59:17Z)
- Fantastic Gains and Where to Find Them: On the Existence and Prospect of General Knowledge Transfer between Any Pretrained Model [74.62272538148245]
We show that for arbitrary pairings of pretrained models, one model extracts significant data context unavailable in the other.
We investigate if it is possible to transfer such "complementary" knowledge from one model to another without performance degradation.
arXiv Detail & Related papers (2023-10-26T17:59:46Z)
- Dataless Knowledge Fusion by Merging Weights of Language Models [51.8162883997512]
Fine-tuning pre-trained language models has become the prevalent paradigm for building downstream NLP models.
This creates a barrier to fusing knowledge across individual models to yield a better single model.
We propose a dataless knowledge fusion method that merges models in their parameter space.
arXiv Detail & Related papers (2022-12-19T20:46:43Z)
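The parameter-space merging idea in the Dataless Knowledge Fusion entry above can be illustrated with the simplest possible instance: element-wise averaging of several fine-tuned models' weights. The sketch below is a baseline assumption for illustration only; the paper proposes a more refined fusion rule than plain averaging.

```python
import copy
import torch

def average_merge(models):
    """Baseline illustration of dataless merging in parameter space:
    element-wise average of the models' weights (plain averaging is only
    the simplest instance of such fusion)."""
    merged = copy.deepcopy(models[0])
    state_dicts = [m.state_dict() for m in models]
    avg_state = {}
    for key in state_dicts[0]:
        stacked = torch.stack([sd[key].float() for sd in state_dicts], dim=0)
        # Average and cast back to the original dtype (e.g., integer buffers).
        avg_state[key] = stacked.mean(dim=0).to(state_dicts[0][key].dtype)
    merged.load_state_dict(avg_state)
    return merged
```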
- Synthetic Model Combination: An Instance-wise Approach to Unsupervised Ensemble Learning [92.89846887298852]
Consider making a prediction over new test data without any opportunity to learn from a training set of labelled data.
Instead, you are given access to a set of expert models and their predictions, alongside some limited information about the dataset used to train them.
arXiv Detail & Related papers (2022-10-11T10:20:31Z)
- SSSE: Efficiently Erasing Samples from Trained Machine Learning Models [103.43466657962242]
We propose an efficient and effective algorithm, SSSE, for samples erasure.
In certain cases SSSE can erase samples almost as well as the optimal, yet impractical, gold standard of training a new model from scratch with only the permitted data.
arXiv Detail & Related papers (2021-07-08T14:17:24Z)
- Few-Shot Lifelong Learning [35.05196800623617]
Few-Shot Lifelong Learning enables deep learning models to perform lifelong/continual learning on few-shot data.
Our method selects very few parameters from the model for training every new set of classes instead of training the full model.
We experimentally show that our method significantly outperforms existing methods on the miniImageNet, CIFAR-100, and CUB-200 datasets.
arXiv Detail & Related papers (2021-03-01T13:26:57Z)
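The parameter-selection idea in the Few-Shot Lifelong Learning entry above can be sketched as follows. Selecting the trainable weights by magnitude and freezing the rest by masking their gradients is an assumption made for illustration, not necessarily the paper's selection criterion.

```python
import torch

def select_session_parameters(model, train_fraction=0.02):
    """Illustrative sketch: keep only a small fraction of weights trainable for
    a new session of classes and freeze the rest (selection by weight magnitude
    is an assumed, illustrative criterion)."""
    masks = {}
    for name, param in model.named_parameters():
        k = max(1, int(train_fraction * param.numel()))
        threshold = param.detach().abs().flatten().topk(k).values.min()
        masks[name] = param.detach().abs() < threshold  # True = frozen entry
    return masks

def mask_gradients(model, masks):
    """Call after loss.backward(): zero the gradients of frozen entries so that
    only the selected few parameters are updated by the optimizer."""
    for name, param in model.named_parameters():
        if param.grad is not None:
            param.grad[masks[name]] = 0.0
```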
- Meta-Learned Attribute Self-Gating for Continual Generalized Zero-Shot Learning [82.07273754143547]
We propose a meta-continual zero-shot learning (MCZSL) approach to generalizing a model to categories unseen during training.
By pairing self-gating of attributes and scaled class normalization with meta-learning based training, we are able to outperform state-of-the-art results.
arXiv Detail & Related papers (2021-02-23T18:36:14Z)
- Sufficiently Accurate Model Learning for Planning [119.80502738709937]
This paper introduces the constrained Sufficiently Accurate model learning approach.
It provides examples of such problems, and presents a theorem on how close some approximate solutions can be.
The approximate solution quality will depend on the function parameterization, loss and constraint function smoothness, and the number of samples in model learning.
arXiv Detail & Related papers (2021-02-11T16:27:31Z)
- Novelty-Prepared Few-Shot Classification [24.42397780877619]
We propose to use a novelty-prepared loss function, called self-compacting softmax loss (SSL), for few-shot classification.
In experiments on CUB-200-2011 and mini-ImageNet datasets, we show that SSL leads to significant improvement of the state-of-the-art performance.
arXiv Detail & Related papers (2020-03-01T14:44:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.