Learnability Lock: Authorized Learnability Control Through Adversarial
Invertible Transformations
- URL: http://arxiv.org/abs/2202.03576v1
- Date: Thu, 3 Feb 2022 17:38:11 GMT
- Title: Learnability Lock: Authorized Learnability Control Through Adversarial
Invertible Transformations
- Authors: Weiqi Peng, Jinghui Chen
- Abstract summary: This paper introduces and investigates a new concept called "learnability lock" for controlling the model's learnability on a specific dataset with a special key.
We propose an adversarial invertible transformation, which can be viewed as a mapping from image to image, that slightly modifies data samples so that they become "unlearnable" by machine learning models with negligible loss of visual features.
This ensures that the learnability can be easily restored with a simple inverse transformation while remaining difficult to detect or reverse-engineer.
- Score: 9.868558660605993
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Owing much to the revolution of information technology, the recent progress
of deep learning benefits incredibly from the vastly enhanced access to data
available in various digital formats. However, in certain scenarios, people may
not want their data to be used for training commercial models, which has motivated
studies of how to attack the learnability of deep learning models. Previous works on
learnability attacks only consider the goal of preventing unauthorized
exploitation of a specific dataset, but not the process of restoring the
learnability for authorized cases. To tackle this issue, this paper introduces
and investigates a new concept called "learnability lock" for controlling the
model's learnability on a specific dataset with a special key. In particular,
we propose an adversarial invertible transformation, which can be viewed as a
mapping from image to image, that slightly modifies data samples so that they
become "unlearnable" by machine learning models with negligible loss of visual
features. Meanwhile, one can unlock the learnability of the dataset and train
models normally using the corresponding key. The proposed learnability lock
leverages class-wise perturbation that applies a universal transformation
function on data samples of the same label. This ensures that the learnability
can be easily restored with a simple inverse transformation while remaining
difficult to detect or reverse-engineer. We empirically demonstrate the
success and practicability of our method on visual classification tasks.
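To make the lock/unlock workflow concrete, below is a minimal Python sketch of the interface, assuming a toy per-class pointwise affine transform as the secret key. The paper itself crafts the invertible transformation adversarially, so the random key generation, function names, and perturbation budget here are illustrative placeholders rather than the authors' implementation.

```python
import numpy as np

def make_key(num_classes, image_shape, eps=8 / 255, seed=0):
    """Sample a secret per-class invertible pointwise affine transform
    (the "key"): images of class c are mapped to scale[c] * x + shift[c],
    with parameters kept close to identity so the change stays subtle.
    NOTE: a random key is a stand-in for the adversarially learned
    transformation described in the paper."""
    rng = np.random.default_rng(seed)
    scale = rng.uniform(1.0 - eps, 1.0 + eps, size=(num_classes,) + image_shape)
    shift = rng.uniform(-eps, eps, size=(num_classes,) + image_shape)
    return scale, shift

def lock(images, labels, key):
    """Apply the class-wise transform: every sample of the same label gets
    the same perturbation, degrading what a model can learn from the data."""
    scale, shift = key
    return images * scale[labels] + shift[labels]

def unlock(images, labels, key):
    """Exactly invert the transform with the same key, restoring
    learnability for authorized training."""
    scale, shift = key
    return (images - shift[labels]) / scale[labels]

# Usage on a hypothetical CIFAR-10-like batch in [0, 1]:
# key = make_key(num_classes=10, image_shape=(3, 32, 32))
# x_locked = lock(x_train, y_train, key)       # released, "unlearnable" data
# x_restored = unlock(x_locked, y_train, key)  # authorized side recovers x_train
```

A practical version would also keep pixel values in the valid range and optimize the transform so that locked data suppresses the training signal; the sketch only shows how a single key both locks and exactly unlocks the dataset.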
Related papers
- Learn What You Want to Unlearn: Unlearning Inversion Attacks against Machine Unlearning [16.809644622465086]
We conduct the first investigation to understand the extent to which machine unlearning can leak the confidential content of unlearned data.
Under the Machine Learning as a Service setting, we propose unlearning inversion attacks that can reveal the feature and label information of an unlearned sample.
The experimental results indicate that the proposed attack can reveal the sensitive information of the unlearned data.
arXiv Detail & Related papers (2024-04-04T06:37:46Z)
- Premonition: Using Generative Models to Preempt Future Data Changes in Continual Learning [63.850451635362425]
Continual learning requires a model to adapt to ongoing changes in the data distribution.
We show that the combination of a large language model and an image generation model can provide useful premonitions of future data changes.
We find that the backbone of our pre-trained networks can learn representations useful for the downstream continual learning problem.
arXiv Detail & Related papers (2024-03-12T06:29:54Z)
- Corrective Machine Unlearning [22.342035149807923]
We formalize Corrective Machine Unlearning as the problem of mitigating the impact of data affected by unknown manipulations on a trained model.
We find most existing unlearning methods, including retraining-from-scratch without the deletion set, require most of the manipulated data to be identified for effective corrective unlearning.
One approach, Selective Synaptic Dampening, achieves limited success, unlearning adverse effects with just a small portion of the manipulated samples in our setting.
arXiv Detail & Related papers (2024-02-21T18:54:37Z)
- Enhancing Consistency and Mitigating Bias: A Data Replay Approach for Incremental Learning [100.7407460674153]
Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks.
To mitigate the problem, a line of methods proposes to replay data from previously experienced tasks when learning new ones.
However, storing such data is often impractical due to memory constraints or data privacy concerns.
As an alternative, data-free data replay methods synthesize replay samples by inverting them from the classification model.
arXiv Detail & Related papers (2024-01-12T12:51:12Z)
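As a rough illustration of the data-free replay idea in the entry above, the following Python sketch synthesizes a pseudo-sample for a chosen class by optimizing the input of a frozen classifier (model inversion). The stand-in classifier, step count, and pixel regularizer are assumptions for illustration, not that paper's actual procedure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def invert_sample(classifier, target_class, image_shape=(1, 3, 32, 32),
                  steps=200, lr=0.1, pixel_reg=1e-4):
    """Recover a replayable pseudo-sample for `target_class` by optimizing a
    random input so the frozen classifier assigns it high confidence."""
    classifier.eval()
    x = torch.randn(image_shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    target = torch.tensor([target_class])
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(classifier(x), target)
        loss = loss + pixel_reg * x.pow(2).mean()  # mild prior on pixel magnitude
        loss.backward()
        opt.step()
    return x.detach()

# Usage with a stand-in classifier (any module mapping images to logits works):
# model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
# replay_sample = invert_sample(model, target_class=3)
```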
- Robust Machine Learning by Transforming and Augmenting Imperfect Training Data [6.928276018602774]
This thesis explores several data sensitivities of modern machine learning.
We first discuss how to prevent ML from codifying prior human discrimination measured in the training data.
We then discuss the problem of learning from data containing spurious features, which provide predictive fidelity during training but are unreliable upon deployment.
arXiv Detail & Related papers (2023-12-19T20:49:28Z)
- Learn to Unlearn for Deep Neural Networks: Minimizing Unlearning Interference with Gradient Projection [56.292071534857946]
Recent data-privacy laws have sparked interest in machine unlearning.
The challenge is to discard information about the "forget" data without altering knowledge about the remaining dataset.
We adopt a projected-gradient-based learning method, named Projected-Gradient Unlearning (PGU).
We provide empirical evidence that our unlearning method can produce models that behave similarly to models retrained from scratch across various metrics, even when the training dataset is no longer accessible.
arXiv Detail & Related papers (2023-12-07T07:17:24Z)
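The PGU entry above relies on projecting gradient updates so that unlearning does not interfere with knowledge about the retained data. Below is a generic, hedged Python sketch of such an orthogonal projection; the orthonormal basis standing in for directions important to the remaining dataset, and its construction via QR, are illustrative assumptions rather than that paper's exact procedure.

```python
import numpy as np

def project_out(grad, basis):
    """Remove from `grad` the components lying in span(basis) (orthonormal
    columns), so a step along the result leaves those directions untouched."""
    return grad - basis @ (basis.T @ grad)

# Toy example: a basis standing in for directions important to retained data.
rng = np.random.default_rng(0)
basis, _ = np.linalg.qr(rng.standard_normal((100, 5)))  # 100-dim params, rank-5 subspace
g_forget = rng.standard_normal(100)                     # gradient computed on the forget data
g_safe = project_out(g_forget, basis)
assert np.allclose(basis.T @ g_safe, 0.0, atol=1e-8)    # no interference with the subspace
```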
- Segue: Side-information Guided Generative Unlearnable Examples for Facial Privacy Protection in Real World [64.4289385463226]
We propose Segue: Side-information guided generative unlearnable examples.
To improve transferability, we introduce side information such as true labels and pseudo labels.
It can resist JPEG compression, adversarial training, and some standard data augmentations.
arXiv Detail & Related papers (2023-10-24T06:22:37Z)
- Enhancing Multiple Reliability Measures via Nuisance-extended Information Bottleneck [77.37409441129995]
In practical scenarios where training data is limited, many of the predictive signals in the data may instead stem from biases in data acquisition.
We consider an adversarial threat model under a mutual information constraint to cover a wider class of perturbations in training.
We propose an autoencoder-based training to implement the objective, as well as practical encoder designs to facilitate the proposed hybrid discriminative-generative training.
arXiv Detail & Related papers (2023-03-24T16:03:21Z)
- Gated Self-supervised Learning For Improving Supervised Learning [1.784933900656067]
We propose a novel approach to self-supervised learning for image classification that uses several localizable augmentations in combination with a gating method.
Our approach uses flip and channel-shuffle augmentations in addition to rotation, allowing the model to learn rich features from the data.
arXiv Detail & Related papers (2023-01-14T09:32:12Z)
- Machine Unlearning of Features and Labels [72.81914952849334]
We propose the first scenarios for unlearning features and labels in machine learning models.
Our approach builds on the concept of influence functions and realizes unlearning through closed-form updates of model parameters.
arXiv Detail & Related papers (2021-08-26T04:42:24Z)
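Because the entry above realizes unlearning through influence functions and closed-form parameter updates, here is a hedged Python sketch of the standard influence-style removal step for an L2-regularized logistic regression; the model class and regularization constant are illustrative assumptions, and that paper's framework is more general.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def unlearn_points(theta, X, y, remove_idx, lam=1e-2):
    """Approximate retraining without the rows in `remove_idx` via a single
    closed-form (Newton-style) influence update:
        theta' ~= theta + H^{-1} (sum of removed-point gradients) / n,
    where H is the Hessian of the average regularized loss at theta."""
    n, d = X.shape
    p = sigmoid(X @ theta)
    H = (X.T * (p * (1.0 - p))) @ X / n + lam * np.eye(d)   # Hessian at theta
    g_removed = X[remove_idx].T @ (p[remove_idx] - y[remove_idx])  # summed gradients
    return theta + np.linalg.solve(H, g_removed) / n

# Usage on a hypothetical trained model:
# theta_unlearned = unlearn_points(theta_hat, X_train, y_train, remove_idx=[3, 17])
```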