Stable Unlearnable Example: Enhancing the Robustness of Unlearnable
Examples via Stable Error-Minimizing Noise
- URL: http://arxiv.org/abs/2311.13091v2
- Date: Sun, 31 Dec 2023 20:04:25 GMT
- Title: Stable Unlearnable Example: Enhancing the Robustness of Unlearnable
Examples via Stable Error-Minimizing Noise
- Authors: Yixin Liu, Kaidi Xu, Xun Chen, and Lichao Sun
- Abstract summary: The unlearnable example is proposed to significantly degrade the generalization performance of models by adding imperceptible noise to the data.
We introduce stable error-minimizing noise (SEM), which trains the defensive noise against random perturbation instead of the time-consuming adversarial perturbation.
SEM achieves a new state-of-the-art performance on CIFAR-10, CIFAR-100, and ImageNet Subset.
- Score: 31.586389548657205
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The open sourcing of large amounts of image data has promoted
the development of deep learning techniques. Along with this comes the
privacy risk of these
open-source image datasets being exploited by unauthorized third parties to
train deep learning models for commercial or illegal purposes. To avoid the
abuse of public data, a poisoning-based technique, the unlearnable example, is
proposed to significantly degrade the generalization performance of models by
adding a kind of imperceptible noise to the data. To further enhance its
robustness against adversarial training, existing works leverage iterative
adversarial training on both the defensive noise and the surrogate model.
However, it remains unknown whether the robustness of unlearnable examples
comes primarily from the enhancement of the surrogate model or of the
defensive noise. Observing that simply removing the adversarial noise from
the training process of the defensive noise can improve the performance of
robust unlearnable examples, we identify that the surrogate model's
robustness alone contributes to the performance. Furthermore, we find a
negative correlation between the robustness of the defensive noise and the
protection performance, indicating an instability issue with the defensive
noise. Motivated by this, to further boost robust unlearnable examples, we
introduce stable error-minimizing noise (SEM), which trains the defensive
noise against random perturbation instead of the time-consuming adversarial
perturbation, thereby improving the stability of the defensive noise.
Through extensive experiments, we demonstrate
that SEM achieves a new state-of-the-art performance on CIFAR-10, CIFAR-100,
and ImageNet Subset in terms of both effectiveness and efficiency. The code is
available at https://github.com/liuyixin-louis/Stable-Unlearnable-Example.
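The central mechanism described above, optimizing error-minimizing noise against a random perturbation rather than an adversarially crafted one, can be sketched in a few lines of PyTorch. The following is a minimal, hypothetical illustration of a single noise-update step, assuming a surrogate classifier and L-infinity budgets `eps` (defensive noise) and `rho` (random perturbation); the function name, radii, and step size are assumptions for illustration, not the authors' implementation, which is available at the repository above.

```python
import torch
import torch.nn.functional as F

def sem_noise_step(surrogate, x, y, delta, eps=8/255, rho=4/255, step=2/255):
    """One hedged sketch of a stable error-minimizing noise update.

    The inner maximization over an adversarial perturbation is replaced
    by a *random* perturbation sampled from an L_inf ball of radius `rho`,
    which is cheaper and, per the paper, yields more stable defensive noise.
    All hyper-parameters here are illustrative assumptions.
    """
    # Random perturbation stands in for the adversarial one.
    random_pert = torch.empty_like(x).uniform_(-rho, rho)

    delta = delta.detach().requires_grad_(True)
    logits = surrogate(torch.clamp(x + delta + random_pert, 0.0, 1.0))
    loss = F.cross_entropy(logits, y)
    grad = torch.autograd.grad(loss, delta)[0]

    # Error-minimizing update: step in the direction that *reduces* the loss,
    # then project back into the imperceptibility budget.
    delta = (delta - step * grad.sign()).clamp(-eps, eps)
    return delta.detach()
```

In a full pipeline, a step like this would typically be repeated over the dataset and interleaved with updates to the surrogate model.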
Related papers
- Confidence-aware Denoised Fine-tuning of Off-the-shelf Models for Certified Robustness [56.2479170374811]
We introduce Fine-Tuning with Confidence-Aware Denoised Image Selection (FT-CADIS).
FT-CADIS is inspired by the observation that the confidence of off-the-shelf classifiers can effectively identify hallucinated images during denoised smoothing.
It has established the state-of-the-art certified robustness among denoised smoothing methods across all $\ell$-adversary radii in various benchmarks.
arXiv Detail & Related papers (2024-11-13T09:13:20Z)
- Robust VAEs via Generating Process of Noise Augmented Data [9.366139389037489]
This paper introduces a novel framework that enhances robustness by regularizing the latent space divergence between original and noise-augmented data.
Our empirical evaluations demonstrate that this approach, termed Robust Augmented Variational Auto-ENcoder (RAVEN), yields superior performance in resisting adversarial inputs.
arXiv Detail & Related papers (2024-07-26T09:55:34Z)
- Semantic Deep Hiding for Robust Unlearnable Examples [33.68037533119807]
Unlearnable examples are proposed to mislead deep learning models and prevent data from unauthorized exploitation.
We propose a Deep Hiding scheme that adaptively hides semantic images enriched with high-level features.
Our proposed method exhibits outstanding robustness for unlearnable examples, demonstrating its efficacy in preventing unauthorized data exploitation.
arXiv Detail & Related papers (2024-06-25T08:05:42Z)
- Advancing the Robustness of Large Language Models through Self-Denoised Smoothing [50.54276872204319]
Large language models (LLMs) have achieved significant success, but their vulnerability to adversarial perturbations has raised considerable concerns.
We propose to leverage the multitasking nature of LLMs to first denoise the noisy inputs and then make predictions based on these denoised versions.
Unlike previous denoised smoothing techniques in computer vision, which require training a separate denoising model to enhance robustness, our method offers significantly better efficiency and flexibility; a toy sketch of the denoise-then-predict procedure follows this entry.
arXiv Detail & Related papers (2024-04-18T15:47:00Z)
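The denoise-then-predict idea summarized above can be illustrated with a short sketch. Here `llm` is a hypothetical callable mapping a prompt string to a completion string, and the masking rate, prompts, and majority-vote readout are illustrative assumptions in the spirit of randomized (denoised) smoothing, not the paper's exact procedure.

```python
import random
from collections import Counter

def self_denoised_predict(llm, text, num_copies=5, mask_rate=0.3, seed=None):
    """Hedged sketch of self-denoised smoothing with a single LLM.

    The same model is used twice per copy: once to reconstruct ("denoise")
    a randomly masked version of the input, once to classify the result;
    the final label is the majority vote over all copies.
    """
    rng = random.Random(seed)
    words, votes = text.split(), []
    for _ in range(num_copies):
        # Randomly mask a fraction of tokens to emulate smoothing noise.
        noisy = " ".join(w if rng.random() > mask_rate else "[MASK]" for w in words)
        denoised = llm("Fill in every [MASK] token: " + noisy)
        votes.append(llm("Answer 'positive' or 'negative' for: " + denoised).strip().lower())
    return Counter(votes).most_common(1)[0][0]
```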
- May the Noise be with you: Adversarial Training without Adversarial Examples [3.4673556247932225]
We investigate the question: Can we obtain adversarially-trained models without training on adversarial examples?
Our proposed approach incorporates inherent stochasticity by embedding Gaussian noise within the layers of the NN model at training time.
Our work contributes adversarially trained networks using a completely different approach, with empirically similar robustness to adversarial training; a minimal sketch of the noise-embedding idea follows this entry.
arXiv Detail & Related papers (2023-12-12T08:22:28Z)
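As a rough illustration of the idea in the entry above, the following sketch adds zero-mean Gaussian noise to intermediate activations during training only; the layer name, placement, and `sigma` value are illustrative assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class GaussianNoiseLayer(nn.Module):
    """Adds zero-mean Gaussian noise to activations at training time only."""
    def __init__(self, sigma: float = 0.1):
        super().__init__()
        self.sigma = sigma

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training:
            return x + self.sigma * torch.randn_like(x)
        return x  # no noise at evaluation time

# Hypothetical usage: interleave noise layers with ordinary layers.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(784, 256), GaussianNoiseLayer(0.1), nn.ReLU(),
    nn.Linear(256, 10),
)
```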
- Evaluating Similitude and Robustness of Deep Image Denoising Models via Adversarial Attack [60.40356882897116]
Deep neural networks (DNNs) have shown superior performance compared to traditional image denoising algorithms.
In this paper, we propose an adversarial attack method named denoising-PGD which can successfully attack all the current deep denoising models.
arXiv Detail & Related papers (2023-06-28T09:30:59Z)
- Beyond Pretrained Features: Noisy Image Modeling Provides Adversarial Defense [52.66971714830943]
Masked image modeling (MIM) has become a prevailing framework for self-supervised visual representation learning.
In this paper, we investigate how this powerful self-supervised learning paradigm can provide adversarial robustness to downstream classifiers.
We propose an adversarial defense method, referred to as De3, by exploiting the pretrained decoder for denoising.
arXiv Detail & Related papers (2023-02-02T12:37:24Z)
- Towards Adversarially Robust Deep Image Denoising [199.2458715635285]
This work systematically investigates the adversarial robustness of deep image denoisers (DIDs).
We propose a novel adversarial attack, namely Observation-based Zero-mean Attack (ObsAtk), to craft adversarial zero-mean perturbations on given noisy images.
To robustify DIDs, we propose hybrid adversarial training (HAT) that jointly trains DIDs with adversarial and non-adversarial noisy data.
arXiv Detail & Related papers (2022-01-12T10:23:14Z)
- Learning to Generate Noise for Multi-Attack Robustness [126.23656251512762]
Adversarial learning has emerged as one of the successful techniques to circumvent the susceptibility of existing methods against adversarial perturbations.
However, the majority of existing defenses are tailored to a single category of adversarial perturbation; in safety-critical applications, this makes such methods extraneous, as the attacker can adopt diverse adversaries to deceive the system.
We propose a novel meta-learning framework that explicitly learns to generate noise to improve the model's robustness against multiple types of attacks.
arXiv Detail & Related papers (2020-06-22T10:44:05Z)