Perlin Noise Improve Adversarial Robustness
- URL: http://arxiv.org/abs/2112.13408v1
- Date: Sun, 26 Dec 2021 15:58:28 GMT
- Title: Perlin Noise Improve Adversarial Robustness
- Authors: Chengjun Tang, Kun Zhang, Chunfang Xing, Yong Ding, Zengmin Xu
- Abstract summary: Adversarial examples are specially crafted inputs that can perturb the output of a deep neural network.
Most existing methods for generating adversarial examples require gradient information.
Procedural noise adversarial examples are a new way of generating adversarial examples.
- Score: 9.084544535198509
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Adversarial examples are specially crafted inputs that perturb the output of a
deep neural network so as to produce intentional errors in a learning algorithm running in a
production environment. Most existing methods for generating adversarial examples require
gradient information; even universal perturbations that are independent of the generating
model rely to some extent on gradients. Procedural noise adversarial examples are a new
approach to adversarial example generation that uses computer graphics noise to produce
universal adversarial perturbations quickly, without relying on gradient information.
Combining this with the defensive idea of adversarial training, we use Perlin noise to train
a neural network and obtain a model that can defend against procedural noise adversarial
examples. In combination with fine-tuning methods based on pre-trained models, we obtain
faster training as well as higher accuracy. Our study shows that procedural noise adversarial
examples are defensible, but why procedural noise can generate adversarial examples, and how
to defend against other kinds of procedural noise adversarial examples that may emerge in the
future, remain to be investigated.
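As a minimal, hedged sketch of the gradient-free idea described in the abstract (not the authors' implementation), the Python snippet below generates a Perlin-noise pattern, pushes it through a high-frequency sine colour map, and clips it to an L-infinity budget before adding it to an image. The grid resolution, sine frequency, and the 8/255 budget are illustrative assumptions rather than values reported in the paper.

```python
# Minimal sketch (not the paper's code): a gradient-free Perlin-noise perturbation.
import numpy as np

def perlin_2d(shape, res, rng):
    """Classic 2D Perlin noise: `shape` output pixels, `res` gradient cells
    per axis (shape must be divisible by res). Values roughly in [-1, 1]."""
    d = (shape[0] // res[0], shape[1] // res[1])
    # fractional position of every pixel inside its gradient cell
    xs = np.arange(shape[0]) / d[0]
    ys = np.arange(shape[1]) / d[1]
    grid = np.stack(np.meshgrid(xs, ys, indexing="ij"), axis=-1) % 1
    # random unit gradients on the (res+1) x (res+1) lattice
    angles = 2 * np.pi * rng.random((res[0] + 1, res[1] + 1))
    grads = np.dstack((np.cos(angles), np.sin(angles)))
    tile = lambda g: g.repeat(d[0], axis=0).repeat(d[1], axis=1)
    g00, g10 = tile(grads[:-1, :-1]), tile(grads[1:, :-1])
    g01, g11 = tile(grads[:-1, 1:]), tile(grads[1:, 1:])
    # dot products between cell-corner gradients and offset vectors
    n00 = (grid * g00).sum(-1)
    n10 = (np.dstack((grid[..., 0] - 1, grid[..., 1])) * g10).sum(-1)
    n01 = (np.dstack((grid[..., 0], grid[..., 1] - 1)) * g01).sum(-1)
    n11 = ((grid - 1) * g11).sum(-1)
    t = 6 * grid**5 - 15 * grid**4 + 10 * grid**3   # smoothstep fade
    n0 = n00 * (1 - t[..., 0]) + t[..., 0] * n10
    n1 = n01 * (1 - t[..., 0]) + t[..., 0] * n11
    return np.sqrt(2) * ((1 - t[..., 1]) * n0 + t[..., 1] * n1)

def perlin_perturbation(shape=(224, 224), res=(8, 8), freq=36.0,
                        eps=8 / 255, seed=0):
    """Gradient-free universal perturbation: Perlin noise mapped through a
    high-frequency sine function, clipped to an L-infinity budget `eps`.
    All parameter values here are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    noise = perlin_2d(shape, res, rng)
    pert = np.sign(np.sin(2 * np.pi * freq * noise))   # bandpass-like stripes
    return eps * pert[..., None]                       # broadcast over channels

if __name__ == "__main__":
    delta = perlin_perturbation()                      # same delta for every image
    x = np.random.rand(224, 224, 3)                    # stand-in for a normalised image
    x_adv = np.clip(x + delta, 0.0, 1.0)               # procedural-noise "attack"
    print(delta.shape, float(np.abs(delta).max()))     # (224, 224, 1) ~0.0314
```

Because the perturbation never touches the target model, the same delta can be reused as a universal perturbation or mixed into training batches, which is the spirit of the Perlin-noise adversarial training described in the abstract.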
Related papers
- Blue noise for diffusion models [50.99852321110366]
We introduce a novel and general class of diffusion models taking correlated noise within and across images into account.
Our framework allows introducing correlation across images within a single mini-batch to improve gradient flow.
We perform both qualitative and quantitative evaluations on a variety of datasets using our method.
arXiv Detail & Related papers (2024-02-07T14:59:25Z)
- Robust Unlearnable Examples: Protecting Data Against Adversarial Learning [77.6015932710068]
We propose to make data unlearnable for deep learning models by adding a type of error-minimizing noise.
In this paper, we design new methods to generate robust unlearnable examples that are protected from adversarial training.
Experiments show that the unlearnability brought by robust error-minimizing noise can effectively protect data from adversarial training in various scenarios.
arXiv Detail & Related papers (2022-03-28T07:13:51Z)
- On Procedural Adversarial Noise Attack And Defense [2.5388455804357952]
Adversarial examples can inveigle neural networks into making prediction errors with small perturbations on the input images.
In this paper, we propose two universal adversarial perturbation (UAP) generation methods based on procedural noise functions.
Without changing the semantic representations, the adversarial examples generated by our methods achieve superior attack performance.
arXiv Detail & Related papers (2021-08-10T02:47:01Z)
- Removing Adversarial Noise in Class Activation Feature Space [160.78488162713498]
We propose to remove adversarial noise by implementing a self-supervised adversarial training mechanism in a class activation feature space.
We train a denoising model to minimize the distances between the adversarial examples and the natural examples in the class activation feature space.
Empirical evaluations demonstrate that our method could significantly enhance adversarial robustness in comparison to previous state-of-the-art approaches.
arXiv Detail & Related papers (2021-04-19T10:42:24Z)
- Exponentiated Gradient Reweighting for Robust Training Under Label Noise and Beyond [21.594200327544968]
We present a flexible approach to learning from noisy examples.
Specifically, we treat each training example as an expert and maintain a distribution over all examples (a toy sketch of this reweighting scheme appears after the list below).
Unlike other related methods, our approach handles a general class of loss functions and can be applied to a wide range of noise types and applications.
arXiv Detail & Related papers (2021-04-03T22:54:49Z)
- Improving Transformation-based Defenses against Adversarial Examples with First-order Perturbations [16.346349209014182]
Studies show that neural networks are susceptible to adversarial attacks.
This exposes a potential threat to neural network-based intelligent systems.
We propose a method for counteracting adversarial perturbations to improve adversarial robustness.
arXiv Detail & Related papers (2021-03-08T06:27:24Z)
- Adversarial Examples for Unsupervised Machine Learning Models [71.81480647638529]
Adversarial examples causing evasive predictions are widely used to evaluate and improve the robustness of machine learning models.
We propose a framework of generating adversarial examples for unsupervised models and demonstrate novel applications to data augmentation.
arXiv Detail & Related papers (2021-03-02T17:47:58Z)
- Improved Detection of Adversarial Images Using Deep Neural Networks [2.3993545400014873]
Recent studies indicate that machine learning models used for classification tasks are vulnerable to adversarial examples.
We propose a new approach called Feature Map Denoising to detect the adversarial inputs.
We evaluate detection performance on a mixed dataset containing adversarial examples.
arXiv Detail & Related papers (2020-07-10T19:02:24Z)
- How benign is benign overfitting? [96.07549886487526]
We investigate two causes for adversarial vulnerability in deep neural networks: bad data and (poorly) trained models.
Deep neural networks essentially achieve zero training error, even in the presence of label noise.
We identify label noise as one of the causes for adversarial vulnerability.
arXiv Detail & Related papers (2020-07-08T11:07:10Z)
- Learning to Generate Noise for Multi-Attack Robustness [126.23656251512762]
Adversarial learning has emerged as one of the successful techniques to circumvent the susceptibility of existing methods against adversarial perturbations.
In safety-critical applications, this makes these methods extraneous as the attacker can adopt diverse adversaries to deceive the system.
We propose a novel meta-learning framework that explicitly learns to generate noise to improve the model's robustness against multiple types of attacks.
arXiv Detail & Related papers (2020-06-22T10:44:05Z)
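As a minimal sketch of the exponentiated-gradient reweighting idea from the "Exponentiated Gradient Reweighting for Robust Training Under Label Noise and Beyond" entry above (not the authors' implementation): each training example is treated as an expert, and a distribution over examples is updated multiplicatively from per-example losses so that persistently high-loss (likely mislabelled) examples are down-weighted. The linear model, logistic loss, and the step sizes lr and eta below are assumptions made for this illustration.

```python
# Toy sketch of exponentiated-gradient example reweighting (not the authors' code).
import numpy as np

def eg_reweighted_training(X, y, epochs=100, lr=1.0, eta=0.5):
    """Train a toy linear classifier while maintaining a distribution `p`
    over examples; examples with persistently high loss lose weight."""
    n, d = X.shape
    w = np.zeros(d)                      # model parameters
    p = np.full(n, 1.0 / n)              # distribution over the n "experts"
    for _ in range(epochs):
        margins = y * (X @ w)
        losses = np.log1p(np.exp(-margins))              # per-example logistic loss
        grad = -((p * y / (1.0 + np.exp(margins))) @ X)  # p-weighted loss gradient
        w -= lr * grad                                   # gradient step on the model
        p = p * np.exp(-eta * losses)                    # multiplicative (EG) update
        p /= p.sum()                                     # renormalise to a distribution
    return w, p

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = np.sign(X[:, 0] + 0.1 * rng.normal(size=200))
    y[:20] *= -1                         # flip 10% of the labels (label noise)
    w, p = eg_reweighted_training(X, y)
    print("avg weight, noisy examples:", p[:20].mean())
    print("avg weight, clean examples:", p[20:].mean())
```

On this toy problem the flipped-label examples should end up with a noticeably smaller average weight than the clean ones, which is the intended effect of the reweighting.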