On Procedural Adversarial Noise Attack And Defense
- URL: http://arxiv.org/abs/2108.04409v1
- Date: Tue, 10 Aug 2021 02:47:01 GMT
- Title: On Procedural Adversarial Noise Attack And Defense
- Authors: Jun Yan and Xiaoyang Deng and Huilin Yin and Wancheng Ge
- Abstract summary: Adversarial examples can inveigle neural networks into making prediction errors with small perturbations on the input images.
In this paper, we propose two universal adversarial perturbation (UAP) generation methods based on procedural noise functions.
Without changing the semantic representations, the adversarial examples generated via our methods show superior attack performance.
- Score: 2.5388455804357952
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep Neural Networks (DNNs) are vulnerable to adversarial examples, which
can inveigle neural networks into making prediction errors with small
perturbations on the input images. Researchers have devoted much effort to
universal adversarial perturbations (UAPs), which are gradient-free and require
little prior knowledge of data distributions. Procedural adversarial noise
attack is a data-free universal perturbation generation method. In this paper,
we propose two universal adversarial perturbation (UAP) generation methods
based on procedural noise functions: Simplex noise and Worley noise. In our
framework, the shading that disturbs visual classification is generated with
rendering technology. Without changing the semantic representations, the
adversarial examples generated via our methods show superior attack performance.
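No code accompanies this summary, but a minimal sketch of the idea can be given, assuming a Worley (cellular) noise pattern that is normalised into an L-infinity budget and added identically to every input image. The function names, the number of feature points, and the epsilon value below are illustrative assumptions, not the authors' implementation; Simplex noise would be substituted analogously, and the paper's rendering/shading step is only approximated here by the normalisation.

```python
# Hedged sketch (not the authors' code): build a data-free Worley-noise
# perturbation and add it to an image under an L-infinity budget.
import numpy as np

def make_worley_noise(height, width, n_points=20, seed=0):
    """Worley (cellular) noise: each pixel stores the distance to the nearest
    of n_points random feature points, normalised to [0, 1]."""
    rng = np.random.default_rng(seed)
    points = rng.uniform(0.0, 1.0, size=(n_points, 2)) * np.array([height, width])
    ys, xs = np.mgrid[0:height, 0:width]
    coords = np.stack([ys, xs], axis=-1).astype(np.float64)        # (H, W, 2)
    # Distance from every pixel to every feature point; keep the minimum.
    dists = np.linalg.norm(coords[:, :, None, :] - points[None, None, :, :], axis=-1)
    noise = dists.min(axis=-1)
    return (noise - noise.min()) / (noise.max() - noise.min() + 1e-12)

def worley_uap(height, width, epsilon=8 / 255, seed=0):
    """Turn the noise pattern into a universal perturbation: centre it around
    zero, bound it by epsilon in the L-infinity norm, replicate over RGB."""
    noise = make_worley_noise(height, width, seed=seed)
    delta = (2.0 * noise - 1.0) * epsilon                          # in [-eps, eps]
    return np.repeat(delta[:, :, None], 3, axis=-1)                # (H, W, 3)

# Usage: the same perturbation is added to every (normalised) input image.
delta = worley_uap(224, 224, epsilon=8 / 255)
image = np.random.rand(224, 224, 3)        # placeholder for a real test image
adversarial = np.clip(image + delta, 0.0, 1.0)
```

An evaluation in this spirit would tune the noise parameters and report the fooling rate, i.e. the fraction of test images whose predicted class changes once the shared perturbation is added.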
Related papers
- Universal Adversarial Defense in Remote Sensing Based on Pre-trained Denoising Diffusion Models [17.283914361697818]
Deep neural networks (DNNs) have risen to prominence as key solutions in numerous AI applications for earth observation (AI4EO)
This paper presents a novel Universal Adversarial Defense approach in Remote Sensing Imagery (UAD-RS)
arXiv Detail & Related papers (2023-07-31T17:21:23Z)
- NoiseCAM: Explainable AI for the Boundary Between Noise and Adversarial Attacks [21.86821880164293]
Adversarial attacks can easily mislead a neural network and lead to wrong decisions.
In this paper, we use the gradient class activation map (GradCAM) to analyze the behavior deviation of the VGG-16 network.
We also propose a novel NoiseCAM algorithm that integrates information from globally and pixel-level weighted class activation maps.
arXiv Detail & Related papers (2023-03-09T22:07:41Z)
- Guided Diffusion Model for Adversarial Purification [103.4596751105955]
Adversarial attacks disturb deep neural networks (DNNs) in various algorithms and frameworks.
We propose a novel purification approach, referred to as guided diffusion model for purification (GDMP)
In comprehensive experiments across various datasets, the proposed GDMP is shown to reduce the perturbations raised by adversarial attacks to a shallow range.
arXiv Detail & Related papers (2022-05-30T10:11:15Z)
- Perlin Noise Improve Adversarial Robustness [9.084544535198509]
Adversarial examples are special inputs that can perturb the output of a deep neural network.
Most of the present methods for generating adversarial examples require gradient information.
Procedural noise adversarial examples are a new way of generating adversarial examples.
arXiv Detail & Related papers (2021-12-26T15:58:28Z)
- Meta Adversarial Perturbations [66.43754467275967]
We show the existence of a meta adversarial perturbation (MAP)
A MAP causes natural images to be misclassified with high probability after the perturbation is updated through only a single gradient-ascent step.
We show that these perturbations are not only image-agnostic, but also model-agnostic, as a single perturbation generalizes well across unseen data points and different neural network architectures.
arXiv Detail & Related papers (2021-11-19T16:01:45Z)
- Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative-based adversarial attacks can get rid of this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely-used models collapse, but also achieve good visual quality.
arXiv Detail & Related papers (2021-07-20T01:55:21Z)
- Removing Adversarial Noise in Class Activation Feature Space [160.78488162713498]
We propose to remove adversarial noise by implementing a self-supervised adversarial training mechanism in a class activation feature space.
We train a denoising model to minimize the distances between the adversarial examples and the natural examples in the class activation feature space.
Empirical evaluations demonstrate that our method could significantly enhance adversarial robustness in comparison to previous state-of-the-art approaches.
arXiv Detail & Related papers (2021-04-19T10:42:24Z)
- Improving Transformation-based Defenses against Adversarial Examples with First-order Perturbations [16.346349209014182]
Studies show that neural networks are susceptible to adversarial attacks.
This exposes a potential threat to neural network-based intelligent systems.
We propose a method for counteracting adversarial perturbations to improve adversarial robustness.
arXiv Detail & Related papers (2021-03-08T06:27:24Z)
- Understanding Adversarial Examples from the Mutual Influence of Images and Perturbations [83.60161052867534]
We analyze adversarial examples by disentangling the clean images and adversarial perturbations, and analyze their influence on each other.
Our results suggest a new perspective towards the relationship between images and universal perturbations.
We are the first to achieve the challenging task of a targeted universal attack without utilizing original training data.
arXiv Detail & Related papers (2020-07-13T05:00:09Z)
- On the Matrix-Free Generation of Adversarial Perturbations for Black-Box Attacks [1.199955563466263]
In this paper, we propose a practical method for generating such adversarial perturbations for black-box attacks.
The attackers generate such perturbation without invoking inner functions and/or accessing the inner states of a deep neural network.
arXiv Detail & Related papers (2020-02-18T00:50:26Z)
- Variational Denoising Network: Toward Blind Noise Modeling and Removal [59.36166491196973]
Blind image denoising is an important yet very challenging problem in computer vision.
We propose a new variational inference method, which integrates both noise estimation and image denoising.
arXiv Detail & Related papers (2019-08-29T15:54:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.