Scale-free Photo-realistic Adversarial Pattern Attack
- URL: http://arxiv.org/abs/2208.06222v1
- Date: Fri, 12 Aug 2022 11:25:39 GMT
- Title: Scale-free Photo-realistic Adversarial Pattern Attack
- Authors: Xiangbo Gao, Weicheng Xie, Minmin Liu, Cheng Luo, Qinliang Lin, Linlin Shen, Keerthy Kusumam, Siyang Song
- Abstract summary: Generative Adversarial Networks (GAN) can partially address this problem by synthesizing a more semantically meaningful texture pattern.
In this paper, we propose a scale-free generation-based attack algorithm that synthesizes semantically meaningful adversarial patterns globally to images with arbitrary scales.
- Score: 20.818415741759512
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional pixel-wise image attack algorithms suffer from poor robustness to
defense algorithms, i.e., the attack strength degrades dramatically when
defense algorithms are applied. Although Generative Adversarial Networks (GAN)
can partially address this problem by synthesizing a more semantically
meaningful texture pattern, the main limitation is that existing generators can
only generate images of a specific scale. In this paper, we propose a
scale-free generation-based attack algorithm that synthesizes semantically
meaningful adversarial patterns globally to images with arbitrary scales. Our
generative attack approach consistently outperforms the state-of-the-art
methods across a wide range of attack settings, i.e., the proposed approach
significantly degrades the performance of various image classification, object
detection, and instance segmentation models under different advanced defense methods.
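The listing itself carries no implementation details; the following is a minimal sketch of why a fully convolutional generator can be "scale-free" (no fixed-size layers, so the synthesized pattern matches any input resolution). Layer widths and the eps budget are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch (not the authors' code): a fully convolutional generator
# has no fixed-size layers, so it can emit an adversarial pattern at any
# input resolution. Layer widths and eps are illustrative assumptions.
import torch
import torch.nn as nn

class FullyConvGenerator(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 3, 3, padding=1), nn.Tanh(),  # bounded pattern
        )

    def forward(self, x, eps=8 / 255):
        # Output has the same spatial size as the input, whatever it is.
        return torch.clamp(x + eps * self.body(x), 0.0, 1.0)

g = FullyConvGenerator()
for h, w in [(224, 224), (513, 777)]:   # arbitrary scales, same weights
    adv = g(torch.rand(1, 3, h, w))
    assert adv.shape[-2:] == (h, w)
```

Because every layer is convolutional, the same weights produce a coherent pattern at any resolution, which is the property the abstract highlights.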
Related papers
- Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defenses mainly focus on known attacks, while adversarial robustness to unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID).
We show that MID simultaneously achieves robustness to the imperceptible adversarial perturbations in high-level image classification and attack-suppression in low-level robust image regeneration.
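The summary does not spell out how MID is trained; the following is a generic, hypothetical sketch of meta-learning-style training over a pool of simulated attacks (fgsm/pgd here are common stand-ins), not the authors' actual MID procedure.

```python
# Hypothetical sketch, NOT the authors' MID procedure: treat each known
# attack as a task, train on one sampled attack and also penalize the loss
# under a held-out attack, so robustness is pushed to generalize.
import random
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    x = x.detach().clone().requires_grad_(True)
    grad, = torch.autograd.grad(F.cross_entropy(model(x), y), x)
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def pgd(model, x, y, eps=8 / 255, steps=5):
    adv = x.detach().clone()
    for _ in range(steps):
        adv = fgsm(model, adv, y, eps / steps)
    return (x + (adv - x).clamp(-eps, eps)).clamp(0, 1)

def meta_robust_step(model, opt, x, y, attack_pool=(fgsm, pgd)):
    seen, held_out = random.sample(list(attack_pool), 2)
    loss = (F.cross_entropy(model(seen(model, x, y)), y)
            + F.cross_entropy(model(held_out(model, x, y)), y))
    opt.zero_grad(); loss.backward(); opt.step()
    return float(loss)
```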
arXiv Detail & Related papers (2024-04-04T10:10:38Z)
- SAIF: Sparse Adversarial and Imperceptible Attack Framework [7.025774823899217]
We propose a novel attack technique called Sparse Adversarial and Interpretable Attack Framework (SAIF).
Specifically, we design imperceptible attacks that contain low-magnitude perturbations at a small number of pixels and leverage these sparse attacks to reveal the vulnerability of classifiers.
SAIF computes highly imperceptible and interpretable adversarial examples, and outperforms state-of-the-art sparse attack methods on the ImageNet dataset.
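As a crude stand-in for the idea (SAIF's actual optimizer is not reproduced here), one can combine an l_inf clamp for imperceptibility with a top-k projection for sparsity; eps, k, steps, and lr below are assumptions.

```python
# Crude stand-in (not SAIF's actual optimizer): keep the perturbation both
# low-magnitude (l_inf clamp) and sparse (only the k strongest entries per
# image survive each step). eps, k, steps, lr are assumptions.
import torch
import torch.nn.functional as F

def sparse_attack(model, x, y, eps=0.1, k=100, steps=20, lr=0.05):
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        grad, = torch.autograd.grad(
            F.cross_entropy(model(x + delta), y), delta)
        with torch.no_grad():
            delta += lr * grad.sign()            # ascend the loss
            delta.clamp_(-eps, eps)              # imperceptibility (l_inf)
            flat = delta.abs().flatten(1)        # sparsity: keep top-k only
            thresh = flat.topk(k, dim=1).values[:, -1, None]
            delta *= (flat >= thresh).view_as(delta).float()
    return (x + delta).detach().clamp(0, 1)
```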
arXiv Detail & Related papers (2022-12-14T20:28:50Z)
- Adv-Attribute: Inconspicuous and Transferable Adversarial Attack on Face Recognition [111.1952945740271]
Adversarial Attributes (Adv-Attribute) is designed to generate inconspicuous and transferable attacks on face recognition.
Experiments on the FFHQ and CelebA-HQ datasets show that the proposed Adv-Attribute method achieves the state-of-the-art attacking success rates.
arXiv Detail & Related papers (2022-10-13T09:56:36Z)
- Preemptive Image Robustification for Protecting Users against Man-in-the-Middle Adversarial Attacks [16.017328736786922]
A Man-in-the-Middle adversary maliciously intercepts and perturbs images that web users upload online.
This type of attack can raise severe ethical concerns on top of simple performance degradation.
We devise a novel bi-level optimization algorithm that finds points in the vicinity of natural images that are robust to adversarial perturbations.
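A minimal sketch of that bi-level structure, with made-up hyperparameters: the inner loop crafts the worst-case perturbation at the current image, and the outer loop nudges the image so that this worst case becomes harmless, while a proximity term keeps it visually close to the original upload.

```python
# Minimal sketch of the bi-level idea (hyperparameters are made up).
import torch
import torch.nn.functional as F

def inner_pgd(model, x, y, eps=8 / 255, steps=5, lr=2 / 255):
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        grad, = torch.autograd.grad(
            F.cross_entropy(model(x + delta), y), delta)
        with torch.no_grad():
            delta += lr * grad.sign()
            delta.clamp_(-eps, eps)
    return delta.detach()

def robustify(model, x, y, outer_steps=20, lr=1 / 255, lam=1.0):
    x = x.detach()
    xr = x.clone()
    for _ in range(outer_steps):
        delta = inner_pgd(model, xr, y)            # inner maximization
        xr = xr.clone().requires_grad_(True)
        loss = (F.cross_entropy(model(xr + delta), y)
                + lam * (xr - x).pow(2).mean())    # stay near the original
        grad, = torch.autograd.grad(loss, xr)
        xr = (xr - lr * grad.sign()).detach().clamp(0, 1)  # outer minimization
    return xr
```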
arXiv Detail & Related papers (2021-12-10T16:06:03Z)
- Adversarial examples by perturbing high-level features in intermediate decoder layers [0.0]
Instead of perturbing pixels, we use an encoder-decoder representation of the input image and perturb intermediate layers in the decoder.
Our perturbation possesses semantic meaning, such as a longer beak or green tints.
We show that our method modifies key features such as edges and that defense techniques based on adversarial training are vulnerable to our attacks.
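Illustrative sketch only (decoder_head/decoder_tail are assumed names, not the paper's API): split the decoder, perturb the intermediate activation by gradient ascent on the classifier loss, and let the remaining decoder layers turn the semantic nudge back into an image.

```python
# Illustrative sketch: attack an intermediate decoder activation rather
# than pixels; decoder_head/decoder_tail are assumed names.
import torch
import torch.nn.functional as F

def feature_space_attack(encoder, decoder_head, decoder_tail, classifier,
                         x, y, steps=10, lr=0.05):
    with torch.no_grad():
        feat = decoder_head(encoder(x))          # intermediate decoder layer
    feat = feat.clone().requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(classifier(decoder_tail(feat)), y)
        grad, = torch.autograd.grad(loss, feat)
        with torch.no_grad():
            feat += lr * grad                    # semantic-level perturbation
    return decoder_tail(feat).detach().clamp(0, 1)
```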
arXiv Detail & Related papers (2021-10-14T07:08:15Z)
- Detecting and Segmenting Adversarial Graphics Patterns from Images [0.0]
We formulate the defense against such attacks as an artificial graphics pattern segmentation problem.
We evaluate the efficacy of several segmentation algorithms and, based on observation of their performance, propose a new method tailored to this specific problem.
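A schematic reading of that formulation (the paper's tailored method is not reproduced here): train a network to emit a per-pixel mask over the inserted graphics pattern, then suppress the flagged pixels at inference time.

```python
# Schematic only: defense cast as binary segmentation of the pasted pattern.
import torch
import torch.nn.functional as F

def train_step(seg_net, opt, image, pattern_mask):
    # pattern_mask: 1.0 where a graphics pattern was pasted, else 0.0
    logits = seg_net(image)                      # (B, 1, H, W)
    loss = F.binary_cross_entropy_with_logits(logits, pattern_mask)
    opt.zero_grad(); loss.backward(); opt.step()
    return float(loss)

@torch.no_grad()
def defend(seg_net, image):
    mask = (torch.sigmoid(seg_net(image)) > 0.5).float()
    # Crude cleanup: blur the flagged region; proper inpainting would be
    # the more careful choice.
    blurred = F.avg_pool2d(image, 7, stride=1, padding=3)
    return image * (1 - mask) + blurred * mask
```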
arXiv Detail & Related papers (2021-08-20T21:54:39Z)
- Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm [93.80082636284922]
Sparse adversarial attacks can fool deep neural networks (DNNs) by perturbing only a few pixels.
Recent efforts combine this sparsity constraint with an additional l_infty bound on the perturbation magnitudes.
We propose a homotopy algorithm that jointly tackles the sparsity and perturbation-magnitude constraints in one unified framework.
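A toy homotopy-style sketch, not the paper's exact algorithm: gradient steps on the attack loss interleaved with l1 soft-thresholding whose strength decays over iterations, so the perturbation starts out extremely sparse and grows support only as needed; lam0 and the decay rate are assumptions.

```python
# Toy homotopy-style sketch (not the paper's exact algorithm).
import torch
import torch.nn.functional as F

def soft_threshold(z, lam):
    return z.sign() * (z.abs() - lam).clamp(min=0)

def homotopy_attack(model, x, y, eps=0.1, steps=30, lr=0.05, lam0=0.05):
    delta = torch.zeros_like(x, requires_grad=True)
    for t in range(steps):
        grad, = torch.autograd.grad(
            F.cross_entropy(model(x + delta), y), delta)
        with torch.no_grad():
            lam = lam0 * (0.9 ** t)              # homotopy: relax sparsity
            delta.copy_(soft_threshold(delta + lr * grad, lr * lam))
            delta.clamp_(-eps, eps)              # magnitude bound
    return (x + delta).detach().clamp(0, 1)
```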
arXiv Detail & Related papers (2021-06-10T20:11:36Z)
- Learning to Attack with Fewer Pixels: A Probabilistic Post-hoc Framework for Refining Arbitrary Dense Adversarial Attacks [21.349059923635515]
Deep neural network image classifiers are reported to be susceptible to adversarial evasion attacks.
We propose a probabilistic post-hoc framework that refines given dense attacks by significantly reducing the number of perturbed pixels.
Our framework performs adversarial attacks much faster than existing sparse attacks.
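A simplified, deterministic stand-in for the probabilistic framework: given a dense perturbation that already fools the model, greedily zero out the weakest pixels in chunks and keep each removal only if the example stays adversarial.

```python
# Simplified stand-in for the probabilistic framework.
# x, delta: (1, C, H, W); y: (1,). Pixels with the least perturbation
# energy are zeroed first, in chunks; each removal is kept only if the
# example still fools the model.
import torch

@torch.no_grad()
def refine_sparse(model, x, y, delta, chunk=50):
    energy = delta.abs().sum(dim=1).flatten()    # per-pixel l1 energy
    order = energy.argsort()                     # weakest pixels first
    keep = delta.clone()
    c = delta.shape[1]
    for i in range(0, order.numel(), chunk):
        trial = keep.clone()
        trial.view(1, c, -1)[..., order[i:i + chunk]] = 0
        if model((x + trial).clamp(0, 1)).argmax(1) != y:  # still adversarial
            keep = trial
    return keep
```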
arXiv Detail & Related papers (2020-10-13T02:51:10Z)
- Online Alternate Generator against Adversarial Attacks [144.45529828523408]
Deep learning models are notoriously sensitive to adversarial examples, which are synthesized by adding quasi-imperceptible noises to real images.
We propose a portable defense method, online alternate generator, which does not need to access or modify the parameters of the target networks.
The proposed method works by online synthesizing another image from scratch for an input image, instead of removing or destroying adversarial noises.
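A loose sketch of that re-synthesis idea (architecture and step counts are assumptions): fit a tiny generator from scratch to reproduce the incoming image and classify the reconstruction instead of the raw input; with early stopping, natural structure is fitted before adversarial noise.

```python
# Loose sketch of per-input re-synthesis (not the authors' architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F

def defend(classifier, x, steps=200, lr=1e-3):
    gen = nn.Sequential(                         # tiny per-input generator
        nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
    )
    z = torch.rand_like(x)                       # fixed random input code
    opt = torch.optim.Adam(gen.parameters(), lr=lr)
    for _ in range(steps):                       # stop early: natural structure
        loss = F.mse_loss(gen(z), x)             # is fitted before the noise
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        return classifier(gen(z))
```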
arXiv Detail & Related papers (2020-09-17T07:11:16Z)
- A black-box adversarial attack for poisoning clustering [78.19784577498031]
We propose a black-box adversarial attack for crafting adversarial samples to test the robustness of clustering algorithms.
We show that our attacks are transferable even against supervised algorithms such as SVMs, random forests, and neural networks.
arXiv Detail & Related papers (2020-09-09T18:19:31Z)
- Patch-wise Attack for Fooling Deep Neural Network [153.59832333877543]
We propose a patch-wise iterative algorithm -- a black-box attack against mainstream normally trained and defended models.
We significantly improve the success rate by 9.2% for defense models and 3.7% for normally trained models on average.
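A rough sketch of one patch-wise iterative step, with the kernel size and amplification factor as illustrative assumptions: amplified gradient-sign steps, where the overflow that l_inf clipping would discard is instead redistributed to neighboring pixels with a uniform kernel.

```python
# Rough sketch of a patch-wise iterative step (kernel size and
# amplification factor are illustrative assumptions).
import torch
import torch.nn.functional as F

def patchwise_attack(model, x, y, eps=16 / 255, steps=10, amp=10.0, ksize=3):
    x = x.detach()
    kernel = torch.ones(3, 1, ksize, ksize) / (ksize * ksize)
    step = amp * eps / steps
    adv = x.clone()
    for _ in range(steps):
        adv = adv.clone().requires_grad_(True)
        grad, = torch.autograd.grad(F.cross_entropy(model(adv), y), adv)
        with torch.no_grad():
            adv = adv + step * grad.sign()               # amplified step
            clipped = (adv - x).clamp(-eps, eps)
            overflow = (adv - x) - clipped               # budget excess
            spread = F.conv2d(overflow, kernel, padding=ksize // 2, groups=3)
            adv = (x + (clipped + spread).clamp(-eps, eps)).clamp(0, 1)
    return adv.detach()
```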
arXiv Detail & Related papers (2020-07-14T01:50:22Z)