Transferable Adversarial Attack on Image Tampering Localization
- URL: http://arxiv.org/abs/2309.10243v1
- Date: Tue, 19 Sep 2023 01:48:01 GMT
- Title: Transferable Adversarial Attack on Image Tampering Localization
- Authors: Yuqi Wang, Gang Cao, Zijie Lou, Haochen Zhu
- Abstract summary: We propose an adversarial attack scheme to reveal the reliability of image tampering localizers.
A black-box attack is achieved by relying on the transferability of the adversarial examples to different localizers.
- Score: 7.177637468324888
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Evaluating the security of existing digital image tampering
localization algorithms is important for real-world applications. In this
paper, we propose an adversarial attack scheme to reveal the reliability of
such tampering localizers, which are fooled and fail to predict altered
regions correctly. Specifically, optimization-based and gradient-based
adversarial examples are implemented for white-box and black-box attacks.
The adversarial example is optimized via back-propagated gradients, and the
perturbation is added adaptively in the direction of gradient ascent. The
black-box attack is achieved by relying on the transferability of such
adversarial examples to different localizers. Extensive evaluations verify
that the proposed attack sharply reduces localization accuracy while
preserving high visual quality of the attacked images.
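The paper's exact optimization is not reproduced here; the following is a minimal PyTorch sketch of a gradient-ascent (PGD-style) attack on a hypothetical tampering localizer that outputs a per-pixel tampering probability map. The `localizer` interface, loss choice, and budget values are illustrative assumptions; for the black-box case, the resulting adversarial image would be fed to a different, unseen localizer.

```python
import torch
import torch.nn.functional as F

def attack_localizer(localizer, image, tamper_mask, eps=4/255, alpha=1/255, steps=40):
    """Gradient-ascent attack sketch: push the localizer's predicted mask
    away from the ground-truth tampered region while keeping the
    perturbation within an L-infinity budget of eps.
    Assumes `localizer(x)` returns per-pixel tampering probabilities."""
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        pred = localizer(adv)                            # (N, 1, H, W) probabilities
        loss = F.binary_cross_entropy(pred, tamper_mask)
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = adv + alpha * grad.sign()              # step in the gradient-ascent direction
            adv = image + (adv - image).clamp(-eps, eps) # project onto the eps-ball
            adv = adv.clamp(0, 1)
    return adv.detach()

# Black-box use: craft `adv` on a surrogate localizer, then evaluate an unseen
# localizer's predicted mask against `tamper_mask` (e.g., F1 / IoU).
```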
Related papers
- Sampling-based Fast Gradient Rescaling Method for Highly Transferable Adversarial Attacks [18.05924632169541]
We propose a Sampling-based Fast Gradient Rescaling Method (S-FGRM).
Specifically, we use data rescaling to substitute the sign function without extra computational cost.
Our method could significantly boost the transferability of gradient-based attacks and outperform the state-of-the-art baselines.
arXiv Detail & Related papers (2023-07-06T07:52:42Z)
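As a hedged illustration of the rescaling idea summarized above (not the paper's exact S-FGRM procedure), the sketch below replaces the sign of the gradient in an iterative FGSM loop with a per-sample rescaled gradient; the model interface and the rescaling rule are assumptions.

```python
import torch
import torch.nn.functional as F

def ifgsm_rescaled(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Iterative FGSM where the hard sign() of the gradient is replaced by a
    rescaled gradient (normalized to unit maximum magnitude per sample),
    standing in for the data-rescaling idea summarized above."""
    adv = x.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), y)
        g = torch.autograd.grad(loss, adv)[0]
        # Rescale so the largest component has magnitude 1, preserving the
        # relative gradient information that sign() discards.
        peak = g.abs().flatten(1).max(dim=1).values.clamp_min(1e-12)
        g_rescaled = g / peak.view(-1, 1, 1, 1)
        with torch.no_grad():
            adv = adv + alpha * g_rescaled
            adv = x + (adv - x).clamp(-eps, eps)
            adv = adv.clamp(0, 1)
    return adv.detach()
```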
- Improving Adversarial Transferability with Scheduled Step Size and Dual Example [33.00528131208799]
We show that the transferability of adversarial examples generated by the iterative fast gradient sign method decreases as the number of iterations increases.
We propose a novel strategy, which uses the Scheduled step size and the Dual example (SD) to fully utilize the adversarial information near the benign sample.
Our proposed strategy can be easily integrated with existing adversarial attack methods for better adversarial transferability.
arXiv Detail & Related papers (2023-01-30T15:13:46Z)
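Only the scheduled step-size component is sketched below, under the assumption of a simple linearly decaying schedule; the dual-example mechanism mentioned in the summary is omitted.

```python
import torch
import torch.nn.functional as F

def ifgsm_scheduled(model, x, y, eps=8/255, steps=10):
    """Iterative sign-gradient attack with a decaying step-size schedule
    (larger early steps, smaller late steps). The paper's dual-example
    bookkeeping is not reproduced in this sketch."""
    adv = x.clone().detach()
    # Linearly decaying schedule whose steps sum to the total budget eps.
    weights = torch.linspace(2.0, 0.5, steps)
    alphas = eps * weights / weights.sum()
    for alpha in alphas:
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), y)
        g = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = (adv + alpha * g.sign()).clamp(0, 1)
            adv = x + (adv - x).clamp(-eps, eps)
    return adv.detach()
```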
- Enhancing the Self-Universality for Transferable Targeted Attacks [88.6081640779354]
Our new attack method is based on the observation that highly universal adversarial perturbations tend to be more transferable for targeted attacks.
Instead of optimizing the perturbation across different images, optimizing it on different regions of a single image to achieve self-universality removes the need for extra data.
With the feature similarity loss, our method makes the features from adversarial perturbations more dominant than those of benign images.
arXiv Detail & Related papers (2022-09-08T11:21:26Z)
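A rough sketch of the self-universality idea: the targeted loss on the full adversarial image is combined with a feature-similarity term between that image and a random local crop of it. The feature extractor, crop size, and loss weighting are assumptions rather than the paper's configuration.

```python
import torch
import torch.nn.functional as F

def self_universal_step(model, feat_extractor, adv, target, x, eps=16/255, alpha=2/255, lam=1.0):
    """One targeted attack step: cross-entropy toward `target` on the full
    adversarial image plus cosine similarity between features of the full
    image and a random local crop, encouraging region-agnostic
    ('self-universal') perturbations."""
    adv = adv.clone().detach().requires_grad_(True)
    n, c, h, w = adv.shape
    # Random local crop, resized back to the model's input resolution.
    ch, cw = h // 2, w // 2
    top = torch.randint(0, h - ch, (1,)).item()
    left = torch.randint(0, w - cw, (1,)).item()
    crop = F.interpolate(adv[:, :, top:top + ch, left:left + cw],
                         size=(h, w), mode='bilinear', align_corners=False)

    cls_loss = F.cross_entropy(model(adv), target)              # targeted: minimize
    f_global, f_local = feat_extractor(adv), feat_extractor(crop)
    sim = F.cosine_similarity(f_global.flatten(1), f_local.flatten(1)).mean()
    loss = cls_loss - lam * sim                                  # reward high feature similarity
    g = torch.autograd.grad(loss, adv)[0]
    with torch.no_grad():
        adv = adv - alpha * g.sign()                             # descend for the targeted objective
        adv = x + (adv - x).clamp(-eps, eps)
    return adv.clamp(0, 1).detach()
```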
- Adaptive Perturbation for Adversarial Attack [50.77612889697216]
We propose a new gradient-based attack method for adversarial examples.
We use the exact gradient direction with a scaling factor for generating adversarial perturbations.
Our method exhibits higher transferability and outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2021-11-27T07:57:41Z)
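A minimal sketch of the idea summarized above: step along the exact gradient direction scaled by a factor instead of taking its element-wise sign; the normalization choice and hyperparameters are assumptions.

```python
import torch
import torch.nn.functional as F

def adaptive_gradient_attack(model, x, y, eps=8/255, scale=2/255, steps=10):
    """Iterative attack that steps along the exact gradient direction
    (L2-normalized per sample) multiplied by a scaling factor, rather than
    taking the element-wise sign of the gradient."""
    adv = x.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), y)
        g = torch.autograd.grad(loss, adv)[0]
        norm = g.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
        with torch.no_grad():
            adv = adv + scale * g / norm           # exact direction, scaled step
            adv = x + (adv - x).clamp(-eps, eps)   # keep within the L-inf budget
            adv = adv.clamp(0, 1)
    return adv.detach()
```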
- A Perceptual Distortion Reduction Framework for Adversarial Perturbation Generation [58.6157191438473]
We propose a perceptual distortion reduction framework to tackle the perceptual distortion of adversarial examples from two perspectives.
We propose a perceptual distortion constraint and add it to the objective function of the adversarial attack to jointly optimize perceptual distortion and attack success rate.
arXiv Detail & Related papers (2021-05-01T15:08:10Z)
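The joint-objective idea can be sketched as below, with plain MSE standing in for a true perceptual distortion metric (e.g. LPIPS); the framework's actual constraint and optimizer are not reproduced.

```python
import torch
import torch.nn.functional as F

def perceptual_constrained_attack(model, x, y, lam=10.0, lr=0.01, steps=100):
    """Optimize a perturbation that jointly maximizes the attack loss and
    minimizes a distortion term; MSE is only a stand-in for the perceptual
    distortion measure used by the framework."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv = (x + delta).clamp(0, 1)
        attack_loss = -F.cross_entropy(model(adv), y)   # negative CE: push toward misclassification
        distortion = F.mse_loss(adv, x)                 # stand-in perceptual distortion term
        loss = attack_loss + lam * distortion
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + delta.detach()).clamp(0, 1)
```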
- Gradient-based Adversarial Attacks against Text Transformers [96.73493433809419]
We propose the first general-purpose gradient-based attack against transformer models.
We empirically demonstrate that our white-box attack attains state-of-the-art attack performance on a variety of natural language tasks.
arXiv Detail & Related papers (2021-04-15T17:43:43Z)
- Adversarial Examples Detection beyond Image Space [88.7651422751216]
We find that there exists a consistent relationship between perturbations and prediction confidence, which guides us to detect few-perturbation attacks from the perspective of prediction confidence.
We propose a detection method beyond the image space based on a two-stream architecture, in which the image stream focuses on pixel artifacts and the gradient stream copes with confidence artifacts.
arXiv Detail & Related papers (2021-02-23T09:55:03Z)
- Random Transformation of Image Brightness for Adversarial Attack [5.405413975396116]
Deep neural networks are vulnerable to adversarial examples, which are crafted by adding small, human-imperceptible perturbations to the original images.
We propose an adversarial example generation method based on random brightness transformation, which can be integrated with the Fast Gradient Sign Method.
Our method has a higher success rate for black-box attacks than other attack methods based on data augmentation.
arXiv Detail & Related papers (2021-01-12T07:00:04Z)
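A minimal sketch of the input-transformation idea: each gradient is computed on a copy of the current adversarial image whose brightness has been randomly scaled; the transformation range and the iterative FGSM backbone are assumptions.

```python
import torch
import torch.nn.functional as F

def brightness_ifgsm(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Iterative FGSM where each gradient is computed on a randomly
    brightness-scaled copy of the current adversarial image, a simple
    input transformation intended to improve black-box transferability."""
    adv = x.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        scale = torch.empty(x.size(0), 1, 1, 1, device=x.device).uniform_(0.8, 1.2)
        transformed = (adv * scale).clamp(0, 1)        # random brightness change
        loss = F.cross_entropy(model(transformed), y)
        g = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = adv + alpha * g.sign()
            adv = x + (adv - x).clamp(-eps, eps)
            adv = adv.clamp(0, 1)
    return adv.detach()
```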
- Local Black-box Adversarial Attacks: A Query Efficient Approach [64.98246858117476]
Adversarial attacks have threatened the application of deep neural networks in security-sensitive scenarios.
We propose a novel framework that perturbs only the discriminative areas of clean examples within a limited number of queries in black-box attacks.
We conduct extensive experiments to show that our framework can significantly improve query efficiency during black-box perturbation with a high attack success rate.
arXiv Detail & Related papers (2021-01-04T15:32:16Z)
- Perception Improvement for Free: Exploring Imperceptible Black-box Adversarial Attacks on Image Classification [27.23874129994179]
White-box adversarial attacks can fool neural networks with small perturbations, especially for large-size images.
Keeping successful adversarial perturbations imperceptible is especially challenging for transfer-based black-box adversarial attacks.
We propose structure-aware adversarial attacks by generating adversarial images based on psychological perceptual models.
arXiv Detail & Related papers (2020-10-30T07:17:12Z)
- Detecting Patch Adversarial Attacks with Image Residuals [9.169947558498535]
A discriminator is trained on image residuals to distinguish between clean and adversarial samples.
We show that the obtained residuals act as a digital fingerprint for adversarial attacks.
Results show that the proposed detection method generalizes to previously unseen, stronger attacks.
arXiv Detail & Related papers (2020-02-28T01:28:22Z)
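A hedged sketch of the residual idea: subtract a blurred (denoised) copy of the image to obtain a residual and train a small discriminator on residuals; the denoiser, discriminator architecture, and training details are assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualDiscriminator(nn.Module):
    """Small CNN that classifies image residuals as clean (0) or adversarial (1)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2),
        )

    def forward(self, residual):
        return self.net(residual)

def image_residual(x, kernel_size=5):
    """Residual = image minus a blurred (denoised) version; adversarial
    patches tend to leave a characteristic high-frequency fingerprint here."""
    pad = kernel_size // 2
    blurred = F.avg_pool2d(F.pad(x, (pad,) * 4, mode='reflect'), kernel_size, stride=1)
    return x - blurred

# Training sketch: given batches of clean and attacked images, compute
# logits = ResidualDiscriminator()(image_residual(images)) and train with
# cross-entropy against labels {0: clean, 1: adversarial}.
```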