Towards Imperceptible Universal Attacks on Texture Recognition
- URL: http://arxiv.org/abs/2011.11957v1
- Date: Tue, 24 Nov 2020 08:33:59 GMT
- Title: Towards Imperceptible Universal Attacks on Texture Recognition
- Authors: Yingpeng Deng and Lina J. Karam
- Abstract summary: We show that limiting the perturbation's $l_p$ norm in the spatial domain may not be a suitable way to restrict the perceptibility of universal adversarial perturbations for texture images.
We propose a frequency-tuned universal attack method to compute universal perturbations in the frequency domain.
- Score: 19.79803434998116
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Although deep neural networks (DNNs) have been shown to be susceptible to
image-agnostic adversarial attacks on natural image classification problems,
the effects of such attacks on DNN-based texture recognition have yet to be
explored. As part of our work, we find that limiting the perturbation's $l_p$
norm in the spatial domain may not be a suitable way to restrict the
perceptibility of universal adversarial perturbations for texture images. Based
on the fact that human perception is affected by local visual frequency
characteristics, we propose a frequency-tuned universal attack method to
compute universal perturbations in the frequency domain. Our experiments
indicate that our proposed method can produce less perceptible perturbations
while achieving similar or higher white-box fooling rates on various DNN texture
classifiers and texture datasets as compared to existing universal attack
techniques. We also demonstrate that our approach can improve the attack
robustness against defended models as well as the cross-dataset transferability
for texture recognition problems.
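The abstract describes the key idea (constraining the universal perturbation per frequency band rather than by a spatial $l_p$ norm) without giving the formulation. The following is a minimal, illustrative sketch of that idea using a block DCT; the block size, the linear frequency-dependent budget, and all function names are assumptions for illustration, not the authors' implementation, which the abstract says is tuned to human frequency sensitivity.

```python
import numpy as np
from scipy.fftpack import dct, idct


def block_dct2(x, b=8):
    """2-D DCT applied independently to non-overlapping b x b blocks."""
    x = x.astype(np.float64)
    out = np.zeros_like(x)
    for i in range(0, x.shape[0], b):
        for j in range(0, x.shape[1], b):
            blk = x[i:i + b, j:j + b]
            out[i:i + b, j:j + b] = dct(dct(blk, axis=0, norm='ortho'), axis=1, norm='ortho')
    return out


def block_idct2(X, b=8):
    """Inverse transform of block_dct2."""
    out = np.zeros_like(X)
    for i in range(0, X.shape[0], b):
        for j in range(0, X.shape[1], b):
            blk = X[i:i + b, j:j + b]
            out[i:i + b, j:j + b] = idct(idct(blk, axis=0, norm='ortho'), axis=1, norm='ortho')
    return out


def frequency_budget(b=8, low=1.0, high=8.0):
    """Hypothetical per-coefficient budget: allow larger magnitudes at higher
    frequencies, where distortions are less visible. The linear ramp is an
    illustrative stand-in for a perceptually derived weighting."""
    u, v = np.meshgrid(np.arange(b), np.arange(b), indexing='ij')
    radial = (u + v) / (2.0 * (b - 1))      # 0 at the DC term, 1 at the highest frequency
    return low + (high - low) * radial


def apply_universal_perturbation(img, delta_freq, b=8):
    """Add one image-agnostic perturbation, defined by a single b x b block of
    DCT coefficients, to a grayscale image with values in [0, 255]."""
    img = img.astype(np.float64)
    h, w = img.shape                                     # assumed to be multiples of b
    coeffs = np.tile(delta_freq, (h // b, w // b))       # same coefficients in every block
    budget = np.tile(frequency_budget(b), (h // b, w // b))
    coeffs = np.clip(coeffs, -budget, budget)            # perceptibility constraint in the frequency domain
    return np.clip(img + block_idct2(coeffs, b), 0, 255)
```

A universal attack in this setting would then optimize `delta_freq` over a training set so that `apply_universal_perturbation` fools the target classifier on as many images as possible while staying within the frequency-domain budget.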
Related papers
- Enhancing the Self-Universality for Transferable Targeted Attacks [88.6081640779354]
Our new attack method is based on the observation that highly universal adversarial perturbations tend to be more transferable for targeted attacks.
Instead of optimizing the perturbations over different images, optimizing over different regions to achieve self-universality removes the need for extra data.
With a feature similarity loss, the method makes the features of adversarial perturbations more dominant than those of benign images.
arXiv Detail & Related papers (2022-09-08T11:21:26Z) - Robust Real-World Image Super-Resolution against Adversarial Attacks [115.04009271192211]
Adversarial image samples with quasi-imperceptible noise can threaten deep learning SR models.
We propose a robust deep learning framework for real-world SR that randomly erases potential adversarial noise.
Our proposed method is less sensitive to adversarial attacks and presents more stable SR results than existing models and defenses.
arXiv Detail & Related papers (2022-07-31T13:26:33Z) - Exploring Frequency Adversarial Attacks for Face Forgery Detection [59.10415109589605]
We propose a frequency adversarial attack method against face forgery detectors.
Inspired by the idea of meta-learning, we also propose a hybrid adversarial attack that performs attacks in both the spatial and frequency domains.
arXiv Detail & Related papers (2022-03-29T15:34:13Z) - Frequency-driven Imperceptible Adversarial Attack on Semantic Similarity [22.28011382580367]
Adversarial attack research reveals the vulnerability of learning-based classifiers to carefully crafted perturbations.
We propose a novel algorithm that attacks semantic similarity on feature representations.
For imperceptibility, we introduce a low-frequency constraint that restricts perturbations to high-frequency components (a minimal sketch of such a high-frequency mask appears after this list).
arXiv Detail & Related papers (2022-03-10T04:46:51Z) - On the Real-World Adversarial Robustness of Real-Time Semantic Segmentation Models for Autonomous Driving [59.33715889581687]
The existence of real-world adversarial examples (commonly in the form of patches) poses a serious threat to the use of deep learning models in safety-critical computer vision tasks.
This paper presents an evaluation of the robustness of semantic segmentation models when attacked with different types of adversarial patches.
A novel loss function is proposed to improve the ability of attackers to induce pixel misclassifications.
arXiv Detail & Related papers (2022-01-05T22:33:43Z) - Stereoscopic Universal Perturbations across Different Architectures and Datasets [60.021985610201156]
We study the effect of adversarial perturbations of images on deep stereo matching networks for the disparity estimation task.
We present a method to craft a single set of perturbations that, when added to any stereo image pair in a dataset, can fool a stereo network.
Our perturbations can increase the D1-error (akin to the fooling rate) of state-of-the-art stereo networks from 1% to as much as 87%.
arXiv Detail & Related papers (2021-12-12T02:11:31Z) - Universal Adversarial Perturbations Through the Lens of Deep Steganography: Towards A Fourier Perspective [78.05383266222285]
A human-imperceptible perturbation can be generated to fool a deep neural network (DNN) for most images.
A similar phenomenon has been observed in the deep steganography task, where a decoder network can retrieve a secret image back from a slightly perturbed cover image.
We propose two new variants of universal perturbations: (1) Universal Secret Adversarial Perturbation (USAP) that simultaneously achieves attack and hiding; (2) high-pass UAP (HP-UAP) that is less visible to the human eye.
arXiv Detail & Related papers (2021-02-12T12:26:39Z) - A Study for Universal Adversarial Attacks on Texture Recognition [19.79803434998116]
We show that there exist small image-agnostic/universal perturbations that can fool deep learning models with testing fooling rates above 80% on all tested texture datasets.
The perturbations computed using various attack methods on the tested datasets are generally quasi-imperceptible, containing structured patterns with low, middle, and high frequency components.
arXiv Detail & Related papers (2020-10-04T08:11:11Z) - Frequency-Tuned Universal Adversarial Attacks [19.79803434998116]
We propose a frequency-tuned universal attack method to compute universal perturbations.
We show that our method can achieve a good balance between perceivability and effectiveness in terms of fooling rate.
arXiv Detail & Related papers (2020-03-11T22:52:19Z) - Adversarial Attacks on Convolutional Neural Networks in Facial Recognition Domain [2.4704085162861693]
Adversarial attacks that render Deep Neural Network (DNN) classifiers vulnerable in real life represent a serious threat in autonomous vehicles, malware filters, or biometric authentication systems.
We apply the Fast Gradient Sign Method (FGSM) to introduce perturbations into a facial image dataset and then test the output on a different classifier (a minimal FGSM sketch is included after this list).
We craft a variety of black-box attack algorithms on a facial image dataset, assuming minimal adversarial knowledge.
arXiv Detail & Related papers (2020-01-30T00:25:05Z)
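The "Frequency-driven Imperceptible Adversarial Attack on Semantic Similarity" entry above mentions a low-frequency constraint that keeps perturbations in high-frequency components. A minimal sketch of such a constraint, using a whole-image DCT and an assumed cutoff (not that paper's actual formulation), could look like this:

```python
import numpy as np
from scipy.fftpack import dct, idct


def dct2(x):
    return dct(dct(x, axis=0, norm='ortho'), axis=1, norm='ortho')


def idct2(X):
    return idct(idct(X, axis=0, norm='ortho'), axis=1, norm='ortho')


def keep_high_frequencies(perturbation, cutoff=0.25):
    """Zero the low-frequency DCT coefficients of a perturbation so that only
    high-frequency (less visible) components remain. The cutoff fraction is an
    illustrative choice, not the value used in the cited paper."""
    h, w = perturbation.shape
    coeffs = dct2(perturbation.astype(np.float64))
    u, v = np.meshgrid(np.arange(h), np.arange(w), indexing='ij')
    low_freq = (u / h + v / w) < 2.0 * cutoff   # top-left DCT corner = low frequencies
    coeffs[low_freq] = 0.0
    return idct2(coeffs)
```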
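The last entry applies the Fast Gradient Sign Method (FGSM) and evaluates the perturbed faces on a different classifier. FGSM itself is a standard one-step attack; a minimal PyTorch sketch is shown below, where `source_model`, `target_model`, `images`, `labels`, and the epsilon value are placeholders rather than details taken from that paper.

```python
import torch
import torch.nn.functional as F


def fgsm_attack(model, images, labels, epsilon=8 / 255):
    """Fast Gradient Sign Method: a single signed-gradient step of size epsilon.
    `model` is any differentiable classifier; `images` are expected in [0, 1]."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv = images + epsilon * images.grad.sign()   # step in the sign of the gradient
    return torch.clamp(adv, 0.0, 1.0).detach()


# Black-box transfer test as described in the entry: craft the adversarial
# examples on one model and evaluate them on a different one.
# adv = fgsm_attack(source_model, images, labels)
# transfer_acc = (target_model(adv).argmax(dim=1) == labels).float().mean()
```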
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences.