AdvHaze: Adversarial Haze Attack
- URL: http://arxiv.org/abs/2104.13673v1
- Date: Wed, 28 Apr 2021 09:52:25 GMT
- Title: AdvHaze: Adversarial Haze Attack
- Authors: Ruijun Gao, Qing Guo, Felix Juefei-Xu, Hongkai Yu, Wei Feng
- Abstract summary: We introduce a novel adversarial attack method based on haze, which is a common phenomenon in real-world scenery.
Our method can synthesize potentially adversarial haze into an image based on the atmospheric scattering model with high realism.
We demonstrate that the proposed method achieves a high success rate, and holds better transferability across different classification models than the baselines.
- Score: 19.744435173861785
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, adversarial attacks have drawn increasing attention
for their value in evaluating and improving the robustness of machine learning
models, especially neural network models. However, previous attack methods have
mainly focused on applying $l^p$ norm-bounded noise perturbations. In this
paper, we instead introduce a novel adversarial attack method based on haze, a
common phenomenon in real-world scenery. Our method synthesizes potentially
adversarial haze into an image, based on the atmospheric scattering model, with
high realism, and misleads classifiers into predicting an incorrect class. We
conduct experiments on two popular datasets, ImageNet and NIPS 2017, and
demonstrate that the proposed method achieves a high success rate and better
transferability across different classification models than the baselines. We
also visualize the correlation matrices, which inspire us to jointly apply
different perturbations to improve the attack success rate. We hope this work
can boost the development of non-noise-based adversarial attacks and help
evaluate and improve the robustness of DNNs.
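The atmospheric scattering model the abstract relies on can be sketched as follows. This shows only the forward haze-synthesis step, not the paper's adversarial optimization over the haze parameters; the function name and the parameters `beta` (scattering coefficient) and `airlight` (atmospheric light) are illustrative assumptions, not names from the paper.

```python
import numpy as np

def synthesize_haze(image, depth, beta=1.0, airlight=0.9):
    """Render haze onto a clean image with the atmospheric scattering model:
        I(x) = J(x) * t(x) + A * (1 - t(x)),  t(x) = exp(-beta * d(x))
    image:    H x W x 3 array in [0, 1] (clean scene radiance J)
    depth:    H x W array of scene depth d(x)
    beta:     scattering coefficient (haze density), assumed scalar here
    airlight: global atmospheric light A, assumed constant
    """
    t = np.exp(-beta * depth)[..., None]     # transmission map t(x)
    return image * t + airlight * (1.0 - t)  # hazy observation I(x)

# Toy check: zero depth leaves the image unchanged; very distant
# pixels converge to the airlight color.
img = np.full((4, 4, 3), 0.5)
assert np.allclose(synthesize_haze(img, np.zeros((4, 4))), img)
```

An attack in this family would treat quantities such as `beta`, `airlight`, or a parameterized depth map as optimizable variables and ascend the classifier's loss while keeping the rendered haze physically plausible.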
Related papers
- Adversarial Robustification via Text-to-Image Diffusion Models [56.37291240867549]
Adversarial robustness has conventionally been believed to be a challenging property to encode into neural networks.
We develop a scalable and model-agnostic solution to achieve adversarial robustness without using any data.
arXiv Detail & Related papers (2024-07-26T10:49:14Z) - LFAA: Crafting Transferable Targeted Adversarial Examples with
Low-Frequency Perturbations [25.929492841042666]
We present a novel approach to generate transferable targeted adversarial examples.
We exploit the vulnerability of deep neural networks to perturbations on high-frequency components of images.
Our proposed approach significantly outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-10-31T04:54:55Z) - Practical No-box Adversarial Attacks with Training-free Hybrid Image
Transformation [123.33816363589506]
We show the existence of a training-free adversarial perturbation under the no-box threat model.
Motivated by our observation that the high-frequency component (HFC) dominates in low-level features, we attack an image mainly by manipulating its frequency components.
Our method is even competitive to mainstream transfer-based black-box attacks.
arXiv Detail & Related papers (2022-03-09T09:51:00Z) - Learning to Learn Transferable Attack [77.67399621530052]
Transfer adversarial attack is a non-trivial black-box adversarial attack that aims to craft adversarial perturbations on the surrogate model and then apply such perturbations to the victim model.
We propose a Learning to Learn Transferable Attack (LLTA) method, which makes the adversarial perturbations more generalized via learning from both data and model augmentation.
Empirical results on a widely used dataset demonstrate the effectiveness of our attack method, with a 12.85% higher transfer-attack success rate than state-of-the-art methods.
arXiv Detail & Related papers (2021-12-10T07:24:21Z) - Meta Adversarial Perturbations [66.43754467275967]
We show the existence of a meta adversarial perturbation (MAP)
MAP causes natural images to be misclassified with high probability after being updated through only a one-step gradient ascent update.
We show that these perturbations are not only image-agnostic, but also model-agnostic, as a single perturbation generalizes well across unseen data points and different neural network architectures.
arXiv Detail & Related papers (2021-11-19T16:01:45Z) - Delving into Data: Effectively Substitute Training for Black-box Attack [84.85798059317963]
We propose substitute training from a novel perspective, focusing on designing the distribution of data used in the knowledge-stealing process.
Combining these two modules further boosts the consistency between the substitute model and the target model, which greatly improves the effectiveness of the adversarial attack.
arXiv Detail & Related papers (2021-04-26T07:26:29Z) - Learning to Attack: Towards Textual Adversarial Attacking in Real-world
Situations [81.82518920087175]
Adversarial attacks aim to fool deep neural networks with adversarial examples.
We propose a reinforcement learning based attack model, which can learn from attack history and launch attacks more efficiently.
arXiv Detail & Related papers (2020-09-19T09:12:24Z) - Detection Defense Against Adversarial Attacks with Saliency Map [7.736844355705379]
It is well established that neural networks are vulnerable to adversarial examples, which are almost imperceptible to human vision.
Existing defenses tend to harden the robustness of models against adversarial attacks.
We propose a novel method that combines additional noise with an inconsistency strategy to detect adversarial examples.
arXiv Detail & Related papers (2020-09-06T13:57:17Z) - GAP++: Learning to generate target-conditioned adversarial examples [28.894143619182426]
Adversarial examples are perturbed inputs that can pose a serious threat to machine learning models.
We propose a more general-purpose framework which infers target-conditioned perturbations dependent on both input image and target label.
Our method achieves superior performance with single target attack models and obtains high fooling rates with small perturbation norms.
arXiv Detail & Related papers (2020-06-09T07:49:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.