Practical No-box Adversarial Attacks with Training-free Hybrid Image
Transformation
- URL: http://arxiv.org/abs/2203.04607v1
- Date: Wed, 9 Mar 2022 09:51:00 GMT
- Title: Practical No-box Adversarial Attacks with Training-free Hybrid Image
Transformation
- Authors: Qilong Zhang, Chaoning Zhang, Chaoqun Li, Jingkuan Song, Lianli Gao,
Heng Tao Shen
- Abstract summary: We show the existence of a training-free adversarial perturbation under the no-box threat model.
Motivated by our observation that the high-frequency component (HFC) dominates in low-level features, we attack an image mainly by manipulating its frequency components.
Our method is even competitive with mainstream transfer-based black-box attacks.
- Score: 123.33816363589506
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, the adversarial vulnerability of deep neural networks (DNNs)
has attracted increasing attention. Among all the threat models, no-box attacks
are the most practical but extremely challenging, since they neither rely on any
knowledge of the target model or a similar substitute model, nor access the
dataset used to train a new substitute model. Although a recent method has
attempted such an attack in a loose sense, its performance is not good enough
and the computational overhead of training is high. In this paper, we move a
step forward and show the existence of a \textbf{training-free} adversarial
perturbation under the no-box threat model, which can be successfully used to
attack different DNNs in real-time. Motivated by our observation that
the high-frequency component (HFC) dominates in low-level features and plays a
crucial role in classification, we attack an image mainly by manipulating its
frequency components. Specifically, the perturbation is crafted by suppressing
the original HFC and adding noisy HFC. We empirically and
experimentally analyze the requirements of effective noisy HFC and show that it
should be regionally homogeneous, repeating and dense. Extensive experiments on
the ImageNet dataset demonstrate the effectiveness of our proposed no-box
method. It attacks ten well-known models with a success rate of
\textbf{98.13\%} on average, which outperforms state-of-the-art no-box attacks
by \textbf{29.39\%}. Furthermore, our method is even competitive with mainstream
transfer-based black-box attacks.
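To make the idea concrete, below is a minimal, illustrative sketch of a training-free frequency-domain perturbation in the spirit described above: the image's own HFC is suppressed with a low-pass filter, and a noisy HFC that is regionally homogeneous, repeating, and dense is injected. This is not the authors' released implementation; the Gaussian low-pass, the tiled sign pattern, and every hyperparameter below are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hybrid_image_attack(image, eps=16.0, blur_sigma=3.0, tile=8, seed=0):
    """Illustrative no-box perturbation: suppress the image's own high-frequency
    component (HFC) and add a noisy HFC that is regionally homogeneous,
    repeating, and dense. `image` is an HxWx3 float array in [0, 255]."""
    rng = np.random.default_rng(seed)

    # 1. Suppress the original HFC: a Gaussian low-pass keeps only the low-frequency content.
    low_freq = gaussian_filter(image, sigma=(blur_sigma, blur_sigma, 0))

    # 2. Noisy HFC: a small grid of constant-sign cells (regionally homogeneous),
    #    each expanded to a flat tile and repeated over the image (repeating, dense).
    h, w, c = image.shape
    signs = rng.choice([-1.0, 1.0], size=(2, 2, c))        # 2x2 grid of homogeneous cells
    pattern = np.kron(signs, np.ones((tile, tile, 1)))     # each cell becomes a flat tile
    reps = (h // pattern.shape[0] + 1, w // pattern.shape[1] + 1, 1)
    noisy_hfc = np.tile(pattern, reps)[:h, :w, :] * eps

    # 3. Combine and project back into the L_inf epsilon ball around the clean image.
    adv = np.clip(low_freq + noisy_hfc, image - eps, image + eps)
    return np.clip(adv, 0.0, 255.0)
```

Since nothing in this sketch touches a substitute model, training data, or the target network, the perturbation can be generated in real time, which is what makes the no-box setting practical.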
Related papers
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning
Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
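As a rough illustration of what transforming model updates into the frequency domain can look like, here is a hypothetical sketch that applies a DCT to flattened client updates and aggregates only a low-frequency band with a robust statistic. FreqFed's actual transform choices, band selection, and filtering rule are not reproduced here; the helper and its parameters are assumptions for illustration only.

```python
import numpy as np
from scipy.fft import dct, idct

def aggregate_in_frequency_domain(client_updates, keep_ratio=0.1):
    """Hypothetical sketch: move flattened client updates into the frequency
    domain, aggregate only the low-frequency band with a median, and map the
    result back to parameter space. `client_updates` is a list of equal-length
    1-D numpy arrays."""
    updates = np.stack(client_updates)              # (n_clients, n_params)
    coeffs = dct(updates, axis=1, norm="ortho")     # per-client frequency spectra

    k = max(1, int(coeffs.shape[1] * keep_ratio))   # size of the retained low band
    aggregated = np.zeros(coeffs.shape[1])
    aggregated[:k] = np.median(coeffs[:, :k], axis=0)

    return idct(aggregated, norm="ortho")           # aggregated update in parameter space
```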
- Microbial Genetic Algorithm-based Black-box Attack against Interpretable Deep Learning Systems [16.13790238416691]
In white-box environments, interpretable deep learning systems (IDLSes) have been shown to be vulnerable to malicious manipulations.
We propose QuScore, a query-efficient score-based black-box attack against IDLSes that requires no knowledge of the target model or its coupled interpretation model.
arXiv Detail & Related papers (2023-07-13T00:08:52Z)
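A microbial genetic algorithm is a minimal steady-state GA: two random individuals are compared, and the loser copies part of the winner and then mutates. The sketch below shows how such a loop can drive a score-only black-box search for an adversarial perturbation; the fitness oracle, encoding, and hyperparameters are illustrative assumptions rather than QuScore's actual design.

```python
import numpy as np

def microbial_ga_attack(score_fn, image, eps=8.0, pop_size=20, steps=500, seed=0):
    """Illustrative score-based black-box search with a microbial GA. Only the
    scalar score returned by `score_fn(adv_image)` is queried (e.g. the
    probability of a wrong class); no gradients or model internals are used."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-eps, eps, size=(pop_size,) + image.shape)   # candidate perturbations
    fitness = np.array([score_fn(np.clip(image + p, 0, 255)) for p in pop])

    for _ in range(steps):
        i, j = rng.choice(pop_size, size=2, replace=False)         # random tournament pair
        win, lose = (i, j) if fitness[i] >= fitness[j] else (j, i)

        # "Infection": the loser copies a random half of the winner's genes ...
        mask = rng.random(image.shape) < 0.5
        pop[lose][mask] = pop[win][mask]
        # ... and then mutates slightly, staying inside the L_inf epsilon ball.
        pop[lose] = np.clip(pop[lose] + rng.normal(0, 0.05 * eps, image.shape), -eps, eps)
        fitness[lose] = score_fn(np.clip(image + pop[lose], 0, 255))

    best = pop[np.argmax(fitness)]
    return np.clip(image + best, 0, 255)
```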
- Query Efficient Cross-Dataset Transferable Black-Box Attack on Action Recognition [99.29804193431823]
Black-box adversarial attacks present a realistic threat to action recognition systems.
We propose a new attack on action recognition that addresses these shortcomings by generating perturbations.
Our method achieves 8% and 12% higher deception rates than state-of-the-art query-based and transfer-based attacks, respectively.
arXiv Detail & Related papers (2022-11-23T17:47:49Z)
- Based-CE white-box adversarial attack will not work using super-fitting [10.34121642283309]
Deep Neural Networks (DNNs) are widely used in various fields due to their powerful performance.
Recent studies have shown that deep learning models are vulnerable to adversarial attacks.
This paper proposes a new defense method that exploits the model's super-fitting status.
arXiv Detail & Related papers (2022-05-04T09:23:00Z)
- Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose adaptive feature alignment (AFA) to generate features of arbitrary attacking strengths.
Our method is trained to automatically align features of arbitrary attacking strengths.
arXiv Detail & Related papers (2021-05-31T17:01:05Z)
- AdvHaze: Adversarial Haze Attack [19.744435173861785]
We introduce a novel adversarial attack method based on haze, which is a common phenomenon in real-world scenery.
Our method can synthesize potentially adversarial haze into an image based on the atmospheric scattering model with high realism.
We demonstrate that the proposed method achieves a high success rate, and holds better transferability across different classification models than the baselines.
arXiv Detail & Related papers (2021-04-28T09:52:25Z)
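The atmospheric scattering model referenced in the AdvHaze entry above is the standard haze-formation equation I(x) = J(x) t(x) + A (1 - t(x)) with transmission t(x) = exp(-beta d(x)). A minimal rendering sketch follows; in AdvHaze the haze parameters are optimised adversarially, whereas here the depth map, scattering coefficient, and atmospheric light are fixed placeholders.

```python
import numpy as np

def synthesize_haze(clean, depth, beta=1.0, atmospheric_light=0.9):
    """Render haze with the atmospheric scattering model:
        I(x) = J(x) * t(x) + A * (1 - t(x)),   t(x) = exp(-beta * d(x)).
    `clean` is an HxWx3 image in [0, 1]; `depth` is an HxW depth map.
    An adversarial variant would tune beta, A, and/or the depth map to flip
    the classifier's prediction; here they are fixed placeholders."""
    t = np.exp(-beta * depth)[..., None]            # transmission map, HxWx1
    hazy = clean * t + atmospheric_light * (1.0 - t)
    return np.clip(hazy, 0.0, 1.0)
```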
- Delving into Data: Effectively Substitute Training for Black-box Attack [84.85798059317963]
We propose a novel substitute-training perspective that focuses on designing the distribution of data used in the knowledge-stealing process.
The combination of these two modules can further boost the consistency of the substitute model and the target model, which greatly improves the effectiveness of the adversarial attack.
arXiv Detail & Related papers (2021-04-26T07:26:29Z)
- Adversarial example generation with AdaBelief Optimizer and Crop Invariance [8.404340557720436]
Adversarial attacks can be an important method to evaluate and select robust models in safety-critical applications.
We propose AdaBelief Iterative Fast Gradient Method (ABI-FGM) and Crop-Invariant attack Method (CIM) to improve the transferability of adversarial examples.
Our method has higher success rates than state-of-the-art gradient-based attack methods.
arXiv Detail & Related papers (2021-02-07T06:00:36Z)
- A Data Augmentation-based Defense Method Against Adversarial Attacks in Neural Networks [7.943024117353317]
We develop a lightweight defense method that can efficiently invalidate full white-box adversarial attacks while remaining compatible with real-life constraints.
Our model can withstand an advanced adaptive attack, namely BPDA with 50 rounds, and still helps the target model maintain an accuracy of around 80%, while constraining the attack success rate to almost zero.
arXiv Detail & Related papers (2020-07-30T08:06:53Z)
- Perturbing Across the Feature Hierarchy to Improve Standard and Strict Blackbox Attack Transferability [100.91186458516941]
We consider the blackbox transfer-based targeted adversarial attack threat model in the realm of deep neural network (DNN) image classifiers.
We design a flexible attack framework that allows for multi-layer perturbations and demonstrates state-of-the-art targeted transfer performance.
We analyze why the proposed methods outperform existing attack strategies and show an extension of the method in the case when limited queries to the blackbox model are allowed.
arXiv Detail & Related papers (2020-04-29T16:00:13Z)