Beyond ImageNet Attack: Towards Crafting Adversarial Examples for
Black-box Domains
- URL: http://arxiv.org/abs/2201.11528v2
- Date: Sat, 29 Jan 2022 02:41:52 GMT
- Title: Beyond ImageNet Attack: Towards Crafting Adversarial Examples for
Black-box Domains
- Authors: Qilong Zhang, Xiaodan Li, Yuefeng Chen, Jingkuan Song, Lianli Gao,
Yuan He and Hui Xue
- Abstract summary: Adversarial examples have posed a severe threat to deep neural networks due to their transferable nature.
We propose a Beyond ImageNet Attack (BIA) to investigate the transferability towards black-box domains.
Our methods outperform state-of-the-art approaches by up to 7.71% (towards coarse-grained domains) and 25.91% (towards fine-grained domains) on average.
- Score: 80.11169390071869
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial examples have posed a severe threat to deep neural
networks due to their transferable nature. Numerous works have devoted great
effort to enhancing cross-model transferability, mostly assuming the substitute
model is trained in the same domain as the target model. However, in reality,
the relevant information of the deployed model is unlikely to leak. Hence, it
is vital to build a more practical black-box threat model to overcome this
limitation and evaluate the vulnerability of deployed models. In this paper,
with only the knowledge of the ImageNet domain, we propose a Beyond ImageNet
Attack (BIA) to investigate the transferability towards black-box domains
(unknown classification tasks). Specifically, we leverage a generative model to
learn the adversarial function for disrupting low-level features of input
images. Based on this framework, we further propose two variants to narrow the
gap between the source and target domains from the data and model perspectives,
respectively. Extensive experiments on coarse-grained and fine-grained domains
demonstrate the effectiveness of our proposed methods. Notably, our methods
outperform state-of-the-art approaches by up to 7.71\% (towards coarse-grained
domains) and 25.91\% (towards fine-grained domains) on average. Our code is
available at \url{https://github.com/qilong-zhang/Beyond-ImageNet-Attack}.
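The abstract's core mechanism, a generator that maps a clean image to a bounded adversarial example, can be illustrated with a minimal NumPy sketch. This is a hypothetical illustration of the perturbation-budget mechanics only: `toy_generator` is a stand-in for the trained conv-net generator in BIA, and `epsilon` is an assumed L-infinity budget, not a value from the paper.

```python
import numpy as np

def toy_generator(image: np.ndarray) -> np.ndarray:
    """Stand-in for a trained perturbation generator.

    The real BIA generator is a conv net trained to disrupt low-level
    features; here we just emit a deterministic high-frequency pattern
    of the same shape as the input image.
    """
    h, w = image.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    pattern = np.sin(0.5 * xx) * np.cos(0.5 * yy)        # (h, w) pattern
    return np.broadcast_to(pattern[..., None], image.shape).astype(np.float32)

def craft_adversarial(image: np.ndarray, epsilon: float = 16 / 255) -> np.ndarray:
    """Bound the generator output to an L-inf ball of radius epsilon
    around the clean image, then clip to the valid pixel range."""
    raw = toy_generator(image)
    delta = epsilon * np.tanh(raw)            # smooth bound: |delta| <= epsilon
    return np.clip(image + delta, 0.0, 1.0)   # keep pixels in [0, 1]

clean = np.random.default_rng(0).random((32, 32, 3)).astype(np.float32)
adv = craft_adversarial(clean)
print(float(np.abs(adv - clean).max()) <= 16 / 255 + 1e-6)  # budget respected
```

In the actual method the generator would be trained on ImageNet to maximize feature disruption in a surrogate network's shallow layers, so that the perturbation transfers to unseen black-box domains; the tanh-plus-clip projection shown here is just one common way to enforce the pixel budget.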
Related papers
- Domain Bridge: Generative model-based domain forensic for black-box
models [20.84645356097581]
We introduce an enhanced approach to determine not just the general data domain but also its specific attributes.
Our approach uses an image embedding model as the encoder and a generative model as the decoder.
A key strength of our approach lies in leveraging the expansive dataset, LAION-5B, on which the generative model Stable Diffusion is trained.
arXiv Detail & Related papers (2024-02-07T07:57:43Z)
- Towards Understanding and Boosting Adversarial Transferability from a
  Distribution Perspective [80.02256726279451]
Adversarial attacks against deep neural networks (DNNs) have received broad attention in recent years.
We propose a novel method that crafts adversarial examples by manipulating the distribution of the image.
Our method can significantly improve the transferability of the crafted attacks and achieves state-of-the-art performance in both untargeted and targeted scenarios.
arXiv Detail & Related papers (2022-10-09T09:58:51Z)
- RAIN: RegulArization on Input and Network for Black-Box Domain
  Adaptation [80.03883315743715]
Source-free domain adaptation transfers the source-trained model to the target domain without exposing the source data.
This paradigm is still at risk of data leakage due to adversarial attacks on the source model.
We propose a novel approach named RAIN (RegulArization on Input and Network) for Black-Box domain adaptation from both input-level and network-level regularization.
arXiv Detail & Related papers (2022-08-22T18:18:47Z)
- Frequency Domain Model Augmentation for Adversarial Attack [91.36850162147678]
For black-box attacks, the gap between the substitute model and the victim model is usually large.
We propose a novel spectrum simulation attack to craft more transferable adversarial examples against both normally trained and defense models.
arXiv Detail & Related papers (2022-07-12T08:26:21Z)
- PixMatch: Unsupervised Domain Adaptation via Pixelwise Consistency
  Training [4.336877104987131]
Unsupervised domain adaptation is a promising technique for semantic segmentation.
We present a novel framework for unsupervised domain adaptation based on the notion of target-domain consistency training.
Our approach is simpler, easier to implement, and more memory-efficient during training.
arXiv Detail & Related papers (2021-05-17T19:36:28Z)
- Few-shot Image Generation via Cross-domain Correspondence [98.2263458153041]
Training generative models, such as GANs, on a target domain containing limited examples can easily result in overfitting.
In this work, we seek to utilize a large source domain for pretraining and transfer the diversity information from source to target.
To further reduce overfitting, we present an anchor-based strategy to encourage different levels of realism over different regions in the latent space.
arXiv Detail & Related papers (2021-04-13T17:59:35Z)
- RobustNet: Improving Domain Generalization in Urban-Scene Segmentation
  via Instance Selective Whitening [40.98892593362837]
Enhancing generalization capability of deep neural networks to unseen domains is crucial for safety-critical applications in the real world such as autonomous driving.
This paper proposes a novel instance selective whitening loss to improve the robustness of the segmentation networks for unseen domains.
arXiv Detail & Related papers (2021-03-29T13:19:37Z)
- Perturbing Across the Feature Hierarchy to Improve Standard and Strict
  Blackbox Attack Transferability [100.91186458516941]
We consider the blackbox transfer-based targeted adversarial attack threat model in the realm of deep neural network (DNN) image classifiers.
We design a flexible attack framework that allows for multi-layer perturbations and demonstrates state-of-the-art targeted transfer performance.
We analyze why the proposed methods outperform existing attack strategies and show an extension of the method in the case when limited queries to the blackbox model are allowed.
arXiv Detail & Related papers (2020-04-29T16:00:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.