Object-Attentional Untargeted Adversarial Attack
- URL: http://arxiv.org/abs/2210.08472v1
- Date: Sun, 16 Oct 2022 07:45:13 GMT
- Title: Object-Attentional Untargeted Adversarial Attack
- Authors: Chao Zhou, Yuan-Gen Wang, Guopu Zhu
- Abstract summary: We propose an object-attentional adversarial attack method for untargeted attacks.
Specifically, we first generate an object region by intersecting the object detection region from YOLOv4 with the salient object detection region from HVPNet.
Then, we perform an adversarial attack only on the detected object region by leveraging Simple Black-box Adversarial Attack (SimBA).
- Score: 11.800889173823945
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks are facing severe threats from adversarial attacks. Most
existing black-box attacks fool the target model by generating either global
perturbations or local patches. However, both global perturbations and local
patches easily cause annoying visual artifacts in the adversarial example. Compared
with some smooth regions of an image, the object region generally has more
edges and a more complex texture. Thus small perturbations on it will be more
imperceptible. On the other hand, the object region is undoubtedly the
decisive part of an image to classification tasks. Motivated by these two
facts, we propose an object-attentional adversarial attack method for
untargeted attacks. Specifically, we first generate an object region by
intersecting the object detection region from YOLOv4 with the salient object
detection (SOD) region from HVPNet. Furthermore, we design an activation
strategy to avoid the reaction caused by the incomplete SOD. Then, we perform
an adversarial attack only on the detected object region by leveraging Simple
Black-box Adversarial Attack (SimBA). To verify the proposed method, we create
a unique dataset by extracting all the images containing the object defined by
COCO from ImageNet-1K, named COCO-Reduced-ImageNet in this paper. Experimental
results on ImageNet-1K and COCO-Reduced-ImageNet show that under various system
settings, our method yields adversarial examples with better perceptual
quality while saving up to 24.16% of the query budget compared to
state-of-the-art approaches including SimBA.
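The attack pipeline described above has two stages: build an object-region mask by intersecting a YOLOv4 detection box with an HVPNet saliency map, then run SimBA-style single-pixel queries restricted to that mask. The following is a minimal illustrative sketch of those two stages, not the authors' implementation: `object_region_mask`, `simba_masked`, and the black-box probability function `prob_fn` are hypothetical names, the detector box and saliency map are assumed to be supplied by the reader, and the paper's activation strategy for incomplete SOD is omitted.

```python
# Minimal sketch (illustrative, not the paper's code): mask = detection box intersected
# with thresholded saliency, then SimBA-style pixel perturbations limited to that mask.
import numpy as np

def object_region_mask(box, saliency, thresh=0.5):
    """Intersect a detector box (x1, y1, x2, y2) with a thresholded saliency map."""
    h, w = saliency.shape
    box_mask = np.zeros((h, w), dtype=bool)
    x1, y1, x2, y2 = box
    box_mask[y1:y2, x1:x2] = True
    return box_mask & (saliency >= thresh)      # keep pixels inside both regions

def simba_masked(prob_fn, image, label, mask, eps=0.2, max_queries=2000):
    """Untargeted SimBA-style attack in pixel space, limited to masked pixels.

    prob_fn: black-box function mapping an image to a vector of class probabilities.
    """
    adv = image.copy()
    coords = np.argwhere(mask)                  # candidate (row, col) positions
    rng = np.random.default_rng(0)
    rng.shuffle(coords)                         # random pixel order, as in SimBA
    best = prob_fn(adv)[label]
    queries = 1
    for y, x in coords:
        if queries >= max_queries:
            break
        for sign in (eps, -eps):                # try +eps, then -eps on this pixel
            cand = adv.copy()
            cand[y, x] = np.clip(cand[y, x] + sign, 0.0, 1.0)
            p = prob_fn(cand)[label]
            queries += 1
            if p < best:                        # true-class probability dropped: keep
                adv, best = cand, p
                break
    return adv, queries
```

Restricting the SimBA search space to the object region reflects both claimed benefits: fewer candidate coordinates tend to lower the query count, and the perturbation stays inside textured object areas where it is harder to notice.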
Related papers
- ZoomNeXt: A Unified Collaborative Pyramid Network for Camouflaged Object Detection [70.11264880907652]
Recent camouflaged object detection (COD) attempts to segment objects visually blended into their surroundings, which is extremely complex and difficult in real-world scenarios.
We propose an effective unified collaborative pyramid network that mimics human behavior when observing vague images and camouflaged objects, zooming in and out.
Our framework consistently outperforms existing state-of-the-art methods in image and video COD benchmarks.
arXiv Detail & Related papers (2023-10-31T06:11:23Z) - Rethinking the Localization in Weakly Supervised Object Localization [51.29084037301646]
Weakly supervised object localization (WSOL) is one of the most popular and challenging tasks in computer vision.
Recently, dividing WSOL into two parts (class-agnostic object localization and object classification) has become the state-of-the-art pipeline for this task.
We propose to replace SCR with a binary-class detector (BCD) for localizing multiple objects, where the detector is trained by discriminating the foreground and background.
arXiv Detail & Related papers (2023-08-11T14:38:51Z) - To Make Yourself Invisible with Adversarial Semantic Contours [47.755808439588094]
Adversarial Semantic Contour (ASC) is an estimate of a Bayesian formulation of sparse attack with a deceived prior of object contour.
We show that ASC can corrupt the prediction of 9 modern detectors with different architectures.
We conclude with a caution that the contour is a common weakness of object detectors with various architectures.
arXiv Detail & Related papers (2023-03-01T07:22:39Z) - Object-fabrication Targeted Attack for Object Detection [54.10697546734503]
Adversarial attacks for object detection include targeted and untargeted attacks.
A new object-fabrication targeted attack mode can mislead detectors to fabricate extra false objects with specific target labels.
arXiv Detail & Related papers (2022-12-13T08:42:39Z) - Leveraging Local Patch Differences in Multi-Object Scenes for Generative Adversarial Attacks [48.66027897216473]
We tackle a more practical problem of generating adversarial perturbations using multi-object (i.e., multiple dominant objects) images.
We propose a novel generative attack (called Local Patch Difference or LPD-Attack) whose novel contrastive loss function exploits these local differences in the feature space of multi-object scenes.
Our approach outperforms baseline generative attacks with highly transferable perturbations when evaluated under different white-box and black-box settings.
arXiv Detail & Related papers (2022-09-20T17:36:32Z) - Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm [93.80082636284922]
Sparse adversarial attacks can fool deep neural networks (DNNs) by perturbing only a few pixels.
Recent efforts combine this sparsity constraint with an additional l_infty bound on the perturbation magnitudes.
We propose a homotopy algorithm to jointly tackle the sparsity constraint and the perturbation bound in one unified framework.
arXiv Detail & Related papers (2021-06-10T20:11:36Z) - Structure-Preserving Progressive Low-rank Image Completion for Defending Adversarial Attacks [20.700098449823024]
Deep neural networks recognize objects by analyzing local image details and summarizing their information along the inference layers to derive the final decision.
Small sophisticated noise in the input images can accumulate along the network inference path and produce wrong decisions at the network output.
Human eyes recognize objects based on their global structure and semantic cues, instead of local image textures.
arXiv Detail & Related papers (2021-03-04T01:24:15Z) - PICA: A Pixel Correlation-based Attentional Black-box Adversarial Attack [37.15301296824337]
We propose a pixel correlation-based attentional black-box adversarial attack, termed PICA.
PICA is more efficient at generating high-resolution adversarial examples than existing black-box attacks.
arXiv Detail & Related papers (2021-01-19T09:53:52Z) - Watch out! Motion is Blurring the Vision of Your Deep Neural Networks [34.51270823371404]
State-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples with additive random-like noise perturbations.
We propose a novel adversarial attack method, termed ABBA, that can generate visually natural motion-blurred adversarial examples.
A comprehensive evaluation on the NeurIPS'17 adversarial competition dataset demonstrates the effectiveness of ABBA.
arXiv Detail & Related papers (2020-02-10T02:33:08Z)