To Make Yourself Invisible with Adversarial Semantic Contours
- URL: http://arxiv.org/abs/2303.00284v1
- Date: Wed, 1 Mar 2023 07:22:39 GMT
- Title: To Make Yourself Invisible with Adversarial Semantic Contours
- Authors: Yichi Zhang, Zijian Zhu, Hang Su, Jun Zhu, Shibao Zheng, Yuan He, Hui Xue
- Abstract summary: Adversarial Semantic Contour (ASC) is a MAP estimate of a Bayesian formulation of the sparse attack with a deceived prior of the object contour.
We show that ASC can corrupt the prediction of 9 modern detectors with different architectures.
We conclude with cautions about contour being the common weakness of object detectors with various architectures.
- Score: 47.755808439588094
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modern object detectors are vulnerable to adversarial examples, which may bring risks to real-world applications. The sparse attack is an important task which, compared with the popular adversarial perturbation over the whole image, needs to select the potential pixels, a choice generally regularized by an $\ell_0$-norm constraint, and simultaneously optimize the corresponding texture. The non-differentiability of the $\ell_0$ norm makes this challenging, and many works on attacking object detection have instead adopted manually designed patterns, which are meaningless and independent of the objects and therefore lead to relatively poor attack performance.
In this paper, we propose Adversarial Semantic Contour (ASC), a MAP estimate of a Bayesian formulation of the sparse attack with a deceived prior of the object contour. The object contour prior effectively reduces the search space of pixel selection and improves the attack by introducing more semantic bias. Extensive experiments demonstrate that ASC can corrupt the predictions of 9 modern detectors with different architectures (e.g., one-stage, two-stage and Transformer) by modifying fewer than 5% of the pixels of the object area in COCO in the white-box scenario, and around 10% of those pixels in the black-box scenario. We further extend the attack to datasets for autonomous driving systems to verify its effectiveness. We conclude with a caution: the contour is a common weakness of object detectors across architectures, and care is needed when deploying them in safety-sensitive scenarios.
Related papers
- Imperceptible Face Forgery Attack via Adversarial Semantic Mask [59.23247545399068]
We propose an Adversarial Semantic Mask Attack framework (ASMA) which can generate adversarial examples with good transferability and invisibility.
Specifically, we propose a novel adversarial semantic mask generative model, which can constrain generated perturbations in local semantic regions for good stealthiness.
arXiv Detail & Related papers (2024-06-16T10:38:11Z) - ZoomNeXt: A Unified Collaborative Pyramid Network for Camouflaged Object Detection [70.11264880907652]
Recent camouflaged object detection (COD) attempts to segment objects visually blended into their surroundings, which is extremely complex and difficult in real-world scenarios.
We propose an effective unified collaborative pyramid network that mimics human behavior when observing vague images and camouflaged objects, i.e., zooming in and out.
Our framework consistently outperforms existing state-of-the-art methods in image and video COD benchmarks.
arXiv Detail & Related papers (2023-10-31T06:11:23Z) - AdvART: Adversarial Art for Camouflaged Object Detection Attacks [7.7889972735711925]
We propose a novel approach to generate naturalistic and inconspicuous adversarial patches.
Our technique is based on directly manipulating the pixel values in the patch, which gives higher flexibility and a larger search space.
Our attack achieves superior success rates of up to 91.19% in the digital world and 72% when deployed in smart cameras at the edge.
arXiv Detail & Related papers (2023-03-03T06:28:05Z) - GLOW: Global Layout Aware Attacks for Object Detection [27.46902978168904]
Adversarial attacks aim to perturb images such that a predictor outputs incorrect results.
We present the first approach that copes with various attack requests by generating global layout-aware adversarial attacks.
In experiments, we design multiple types of attack requests and validate our ideas on the MS COCO validation set.
arXiv Detail & Related papers (2023-02-27T22:01:34Z) - Object-Attentional Untargeted Adversarial Attack [11.800889173823945]
We propose an object-attentional adversarial attack method for untargeted attack.
Specifically, we first generate an object region by intersecting the object detection region from YOLOv4 with the salient object detection region from HVPNet.
Then, we perform an adversarial attack only on the detected object region by leveraging the Simple Black-box Adversarial Attack (SimBA); a minimal sketch of this pipeline appears after the list below.
arXiv Detail & Related papers (2022-10-16T07:45:13Z) - Zero-Query Transfer Attacks on Context-Aware Object Detectors [95.18656036716972]
Adversarial attacks perturb images such that a deep neural network produces incorrect classification results.
A promising approach to defend against adversarial attacks on natural multi-object scenes is to impose a context-consistency check.
We present the first approach for generating context-consistent adversarial attacks that can evade the context-consistency check.
arXiv Detail & Related papers (2022-03-29T04:33:06Z) - Adversarial Semantic Contour for Object Detection [36.641649442633984]
We propose a novel method of Adversarial Semantic Contour (ASC) guided by the object contour as a prior.
Our proposed ASC can successfully mislead mainstream object detectors, including SSD512, YOLOv4, Mask R-CNN, Faster R-CNN, etc.
arXiv Detail & Related papers (2021-09-30T11:03:06Z) - Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative-based adversarial attacks can get rid of this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely-used models collapse, but also achieve good visual quality.
arXiv Detail & Related papers (2021-07-20T01:55:21Z) - Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm [93.80082636284922]
Sparse adversarial attacks can fool deep neural networks (DNNs) by perturbing only a few pixels.
Recent efforts combine the $\ell_0$ constraint with an $\ell_\infty$ bound on the perturbation magnitudes.
We propose a homotopy algorithm to jointly tackle the sparsity constraint and the perturbation bound in one unified framework.
arXiv Detail & Related papers (2021-06-10T20:11:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.