AdvART: Adversarial Art for Camouflaged Object Detection Attacks
- URL: http://arxiv.org/abs/2303.01734v2
- Date: Fri, 9 Feb 2024 08:57:35 GMT
- Title: AdvART: Adversarial Art for Camouflaged Object Detection Attacks
- Authors: Amira Guesmi, Ioan Marius Bilasco, Muhammad Shafique, and Ihsen
Alouani
- Abstract summary: We propose a novel approach to generate naturalistic and inconspicuous adversarial patches.
Our technique directly manipulates the pixel values in the patch, which gives higher flexibility and a larger optimization space.
Our attack achieves success rates of up to 91.19% in the digital world and 72% when deployed on smart cameras at the edge.
- Score: 7.7889972735711925
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Physical adversarial attacks pose a significant practical threat, as they
deceive deep learning systems operating in the real world by producing
prominent, maliciously designed physical perturbations. Evaluating
naturalness is crucial in such attacks, as humans can readily detect and
eliminate unnatural manipulations. To overcome this limitation,
recent work has proposed leveraging generative adversarial networks (GANs) to
generate naturalistic patches, which may not catch human attention. However,
these approaches suffer from a limited latent space which leads to an
inevitable trade-off between naturalness and attack efficiency. In this paper,
we propose a novel approach to generate naturalistic and inconspicuous
adversarial patches. Specifically, we redefine the optimization problem by
introducing an additional loss term to the cost function. This term works as a
semantic constraint to ensure that the generated camouflage pattern holds
semantic meaning rather than arbitrary patterns. The additional term leverages
similarity metrics to construct a similarity loss that we optimize within the
global objective function. Our technique directly manipulates the pixel
values in the patch, which gives higher flexibility and a larger optimization
space than GAN-based techniques, which optimize the patch only indirectly by
modifying the latent vector. Our attack achieves success rates of up to
91.19% in the digital world and 72% when deployed on smart cameras at the
edge, outperforming the GAN-based technique.
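
A minimal sketch of the optimization loop the abstract describes: the patch pixels are optimized directly (no GAN latent vector), with a similarity loss against a benign artwork serving as the semantic constraint. The detector interface, the `apply_patch` helper, the MSE similarity stand-in, and the weight `lam` are illustrative assumptions, not the paper's released code.

```python
# Sketch of direct pixel-space patch optimization with a similarity loss.
import torch
import torch.nn.functional as F

def apply_patch(scene: torch.Tensor, patch: torch.Tensor, y: int = 0, x: int = 0) -> torch.Tensor:
    """Paste the patch onto the scene at (y, x); real attacks add physical transforms."""
    out = scene.clone()
    _, ph, pw = patch.shape
    out[..., y:y + ph, x:x + pw] = patch
    return out

def advart_step(patch, benign_art, scene, detector, optimizer, lam=0.5):
    """One gradient step directly on the patch pixels (not on a GAN latent vector)."""
    optimizer.zero_grad()
    scores = detector(apply_patch(scene, patch))   # detection scores to suppress
    attack_loss = scores.mean()                    # drive detections down
    sim_loss = F.mse_loss(patch, benign_art)       # semantic constraint: stay art-like
    loss = attack_loss + lam * sim_loss
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        patch.clamp_(0.0, 1.0)                     # keep pixels a valid image
    return float(loss)

# Usage: patch = benign_art.clone().requires_grad_(True)
#        opt = torch.optim.Adam([patch], lr=1e-2)
#        for _ in range(1000): advart_step(patch, benign_art, scene, detector, opt)
```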
Related papers
- Imperceptible Face Forgery Attack via Adversarial Semantic Mask [59.23247545399068]
We propose an Adversarial Semantic Mask Attack framework (ASMA) which can generate adversarial examples with good transferability and invisibility.
Specifically, we propose a novel adversarial semantic mask generative model that constrains the generated perturbations to local semantic regions for stealthiness (see the sketch after this list).
arXiv Detail & Related papers (2024-06-16T10:38:11Z) - Environmental Matching Attack Against Unmanned Aerial Vehicles Object Detection [37.77615360932841]
Object detection techniques for Unmanned Aerial Vehicles rely on Deep Neural Networks (DNNs).
However, adversarial patches generated by existing algorithms in the UAV domain pay little attention to naturalness.
We propose a new method named Environmental Matching Attack (EMA) to optimize the adversarial patch under color constraints.
arXiv Detail & Related papers (2024-05-13T09:56:57Z) - LEAT: Towards Robust Deepfake Disruption in Real-World Scenarios via
Latent Ensemble Attack [11.764601181046496]
Deepfakes, malicious visual content created by generative models, pose an increasingly harmful threat to society.
To proactively mitigate deepfake damages, recent studies have employed adversarial perturbation to disrupt deepfake model outputs.
We propose a simple yet effective disruption method called Latent Ensemble ATtack (LEAT), which attacks the independent latent encoding process.
arXiv Detail & Related papers (2023-07-04T07:00:37Z) - DAP: A Dynamic Adversarial Patch for Evading Person Detectors [8.187375378049353]
This paper introduces a novel approach that produces a Dynamic Adversarial Patch (DAP).
DAP maintains a naturalistic appearance while optimizing attack efficiency and robustness to real-world transformations.
Experimental results demonstrate that the proposed approach outperforms state-of-the-art attacks.
arXiv Detail & Related papers (2023-05-19T11:52:42Z) - To Make Yourself Invisible with Adversarial Semantic Contours [47.755808439588094]
Adversarial Semantic Contour (ASC) is an estimate of a Bayesian formulation of the sparse attack, with a deceived prior on the object contour.
We show that ASC can corrupt the prediction of 9 modern detectors with different architectures.
We conclude with a caution that contours are a common weakness of object detectors across architectures.
arXiv Detail & Related papers (2023-03-01T07:22:39Z) - Improving Adversarial Robustness to Sensitivity and Invariance Attacks
with Deep Metric Learning [80.21709045433096]
A standard method in adversarial robustness assumes a framework that defends against samples crafted by minimally perturbing a clean sample.
We use metric learning to frame adversarial regularization as an optimal transport problem.
Our preliminary results indicate that regularizing over invariant perturbations in our framework improves defense against both invariance and sensitivity attacks.
arXiv Detail & Related papers (2022-11-04T13:54:02Z) - Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm [93.80082636284922]
Sparse adversarial attacks can fool deep neural networks (DNNs) by perturbing only a few pixels.
Recent efforts combine this sparsity with an additional l_infty bound on the perturbation magnitudes.
We propose a homotopy algorithm to jointly tackle the sparsity constraint and the perturbation bound in one unified framework (see the sketch after this list).
arXiv Detail & Related papers (2021-06-10T20:11:36Z) - Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose adaptive feature alignment (AFA) to generate features under arbitrary attacking strengths.
Our method is trained to automatically align features across attacking strengths.
arXiv Detail & Related papers (2021-05-31T17:01:05Z) - Patch-wise++ Perturbation for Adversarial Targeted Attacks [132.58673733817838]
We propose a patch-wise iterative method (PIM) aimed at crafting adversarial examples with high transferability.
Specifically, we introduce an amplification factor to the step size in each iteration, and any of a pixel's overall gradient that overflows the ε-constraint is assigned to its surrounding regions (see the sketch after this list).
Compared with the current state-of-the-art attack methods, we significantly improve the success rate by 35.9% for defense models and 32.7% for normally trained models.
arXiv Detail & Related papers (2020-12-31T08:40:42Z)