IAP: Invisible Adversarial Patch Attack through Perceptibility-Aware Localization and Perturbation Optimization
- URL: http://arxiv.org/abs/2507.06856v1
- Date: Wed, 09 Jul 2025 13:58:40 GMT
- Title: IAP: Invisible Adversarial Patch Attack through Perceptibility-Aware Localization and Perturbation Optimization
- Authors: Subrat Kishore Dutta, Xiao Zhang
- Abstract summary: Adversarial patches can drastically change the prediction of computer vision models. We introduce IAP, a novel attack framework that generates highly invisible adversarial patches. IAP consistently achieves competitive attack success rates in targeted settings.
- Score: 3.096869664709865
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite modifying only a small localized input region, adversarial patches can drastically change the prediction of computer vision models. However, prior methods either cannot perform satisfactorily under targeted attack scenarios or fail to produce contextually coherent adversarial patches, causing them to be easily noticeable by human examiners and insufficiently stealthy against automatic patch defenses. In this paper, we introduce IAP, a novel attack framework that generates highly invisible adversarial patches based on perceptibility-aware localization and perturbation optimization schemes. Specifically, IAP first searches for a proper location to place the patch by leveraging classwise localization and sensitivity maps, balancing the susceptibility of the patch location to both the victim model's prediction and the human visual system. It then employs a perceptibility-regularized adversarial loss and a gradient update rule that prioritizes color constancy for optimizing invisible perturbations. Comprehensive experiments across various image benchmarks and model architectures demonstrate that IAP consistently achieves competitive attack success rates in targeted settings with significantly improved patch invisibility compared to existing baselines. In addition to being highly imperceptible to humans, IAP is shown to be stealthy enough to render several state-of-the-art patch defenses ineffective.
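The perceptibility-regularized adversarial loss described above (a targeted adversarial objective combined with a penalty on how visible the patch is) can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function name, the choice of a mean-squared-error penalty against the covered background pixels, and the weight `lam` are all assumptions.

```python
import numpy as np

def perceptibility_regularized_loss(logits, target_class, patch, background, lam=0.1):
    """Toy sketch of a perceptibility-regularized targeted attack loss.

    logits       -- model outputs for the patched image (1-D array)
    target_class -- index of the attacker's desired class
    patch        -- patch pixel values, shape (h, w, 3), in [0, 1]
    background   -- the clean pixels the patch covers, same shape
    lam          -- weight of the perceptibility term (illustrative default)
    """
    # Targeted adversarial term: negative log-probability of the target class
    shifted = logits - logits.max()                 # numerical stability
    probs = np.exp(shifted) / np.exp(shifted).sum()
    adv_term = -np.log(probs[target_class] + 1e-12)
    # Perceptibility term: mean squared deviation from the covered pixels,
    # penalizing patches that stand out from their surroundings
    percept_term = np.mean((patch - background) ** 2)
    return adv_term + lam * percept_term
```

A patch that already blends into the region it covers contributes almost nothing to the second term, so minimizing this loss steers the optimizer toward perturbations that fool the model while staying visually close to the underlying image.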
Related papers
- Optimization-Free Patch Attack on Stereo Depth Estimation [51.792201754821804]
We present PatchHunter, the first adversarial patch attack against Stereo Depth Estimation (SDE). PatchHunter formulates patch generation as a reinforcement-learning-driven search over a structured space of visual patterns crafted to disrupt SDE assumptions. We validate PatchHunter across three levels: the KITTI dataset, the CARLA simulator, and real-world vehicle deployment.
arXiv Detail & Related papers (2025-06-21T08:23:02Z) - Stealthy Patch-Wise Backdoor Attack in 3D Point Cloud via Curvature Awareness [52.07366900097567]
Backdoor attacks pose a severe threat to deep neural networks (DNNs). Existing 3D point cloud backdoor attacks primarily rely on sample-wise global modifications. We propose the Stealthy Patch-Wise Backdoor Attack (SPBA), which employs the first patch-wise trigger for 3D point clouds.
arXiv Detail & Related papers (2025-03-12T12:30:59Z) - Environmental Matching Attack Against Unmanned Aerial Vehicles Object Detection [37.77615360932841]
Object detection techniques for Unmanned Aerial Vehicles (UAVs) rely on Deep Neural Networks (DNNs).
Adversarial patches generated by existing algorithms in the UAV domain pay little attention to the naturalness of the patches.
We propose a new method named Environmental Matching Attack (EMA) to address the issue of optimizing the adversarial patch under color constraints.
arXiv Detail & Related papers (2024-05-13T09:56:57Z) - Query-Efficient Decision-based Black-Box Patch Attack [36.043297146652414]
We propose a differential evolutionary algorithm named DevoPatch for query-efficient decision-based patch attacks.
DevoPatch outperforms the state-of-the-art black-box patch attacks in terms of patch area and attack success rate.
We conduct, for the first time, a vulnerability evaluation of ViT on image classification in the decision-based patch attack setting.
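DevoPatch's specific patch encoding and scoring are not given in the summary above; as a minimal illustration of the differential-evolution loop that such query-efficient black-box attacks build on, the sketch below minimizes a generic black-box fitness function. The population size, mutation factor `F`, and crossover rate `CR` are illustrative defaults, not the paper's settings.

```python
import numpy as np

def differential_evolution(fitness, bounds, pop_size=20, F=0.5, CR=0.9,
                           iters=100, rng=None):
    """Generic differential evolution: minimizes `fitness` over box `bounds`.

    In a decision-based patch attack, `fitness` would be a query to the
    victim model (e.g. patch area plus an attack-failure penalty); here it
    is any callable mapping a candidate vector to a score to minimize.
    """
    rng = rng or np.random.default_rng(0)
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    dim = len(bounds)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    scores = np.array([fitness(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            # Mutation: combine three distinct other members
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Crossover: mix mutant and current member dimension-wise
            trial = np.where(rng.random(dim) < CR, mutant, pop[i])
            # Selection: keep the trial only if it scores better
            s = fitness(trial)
            if s < scores[i]:
                pop[i], scores[i] = trial, s
    best = int(scores.argmin())
    return pop[best], scores[best]
```

Because selection only compares fitness values, the loop needs no gradients, which is what makes the approach usable in the decision-based (hard-label) black-box setting.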
arXiv Detail & Related papers (2023-07-02T05:15:43Z) - Distributional Modeling for Location-Aware Adversarial Patches [28.466804363780557]
Distribution-Optimized Adversarial Patch (DOPatch) is a novel method that optimizes a multimodal distribution of adversarial locations.
DOPatch can generate diverse adversarial samples by characterizing the distribution of adversarial locations.
We evaluate DOPatch on various face recognition and image recognition tasks and demonstrate its superiority and efficiency over existing methods.
arXiv Detail & Related papers (2023-06-28T12:01:50Z) - Benchmarking Adversarial Patch Against Aerial Detection [11.591143898488312]
A novel adaptive-patch-based physical attack (AP-PA) framework is proposed.
AP-PA generates adversarial patches that are adaptive in both physical dynamics and varying scales.
We establish one of the first comprehensive, coherent, and rigorous benchmarks to evaluate the attack efficacy of adversarial patches on aerial detection tasks.
arXiv Detail & Related papers (2022-10-30T07:55:59Z) - Segment and Complete: Defending Object Detectors against Adversarial Patch Attacks with Robust Patch Detection [142.24869736769432]
Adversarial patch attacks pose a serious threat to state-of-the-art object detectors.
We propose Segment and Complete defense (SAC), a framework for defending object detectors against patch attacks.
We show SAC can significantly reduce the targeted attack success rate of physical patch attacks.
arXiv Detail & Related papers (2021-12-08T19:18:48Z) - Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks [62.87459235819762]
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular SS models by testing the effects of both digital and real-world adversarial patches.
arXiv Detail & Related papers (2021-08-13T11:49:09Z) - Generating Adversarial yet Inconspicuous Patches with a Single Image [15.217367754000913]
We propose an approach to generate adversarial yet inconspicuous patches with one single image.
In our approach, adversarial patches are produced in a coarse-to-fine way with multiple scales of generators and discriminators.
Our approach shows strong attacking ability in both the white-box and black-box settings.
arXiv Detail & Related papers (2020-09-21T11:56:01Z) - Bias-based Universal Adversarial Patch Attack for Automatic Check-out [59.355948824578434]
Adversarial examples are inputs with imperceptible perturbations that easily mislead deep neural networks (DNNs).
Existing strategies fail to generate adversarial patches with strong generalization ability.
This paper proposes a bias-based framework to generate class-agnostic universal adversarial patches with strong generalization ability.
arXiv Detail & Related papers (2020-05-19T07:38:54Z) - Adversarial Training against Location-Optimized Adversarial Patches [84.96938953835249]
Adversarial patches are clearly visible but adversarially crafted rectangular patches in images.
We first devise a practical approach to obtain adversarial patches while actively optimizing their location within the image.
We apply adversarial training on these location-optimized adversarial patches and demonstrate significantly improved robustness on CIFAR10 and GTSRB.
arXiv Detail & Related papers (2020-05-05T16:17:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.