Seeing Isn't Believing: Context-Aware Adversarial Patch Synthesis via Conditional GAN
- URL: http://arxiv.org/abs/2509.22836v1
- Date: Fri, 26 Sep 2025 18:39:21 GMT
- Title: Seeing Isn't Believing: Context-Aware Adversarial Patch Synthesis via Conditional GAN
- Authors: Roie Kazoom, Alon Goldberg, Hodaya Cohen, Ofer Hadar
- Abstract summary: We introduce a novel framework for fully controllable adversarial patch generation. The attacker can freely choose both the input image x and the target class y_target, thereby dictating the exact misclassification outcome. Our method combines a generative U-Net design with Grad-CAM-guided patch placement, enabling semantic-aware localization.
- Score: 2.02409171087469
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adversarial patch attacks pose a severe threat to deep neural networks, yet most existing approaches rely on unrealistic white-box assumptions, untargeted objectives, or produce visually conspicuous patches that limit real-world applicability. In this work, we introduce a novel framework for fully controllable adversarial patch generation, where the attacker can freely choose both the input image x and the target class y_target, thereby dictating the exact misclassification outcome. Our method combines a generative U-Net design with Grad-CAM-guided patch placement, enabling semantic-aware localization that maximizes attack effectiveness while preserving visual realism. Extensive experiments across convolutional networks (DenseNet-121, ResNet-50) and vision transformers (ViT-B/16, Swin-B/16, among others) demonstrate that our approach achieves state-of-the-art performance across all settings, with attack success rates (ASR) and target-class success (TCS) consistently exceeding 99%. Importantly, we show that our method not only outperforms prior white-box attacks and untargeted baselines, but also surpasses existing non-realistic approaches that produce detectable artifacts. By simultaneously ensuring realism, targeted control, and black-box applicability (the three most challenging dimensions of patch-based attacks), our framework establishes a new benchmark for adversarial robustness research, bridging the gap between theoretical attack strength and practical stealthiness.
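To make the Grad-CAM-guided placement step concrete, below is a minimal PyTorch sketch of that part of the pipeline. Everything here is an illustrative assumption rather than the authors' implementation: the ResNet-50 surrogate, the choice of model.layer4 as the target layer, and the random stand-in patch are hypothetical, and the conditional U-Net generator that would actually produce the patch pixels is omitted. The sketch computes a Grad-CAM map for the chosen target class and pastes a patch at the map's peak, i.e. the region the classifier attends to most.

```python
# Hypothetical sketch of Grad-CAM-guided patch placement (not the paper's code).
# The surrogate model, target layer, and random patch are stand-in assumptions;
# in the paper the patch itself would come from a conditional U-Net generator.
import torch
import torch.nn.functional as F
from torchvision.models import resnet50, ResNet50_Weights

def gradcam_heatmap(model, x, target_class, layer):
    """Grad-CAM map for `target_class`, taken at `layer` (batch size 1)."""
    feats, grads = [], []
    fh = layer.register_forward_hook(lambda m, i, o: feats.append(o))
    bh = layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    logits = model(x)
    model.zero_grad()
    logits[0, target_class].backward()       # gradient of the target-class logit
    fh.remove(); bh.remove()
    w = grads[0].mean(dim=(2, 3), keepdim=True)               # channel weights
    cam = F.relu((w * feats[0]).sum(dim=1, keepdim=True))     # weighted sum
    cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear",
                        align_corners=False)
    return cam[0, 0] / (cam.max() + 1e-8)    # normalized H x W saliency map

def place_patch(x, patch, cam):
    """Paste `patch` centered on the peak of the Grad-CAM map."""
    H, W = x.shape[-2:]
    ph, pw = patch.shape[-2:]
    peak = cam.flatten().argmax().item()
    cy, cx = divmod(peak, W)                 # row/col of the most salient pixel
    top = min(max(cy - ph // 2, 0), H - ph)  # clamp so the patch stays inside
    left = min(max(cx - pw // 2, 0), W - pw)
    x_adv = x.clone()
    x_adv[..., top:top + ph, left:left + pw] = patch
    return x_adv

model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2).eval()
x = torch.rand(1, 3, 224, 224)               # stand-in for a preprocessed image
cam = gradcam_heatmap(model, x, target_class=207, layer=model.layer4)
patch = torch.rand(3, 32, 32)                # stand-in for a generated patch
x_adv = place_patch(x, patch, cam)
```

Placing the patch on the classifier's most attended region is what the abstract calls semantic-aware localization; the intuition is that a random or corner placement would typically need a larger or more conspicuous patch to reach the same targeted success rate.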
Related papers
- IU: Imperceptible Universal Backdoor Attack [10.117347558468166]
We introduce a novel imperceptible universal backdoor attack that simultaneously controls all target classes with minimal poisoning while preserving stealth. Our key idea is to leverage graph convolutional networks (GCNs) to model inter-class relationships and generate class-specific perturbations that are both effective and visually invisible.
arXiv Detail & Related papers (2026-02-28T15:34:59Z) - Real-World Transferable Adversarial Attack on Face-Recognition Systems [45.6754000057234]
We introduce GaP (Gaussian Patch), a novel method to generate a universal, physically transferable adversarial patch under a strict black-box setting. Our work highlights a practical and severe vulnerability, proving that robust, transferable attacks can be crafted with limited knowledge of the target system.
arXiv Detail & Related papers (2025-09-27T09:09:06Z) - DOPA: Stealthy and Generalizable Backdoor Attacks from a Single Client under Challenging Federated Constraints [2.139012072214621]
Federated Learning (FL) is increasingly adopted for privacy-preserving collaborative training, but its decentralized nature makes it susceptible to backdoor attacks. Existing attack methods, however, often rely on idealized assumptions and fail to remain effective under real-world constraints. We propose DOPA, a novel framework that simulates heterogeneous local training dynamics and seeks consensus across divergent optimization trajectories to craft universally effective and stealthy backdoor triggers.
arXiv Detail & Related papers (2025-08-20T08:39:12Z) - Pixel-Optimization-Free Patch Attack on Stereo Depth Estimation [37.97727884936262]
We build a unified framework that extends pixel-optimization attacks to four stereo-matching stages. We propose PatchHunter, the first pixel-optimization-free attack. On KITTI, PatchHunter outperforms pixel-level attacks in both effectiveness and black-box transferability.
arXiv Detail & Related papers (2025-06-21T08:23:02Z) - AdvART: Adversarial Art for Camouflaged Object Detection Attacks [7.7889972735711925]
We propose a novel approach to generate naturalistic and inconspicuous adversarial patches.
Our technique is based on directly manipulating the pixel values in the patch, which gives higher flexibility and a larger search space.
Our attack achieves success rates of up to 91.19% in the digital world and 72% when deployed in smart cameras at the edge.
arXiv Detail & Related papers (2023-03-03T06:28:05Z) - Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon [79.33449311057088]
We study a new type of optical adversarial examples, in which the perturbations are generated by a very common natural phenomenon, shadow.
We extensively evaluate the effectiveness of this new attack on both simulated and real-world environments.
arXiv Detail & Related papers (2022-03-08T02:40:18Z) - TnT Attacks! Universal Naturalistic Adversarial Patches Against Deep Neural Network Systems [15.982408142401072]
Deep neural networks are vulnerable to attacks from adversarial inputs and, more recently, to Trojans that misguide or hijack the model's decision.
A TnT is universal because any input image captured with a TnT in the scene will: i) misguide a network (untargeted attack); or ii) force the network to make a malicious decision.
We show a generalization of the attack to create patches achieving higher attack success rates than existing state-of-the-art methods.
arXiv Detail & Related papers (2021-11-19T01:35:10Z) - Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm [93.80082636284922]
Sparse adversarial attacks can fool deep neural networks (DNNs) by perturbing only a few pixels.
Recent efforts combine this sparsity constraint with an additional l_infty bound on the perturbation magnitudes.
We propose a homotopy algorithm that tackles the sparsity and perturbation constraints in one unified framework.
arXiv Detail & Related papers (2021-06-10T20:11:36Z) - Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose the adaptive feature alignment (AFA) to generate features of arbitrary attacking strengths.
Our method is trained to automatically align features of arbitrary attacking strength.
arXiv Detail & Related papers (2021-05-31T17:01:05Z) - Temporally-Transferable Perturbations: Efficient, One-Shot Adversarial Attacks for Online Visual Object Trackers [81.90113217334424]
We propose a framework to generate a single temporally transferable adversarial perturbation from the object template image only.
This perturbation can then be added to every search image, which comes at virtually no cost, and still, successfully fool the tracker.
arXiv Detail & Related papers (2020-12-30T15:05:53Z) - A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN) based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z) - Bias-based Universal Adversarial Patch Attack for Automatic Check-out [59.355948824578434]
Adversarial examples are inputs with imperceptible perturbations that easily mislead deep neural networks (DNNs).
Existing strategies fail to generate adversarial patches with strong generalization ability.
This paper proposes a bias-based framework to generate class-agnostic universal adversarial patches with strong generalization ability.
arXiv Detail & Related papers (2020-05-19T07:38:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.