Targeted Physical Evasion Attacks in the Near-Infrared Domain
- URL: http://arxiv.org/abs/2509.02042v1
- Date: Tue, 02 Sep 2025 07:37:10 GMT
- Title: Targeted Physical Evasion Attacks in the Near-Infrared Domain
- Authors: Pascal Zimmer, Simon Lachnit, Alexander Jan Zielinski, Ghassan Karame
- Abstract summary: We propose a stealthy and cost-effective attack to generate both targeted and untargeted infrared perturbations. By projecting perturbations from a transparent film onto the target object with an off-the-shelf infrared flashlight, our approach is the first to reliably mount laser-free targeted attacks in the infrared domain.
- Score: 44.41293301858757
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A number of attacks rely on infrared light sources or heat-absorbing material to imperceptibly fool systems into misinterpreting visual input in various image recognition applications. However, almost all existing approaches can only mount untargeted attacks and require heavy optimization due to use-case-specific constraints, such as location and shape. In this paper, we propose a novel, stealthy, and cost-effective attack to generate both targeted and untargeted adversarial infrared perturbations. By projecting perturbations from a transparent film onto the target object with an off-the-shelf infrared flashlight, our approach is the first to reliably mount laser-free targeted attacks in the infrared domain. Extensive experiments on traffic signs in the digital and physical domains show that our approach is robust and yields higher attack success rates in various attack scenarios across bright lighting conditions, distances, and angles compared to prior work. Equally important, our attack is highly cost-effective, requiring less than US$50 and a few tens of seconds for deployment. Finally, we propose a novel segmentation-based detection that thwarts our attack with an F1-score of up to 99%.
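The targeted vs. untargeted distinction in the abstract can be illustrated with a generic gradient-sign sketch on a toy linear classifier. This is purely illustrative and is not the paper's film-projection method; the model `W`, input `x`, classes, and `eps` are all made up for the example.

```python
import numpy as np

# Toy linear classifier: logits = W @ x. A targeted attack pushes the input
# toward an attacker-chosen class t; an untargeted attack merely pushes it
# away from the true class y. Generic FGSM-style sketch, NOT the paper's
# infrared film-projection method; W, x, y, t, eps are illustrative.

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 8))   # 3 classes, 8 input features
x = rng.standard_normal(8)
y, t = int(np.argmax(W @ x)), 2   # predicted class and attacker's target

def targeted_step(W, x, t, eps=0.5):
    # For a linear model, the gradient of the target-class logit w.r.t. x
    # is W[t]; stepping along its sign strictly raises that logit.
    return x + eps * np.sign(W[t])

def untargeted_step(W, x, y, eps=0.5):
    # Step against the true-class logit to suppress it.
    return x - eps * np.sign(W[y])

x_adv = targeted_step(W, x, t)
# The target-class logit strictly increases by eps * ||W[t]||_1.
print(float(W[t] @ x_adv - W[t] @ x))
```

The same greedy logic underlies physical attacks, except that the perturbation is constrained to what the physical medium (here, a projected film pattern) can realize.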
Related papers
- Multi-View Black-Box Physical Attacks on Infrared Pedestrian Detectors Using Adversarial Infrared Grid [0.0]
Infrared object detectors are vital in modern technological applications but are susceptible to adversarial attacks, posing significant security threats.
Previous studies, which used physical perturbations such as light-bulb arrays for white-box attacks or hot and cold patches for black-box attacks, have proven impractical or limited in multi-view support.
We propose the Adversarial Infrared Grid (AdvGrid), which models perturbations in a grid format and uses a genetic algorithm for black-box optimization.
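The grid-format black-box optimization described above can be sketched with a minimal genetic algorithm. The fitness function below is a toy stand-in for a real query to an infrared detector (e.g. the drop in detection confidence), and the grid size, population size, mutation rate, and generation count are all assumptions for illustration, not AdvGrid's actual parameters.

```python
import numpy as np

# Minimal genetic-algorithm sketch for black-box optimization of a binary
# perturbation grid, in the spirit of AdvGrid. fitness() is a toy stand-in
# for one detector query; all sizes and rates are illustrative.

rng = np.random.default_rng(1)
GRID = 4 * 4                       # 4x4 grid cells, flattened
secret = rng.integers(0, 2, GRID)  # toy "optimal" pattern the GA should find

def fitness(mask):
    # Black-box score: here, agreement with the hidden pattern. In a real
    # attack this would be one query to the target detector.
    return int(np.sum(mask == secret))

def evolve(pop=20, gens=60, mut=0.1):
    population = rng.integers(0, 2, (pop, GRID))
    for _ in range(gens):
        scores = np.array([fitness(m) for m in population])
        parents = population[np.argsort(scores)[-pop // 2:]]  # keep top half
        kids = []
        for _ in range(pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, GRID)            # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(GRID) < mut          # bit-flip mutation
            child[flip] ^= 1
            kids.append(child)
        population = np.vstack([parents, kids])    # elitist replacement
    best = max(population, key=fitness)
    return best, fitness(best)

best, score = evolve()
print(score, "/", GRID)
```

Because only fitness values are needed, this loop never touches model gradients, which is what makes the grid attack black-box.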
arXiv Detail & Related papers (2024-07-01T10:38:08Z)
- Adversarial Infrared Curves: An Attack on Infrared Pedestrian Detectors in the Physical World [0.0]
Existing approaches, like white-box infrared attacks using bulb boards and QR suits, lack realism and stealthiness.
We propose Adversarial Infrared Curves (AdvIC) to bridge these gaps.
Our experiments confirm AdvIC's effectiveness, achieving 94.8% and 67.2% attack success rates for digital and physical attacks, respectively.
arXiv Detail & Related papers (2023-12-21T12:21:57Z)
- Unified Adversarial Patch for Cross-modal Attacks in the Physical World [11.24237636482709]
We propose a unified adversarial patch to fool visible and infrared object detectors at the same time via a single patch.
Considering different imaging mechanisms of visible and infrared sensors, our work focuses on modeling the shapes of adversarial patches.
Results show that our unified patch achieves an Attack Success Rate (ASR) of 73.33% and 69.17% against visible and infrared object detectors, respectively.
arXiv Detail & Related papers (2023-07-15T17:45:17Z)
- Adversarial Infrared Blocks: A Multi-view Black-box Attack to Thermal Infrared Detectors in Physical World [4.504479592538401]
We propose a novel physical attack called adversarial infrared blocks (AdvIB)
By optimizing the physical parameters of the adversarial infrared blocks, this method can execute a stealthy black-box attack on thermal imaging systems from various angles.
For stealthiness, the adversarial infrared block is attached to the inside of clothing.
arXiv Detail & Related papers (2023-04-21T02:53:56Z)
- Physically Adversarial Infrared Patches with Learnable Shapes and Locations [1.1172382217477126]
We propose a physically feasible infrared attack method called "adversarial infrared patches"
Considering the imaging mechanism of infrared cameras by capturing objects' thermal radiation, adversarial infrared patches conduct attacks by attaching a patch of thermal insulation materials on the target object to manipulate its thermal distribution.
We verify adversarial infrared patches in different object detection tasks with various object detectors.
arXiv Detail & Related papers (2023-03-24T09:11:36Z)
- Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon [79.33449311057088]
We study a new type of optical adversarial examples, in which the perturbations are generated by a very common natural phenomenon: shadows.
We extensively evaluate the effectiveness of this new attack on both simulated and real-world environments.
arXiv Detail & Related papers (2022-03-08T02:40:18Z)
- Parallel Rectangle Flip Attack: A Query-based Black-box Attack against Object Detection [89.08832589750003]
We propose a Parallel Rectangle Flip Attack (PRFA) via random search to avoid sub-optimal detection near the attacked region.
Our method can effectively and efficiently attack various popular object detectors, including anchor-based and anchor-free, and generate transferable adversarial examples.
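Rectangle-based random search of this kind can be sketched in a few lines: repeatedly flip the perturbation inside a random rectangle and keep the change only when a black-box loss improves. This is a loose illustration in the spirit of PRFA, not its actual algorithm; the loss below is a toy surrogate rather than a real detector query, and all dimensions are made up.

```python
import numpy as np

# Greedy random-search sketch for rectangle-based black-box attacks:
# flip the sign of the perturbation inside a random rectangle and accept
# the candidate only if the (black-box) loss decreases. loss() is a toy
# surrogate, NOT a real object detector; sizes are illustrative.

rng = np.random.default_rng(2)
H = W = 16
target = np.sign(rng.standard_normal((H, W)))  # toy "ideal" perturbation

def loss(delta):
    # Stand-in for one black-box query (lower is better for the attacker).
    return float(-np.sum(delta * target))

delta = np.sign(rng.standard_normal((H, W)))
best = loss(delta)
for _ in range(300):
    h, w = rng.integers(1, 6, size=2)          # random rectangle size
    y, x = rng.integers(0, H - h), rng.integers(0, W - w)
    cand = delta.copy()
    cand[y:y + h, x:x + w] *= -1               # flip the rectangle
    if (c := loss(cand)) < best:               # greedy accept
        delta, best = cand, c
print(best)
```

Since each query either improves the loss or is discarded, the search is monotone, which keeps the query budget of such attacks low.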
arXiv Detail & Related papers (2022-01-22T06:00:17Z)
- Local Black-box Adversarial Attacks: A Query Efficient Approach [64.98246858117476]
Adversarial attacks have threatened the application of deep neural networks in security-sensitive scenarios.
We propose a novel framework to perturb the discriminative areas of clean examples only within limited queries in black-box attacks.
We conduct extensive experiments to show that our framework can significantly improve the query efficiency during black-box perturbing with a high attack success rate.
arXiv Detail & Related papers (2021-01-04T15:32:16Z)
- SPAA: Stealthy Projector-based Adversarial Attacks on Deep Image Classifiers [82.19722134082645]
A stealthy projector-based adversarial attack is proposed in this paper.
We approximate the real project-and-capture operation using a deep neural network named PCNet.
Our experiments show that the proposed SPAA clearly outperforms other methods by achieving higher attack success rates.
arXiv Detail & Related papers (2020-12-10T18:14:03Z) - RayS: A Ray Searching Method for Hard-label Adversarial Attack [99.72117609513589]
We present the Ray Searching attack (RayS), which greatly improves the hard-label attack effectiveness as well as efficiency.
RayS attack can also be used as a sanity check for possible "falsely robust" models.
arXiv Detail & Related papers (2020-06-23T07:01:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.