Adversarial Infrared Geometry: Using Geometry to Perform Adversarial
Attack against Infrared Pedestrian Detectors
- URL: http://arxiv.org/abs/2403.03674v1
- Date: Wed, 6 Mar 2024 12:55:21 GMT
- Authors: Kalibinuer Tiliwalidi
- Abstract summary: We propose a novel infrared physical attack termed Adversarial Infrared Geometry (AdvIG).
In digital attack experiments, line, triangle, and ellipse patterns achieve attack success rates of 93.1%, 86.8%, and 100.0%, respectively.
On average, the line, triangle, and ellipse achieve attack success rates of 61.1%, 61.2%, and 96.2%, respectively.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Currently, infrared imaging technology enjoys widespread usage, with infrared
object detection technology experiencing a surge in prominence. While previous
studies have delved into physical attacks on infrared object detectors, the
implementation of these techniques remains complex. For instance, some
approaches entail the use of bulb boards or infrared QR suits as perturbations
to execute attacks, which entail costly optimization and cumbersome deployment
processes. Other methodologies involve the utilization of irregular aerogel as
physical perturbations for infrared attacks, albeit at the expense of
optimization expenses and perceptibility issues. In this study, we propose a
novel infrared physical attack termed Adversarial Infrared Geometry
(AdvIG), which facilitates efficient black-box query attacks by
modeling diverse geometric shapes (lines, triangles, ellipses) and optimizing
their physical parameters using Particle Swarm Optimization (PSO). Extensive
experiments are conducted to evaluate the effectiveness, stealthiness, and
robustness of AdvIG. In digital attack experiments, line, triangle, and ellipse
patterns achieve attack success rates of 93.1%, 86.8%, and 100.0%,
respectively, with average query counts of 71.7, 113.1, and 2.57, respectively,
thereby confirming the efficiency of AdvIG. Physical attack experiments are
conducted to assess the attack success rate of AdvIG at different distances. On
average, the line, triangle, and ellipse achieve attack success rates of
61.1%, 61.2%, and 96.2%, respectively. Further experiments are conducted to
comprehensively analyze AdvIG, including ablation experiments, transfer attack
experiments, and adversarial defense mechanisms. Given the superior performance
of our method as a simple and efficient black-box adversarial attack in both
digital and physical environments, we advocate for widespread attention to
AdvIG.
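To make the black-box query loop described in the abstract concrete, here is a minimal PSO sketch. It is an illustration under stated assumptions, not the paper's implementation: `detector_confidence` is a hypothetical stand-in objective (a toy quadratic), and the shape is parameterized as a normalized ellipse `(cx, cy, rx, ry)`. A real attack would render the shape onto the infrared image and query the target detector at each evaluation, minimizing the pedestrian confidence score.

```python
import random

# Hypothetical black-box objective: in the real attack this would render an
# ellipse (cx, cy, rx, ry) onto the infrared image and return the detector's
# pedestrian confidence. Here a toy quadratic so the sketch runs end to end.
def detector_confidence(params):
    cx, cy, rx, ry = params
    return ((cx - 0.5) ** 2 + (cy - 0.4) ** 2
            + (rx - 0.1) ** 2 + (ry - 0.2) ** 2)

def pso(objective, dim=4, swarm=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Standard global-best PSO over [0, 1]^dim; every objective call is
    one black-box query to the detector."""
    rng = random.Random(0)
    pos = [[rng.random() for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]                      # per-particle best
    pbest_val = [objective(p) for p in pos]
    g = min(range(swarm), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # swarm-wide best
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # keep the shape parameters inside the valid range
                pos[i][d] = min(1.0, max(0.0, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, val = pso(detector_confidence)
```

The swarm size, iteration budget, and inertia/acceleration coefficients here are generic PSO defaults, not values from the paper; the reported average query counts suggest the authors' budget is far smaller for the ellipse pattern than this sketch uses.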
Related papers
- AdvQDet: Detecting Query-Based Adversarial Attacks with Adversarial Contrastive Prompt Tuning [93.77763753231338]
Adversarial Contrastive Prompt Tuning (ACPT) is proposed to fine-tune the CLIP image encoder to extract similar embeddings for any two intermediate adversarial queries.
We show that ACPT can detect 7 state-of-the-art query-based attacks with >99% detection rate within 5 shots.
We also show that ACPT is robust to 3 types of adaptive attacks.
arXiv Detail & Related papers (2024-08-04T09:53:50Z)
- Multi-View Black-Box Physical Attacks on Infrared Pedestrian Detectors Using Adversarial Infrared Grid [0.0]
Infrared object detectors are vital in modern technological applications but are susceptible to adversarial attacks, posing significant security threats.
Previous studies using physical perturbations like light bulb arrays for white-box attacks, or hot and cold patches for black-box attacks, have proven impractical or limited in multi-view support.
We propose the Adversarial Infrared Grid (AdvGrid), which models perturbations in a grid format and uses a genetic algorithm for black-box optimization.
arXiv Detail & Related papers (2024-07-01T10:38:08Z)
- Physical Backdoor: Towards Temperature-based Backdoor Attacks in the Physical World [47.76657100827679]
We introduce two novel types of backdoor attacks on thermal infrared object detection (TIOD).
Key factors influencing trigger design include temperature, size, material, and concealment.
In the digital realm, we evaluate our approach using benchmark datasets for TIOD, achieving an Attack Success Rate (ASR) of up to 98.21%.
arXiv Detail & Related papers (2024-04-30T10:03:26Z)
- Adversarial Infrared Curves: An Attack on Infrared Pedestrian Detectors in the Physical World [0.0]
Existing approaches, like white-box infrared attacks using bulb boards and QR suits, lack realism and stealthiness.
We propose Adversarial Infrared Curves (AdvIC) to bridge these gaps.
Our experiments confirm AdvIC's effectiveness, achieving 94.8% and 67.2% attack success rates for digital and physical attacks, respectively.
arXiv Detail & Related papers (2023-12-21T12:21:57Z)
- Adversarial Infrared Blocks: A Multi-view Black-box Attack to Thermal Infrared Detectors in Physical World [4.504479592538401]
We propose a novel physical attack called adversarial infrared blocks (AdvIB).
By optimizing the physical parameters of the adversarial infrared blocks, this method can execute a stealthy black-box attack on thermal imaging systems from various angles.
For stealthiness, the adversarial infrared block is attached to the inside of clothing.
arXiv Detail & Related papers (2023-04-21T02:53:56Z)
- Physically Adversarial Infrared Patches with Learnable Shapes and Locations [1.1172382217477126]
We propose a physically feasible infrared attack method called "adversarial infrared patches".
Considering the imaging mechanism of infrared cameras by capturing objects' thermal radiation, adversarial infrared patches conduct attacks by attaching a patch of thermal insulation materials on the target object to manipulate its thermal distribution.
We verify adversarial infrared patches in different object detection tasks with various object detectors.
arXiv Detail & Related papers (2023-03-24T09:11:36Z)
- Guidance Through Surrogate: Towards a Generic Diagnostic Attack [101.36906370355435]
We develop a guided mechanism to avoid local minima during attack optimization, leading to a novel attack dubbed Guided Projected Gradient Attack (G-PGA).
Our modified attack does not require random restarts, large number of attack iterations or search for an optimal step-size.
More than an effective attack, G-PGA can be used as a diagnostic tool to reveal elusive robustness due to gradient masking in adversarial defenses.
arXiv Detail & Related papers (2022-12-30T18:45:23Z)
- HOTCOLD Block: Fooling Thermal Infrared Detectors with a Novel Wearable Design [60.97064635095259]
HotCold Block is a novel physical attack for infrared detectors that hides persons by utilizing the wearable Warming Paste and Cooling Paste.
By attaching these readily available temperature-controlled materials to the body, HotCold Block evades human eyes efficiently.
arXiv Detail & Related papers (2022-12-12T05:23:11Z)
- Adversarial Color Projection: A Projector-based Physical Attack to DNNs [3.9477796725601872]
We propose a black-box projector-based physical attack, referred to as adversarial color projection (AdvCP).
We achieve an attack success rate of 97.60% on a subset of ImageNet, while in the physical environment, we attain an attack success rate of 100%.
When attacking advanced DNNs, experimental results show that our method can achieve more than 85% attack success rate.
arXiv Detail & Related papers (2022-09-19T12:27:32Z)
- Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon [79.33449311057088]
We study a new type of optical adversarial examples, in which the perturbations are generated by a very common natural phenomenon, shadow.
We extensively evaluate the effectiveness of this new attack on both simulated and real-world environments.
arXiv Detail & Related papers (2022-03-08T02:40:18Z)
- AdvMind: Inferring Adversary Intent of Black-Box Attacks [66.19339307119232]
We present AdvMind, a new class of estimation models that infer the adversary intent of black-box adversarial attacks in a robust manner.
On average AdvMind detects the adversary intent with over 75% accuracy after observing less than 3 query batches.
arXiv Detail & Related papers (2020-06-16T22:04:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.