Adversarial Infrared Blocks: A Multi-view Black-box Attack to Thermal
Infrared Detectors in Physical World
- URL: http://arxiv.org/abs/2304.10712v4
- Date: Fri, 28 Jul 2023 16:37:07 GMT
- Title: Adversarial Infrared Blocks: A Multi-view Black-box Attack to Thermal
Infrared Detectors in Physical World
- Authors: Chengyin Hu, Weiwen Shi, Tingsong Jiang, Wen Yao, Ling Tian, Xiaoqian
Chen
- Abstract summary: We propose a novel physical attack called adversarial infrared blocks (AdvIB).
By optimizing the physical parameters of the adversarial infrared blocks, this method can execute a stealthy black-box attack on thermal imaging systems from various angles.
For stealthiness, the adversarial infrared block is attached to the inside of clothing, keeping it hidden from view.
- Score: 4.504479592538401
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Infrared imaging systems have a vast array of potential applications in
pedestrian detection and autonomous driving, and their safety performance is of
great concern. However, few studies have explored the safety of infrared
imaging systems in real-world settings. Previous research has used physical
perturbations such as small bulbs and thermal "QR codes" to attack infrared
imaging detectors, but such methods are highly visible and lack stealthiness.
Other researchers have used hot and cold blocks to deceive infrared imaging
detectors, but this method is limited in its ability to execute attacks from
various angles. To address these shortcomings, we propose a novel physical
attack called adversarial infrared blocks (AdvIB). By optimizing the physical
parameters of the adversarial infrared blocks, this method can execute a
stealthy black-box attack on thermal imaging systems from various angles. We
evaluate the proposed method based on its effectiveness, stealthiness, and
robustness. Our physical tests show that the proposed method achieves a success
rate of over 80% under most distance and angle conditions, validating its
effectiveness. For stealthiness, the adversarial infrared block is attached to
the inside of clothing, which keeps the perturbation hidden from view.
Additionally, we test the proposed method on advanced detectors, and
experimental results demonstrate an average attack success rate of 51.2%,
proving its robustness. Overall, our proposed AdvIB method offers a promising
avenue for conducting stealthy, effective and robust black-box attacks on
thermal imaging systems, with potential implications for real-world safety and
security applications.
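The abstract describes AdvIB as optimizing the physical parameters of the infrared blocks through black-box queries to the detector. A minimal sketch of that idea, assuming a hypothetical parameterization (one (x, y, w, h) tuple per block in normalized coordinates) and a toy stand-in for the detector's confidence score; the paper's actual parameterization and optimizer are not given in this summary:

```python
import random

# Hypothetical parameterization: each infrared block is (x, y, w, h) in
# normalized image coordinates; the attack searches for block placements
# that minimize the detector's confidence on the target person.
N_BLOCKS = 3

def random_params():
    return [random.random() for _ in range(4 * N_BLOCKS)]

def detector_confidence(params):
    # Placeholder for a black-box thermal detector query; here a toy
    # surrogate that rewards spreading the blocks across the target.
    xs = sorted(params[0::4])
    spread = sum(b - a for a, b in zip(xs, xs[1:]))
    return 1.0 - min(1.0, spread)

def black_box_attack(iters=500, seed=0):
    random.seed(seed)
    best = random_params()
    best_score = detector_confidence(best)
    for _ in range(iters):
        # Perturb one parameter and keep the change only if the
        # detector's confidence decreases (simple hill climbing).
        cand = best[:]
        i = random.randrange(len(cand))
        cand[i] = min(1.0, max(0.0, cand[i] + random.gauss(0, 0.1)))
        score = detector_confidence(cand)
        if score < best_score:
            best, best_score = cand, score
    return best, best_score
```

In the physical setting each `detector_confidence` call would correspond to photographing the rearranged blocks and re-running the detector, which is why query-efficient black-box optimizers matter here.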
Related papers
- Multi-View Black-Box Physical Attacks on Infrared Pedestrian Detectors Using Adversarial Infrared Grid [0.0]
Infrared object detectors are vital in modern technological applications but are susceptible to adversarial attacks, posing significant security threats.
Previous studies using physical perturbations like light bulb arrays for white-box attacks, or hot and cold patches for black-box attacks, have proven impractical or limited in multi-view support.
We propose the Adversarial Infrared Grid (AdvGrid), which models perturbations in a grid format and uses a genetic algorithm for black-box optimization.
arXiv Detail & Related papers (2024-07-01T10:38:08Z)
- Physical Backdoor: Towards Temperature-based Backdoor Attacks in the Physical World [47.76657100827679]
We introduce two novel types of backdoor attacks on thermal infrared object detection (TIOD).
Key factors influencing trigger design include temperature, size, material, and concealment.
In the digital realm, we evaluate our approach using benchmark datasets for TIOD, achieving an Attack Success Rate (ASR) of up to 98.21%.
arXiv Detail & Related papers (2024-04-30T10:03:26Z)
- Adversarial Infrared Geometry: Using Geometry to Perform Adversarial Attack against Infrared Pedestrian Detectors [0.0]
We propose a novel infrared physical attack termed Adversarial Infrared Geometry (AdvIG).
In digital attack experiments, line, triangle, and ellipse patterns achieve attack success rates of 93.1%, 86.8%, and 100.0%, respectively.
In physical attacks, the line, triangle, and ellipse achieve average attack success rates of 61.1%, 61.2%, and 96.2%, respectively.
arXiv Detail & Related papers (2024-03-06T12:55:21Z)
- Adversarial Infrared Curves: An Attack on Infrared Pedestrian Detectors in the Physical World [0.0]
Existing approaches, like white-box infrared attacks using bulb boards and QR suits, lack realism and stealthiness.
We propose Adversarial Infrared Curves (AdvIC) to bridge these gaps.
Our experiments confirm AdvIC's effectiveness, achieving 94.8% and 67.2% attack success rates for digital and physical attacks, respectively.
arXiv Detail & Related papers (2023-12-21T12:21:57Z)
- Unified Adversarial Patch for Cross-modal Attacks in the Physical World [11.24237636482709]
We propose a unified adversarial patch to fool visible and infrared object detectors at the same time via a single patch.
Considering different imaging mechanisms of visible and infrared sensors, our work focuses on modeling the shapes of adversarial patches.
Results show that our unified patch achieves Attack Success Rates (ASR) of 73.33% and 69.17% against visible and infrared detectors, respectively.
arXiv Detail & Related papers (2023-07-15T17:45:17Z)
- Physically Adversarial Infrared Patches with Learnable Shapes and Locations [1.1172382217477126]
We propose a physically feasible infrared attack method called "adversarial infrared patches"
Considering the imaging mechanism of infrared cameras by capturing objects' thermal radiation, adversarial infrared patches conduct attacks by attaching a patch of thermal insulation materials on the target object to manipulate its thermal distribution.
We verify adversarial infrared patches in different object detection tasks with various object detectors.
arXiv Detail & Related papers (2023-03-24T09:11:36Z)
- HOTCOLD Block: Fooling Thermal Infrared Detectors with a Novel Wearable Design [60.97064635095259]
HOTCOLD Block is a novel physical attack for infrared detectors that hides persons using the wearable Warming Paste and Cooling Paste.
By attaching these readily available temperature-controlled materials to the body, HOTCOLD Block evades human eyes efficiently.
arXiv Detail & Related papers (2022-12-12T05:23:11Z)
- Adversarially-Aware Robust Object Detector [85.10894272034135]
We propose a Robust Detector (RobustDet) based on adversarially-aware convolution to disentangle gradients for model learning on clean and adversarial images.
Our model effectively disentangles gradients and significantly enhances detection robustness while maintaining detection ability on clean images.
arXiv Detail & Related papers (2022-07-13T13:59:59Z)
- Exploring Adversarial Robustness of Multi-Sensor Perception Systems in Self Driving [87.3492357041748]
In this paper, we showcase practical susceptibilities of multi-sensor detection by placing an adversarial object on top of a host vehicle.
Our experiments demonstrate that successful attacks are primarily caused by easily corrupted image features.
Towards more robust multi-modal perception systems, we show that adversarial training with feature denoising can boost robustness to such attacks significantly.
arXiv Detail & Related papers (2021-01-17T21:15:34Z)
- Exploring Thermal Images for Object Detection in Underexposure Regions for Autonomous Driving [67.69430435482127]
Underexposure regions are vital to construct a complete perception of the surroundings for safe autonomous driving.
The availability of thermal cameras provides an essential alternative for exploring regions where other optical sensors fail to capture interpretable signals.
This work proposes a domain adaptation framework which employs a style transfer technique for transfer learning from visible spectrum images to thermal images.
arXiv Detail & Related papers (2020-06-01T09:59:09Z)
- Face Anti-Spoofing by Learning Polarization Cues in a Real-World Scenario [50.36920272392624]
Face anti-spoofing is the key to preventing security breaches in biometric recognition applications.
Deep learning methods using RGB and infrared images demand a large amount of training data for new attacks.
We present a face anti-spoofing method for real-world scenarios that automatically learns the physical characteristics in polarization images of a real face.
arXiv Detail & Related papers (2020-03-18T03:04:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.