Adversarial Attacks on Event-Based Pedestrian Detectors: A Physical Approach
- URL: http://arxiv.org/abs/2503.00377v1
- Date: Sat, 01 Mar 2025 07:06:52 GMT
- Title: Adversarial Attacks on Event-Based Pedestrian Detectors: A Physical Approach
- Authors: Guixu Lin, Muyao Niu, Qingtian Zhu, Zhengwei Yin, Zhuoxiao Li, Shengfeng He, Yinqiang Zheng
- Abstract summary: This study is the first to explore physical adversarial attacks on event-driven pedestrian detectors. We develop an end-to-end adversarial framework in the digital domain, framing the design of adversarial clothing textures as a 2D texture optimization problem. Our results demonstrate that the textures identified in the digital domain possess strong adversarial properties.
- Score: 44.952828512869644
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Event cameras, known for their low latency and high dynamic range, show great potential in pedestrian detection applications. However, while recent research has primarily focused on improving detection accuracy, the robustness of event-based visual models against physical adversarial attacks has received limited attention. For example, adversarial physical objects, such as specific clothing patterns or accessories, can exploit inherent vulnerabilities in these systems, leading to misdetections or misclassifications. This study is the first to explore physical adversarial attacks on event-driven pedestrian detectors, specifically investigating whether certain clothing patterns worn by pedestrians can cause these detectors to fail, effectively rendering them unable to detect the person. To address this, we developed an end-to-end adversarial framework in the digital domain, framing the design of adversarial clothing textures as a 2D texture optimization problem. By crafting an effective adversarial loss function, the framework iteratively generates optimal textures through backpropagation. Our results demonstrate that the textures identified in the digital domain possess strong adversarial properties. Furthermore, we translated these digitally optimized textures into physical clothing and tested them in real-world scenarios, successfully demonstrating that the designed textures significantly degrade the performance of event-based pedestrian detection models. This work highlights the vulnerability of such models to physical adversarial attacks.
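The abstract describes the core framework as iteratively updating a clothing texture through backpropagation to minimize an adversarial loss. A minimal sketch of that loop is below, with a toy logistic score standing in for the event-based detector (the paper's actual pipeline, which renders textures and simulates events, is omitted); `detector_score`, `w`, and all hyperparameters are illustrative assumptions, not the paper's components.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def detector_score(texture, w):
    """Toy stand-in for the detector's pedestrian-confidence score:
    a logistic score over the flattened texture values."""
    return sigmoid(sum(wi * ti for wi, ti in zip(w, texture)))

def optimize_texture(w, steps=200, lr=0.5, seed=0):
    """Iteratively lower the detector's confidence by gradient
    descent on the texture (the adversarial objective)."""
    rng = random.Random(seed)
    texture = [rng.random() for _ in w]
    for _ in range(steps):
        s = detector_score(texture, w)
        # d(score)/d(t_i) = s * (1 - s) * w_i for a logistic score
        g = s * (1.0 - s)
        texture = [min(1.0, max(0.0, ti - lr * g * wi))  # clip to printable [0, 1]
                   for ti, wi in zip(texture, w)]
    return texture

rng = random.Random(42)
w = [rng.gauss(0.0, 1.0) for _ in range(64)]   # toy detector weights
t0 = [rng.random() for _ in range(64)]         # unoptimized texture
t_adv = optimize_texture(w)
print(round(detector_score(t0, w), 3), round(detector_score(t_adv, w), 6))
```

In this toy setting the optimized texture drives the surrogate confidence close to zero; in the paper, the analogous descent is run through the detector itself via backpropagation.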
Related papers
- Out-of-Bounding-Box Triggers: A Stealthy Approach to Cheat Object Detectors [18.23151352064318]
This paper introduces an inconspicuous adversarial trigger that operates outside the bounding boxes, rendering the object undetectable to the model.
We further enhance this approach by proposing the Feature Guidance (FG) technique and the Universal Auto-PGD (UAPGD) optimization strategy for crafting high-quality triggers.
The effectiveness of our method is validated through extensive empirical testing, demonstrating its high performance in both digital and physical environments.
arXiv Detail & Related papers (2024-10-14T02:15:48Z)
- Unified Adversarial Patch for Cross-modal Attacks in the Physical World [11.24237636482709]
We propose a unified adversarial patch to fool visible and infrared object detectors at the same time via a single patch.
Considering different imaging mechanisms of visible and infrared sensors, our work focuses on modeling the shapes of adversarial patches.
Results show that our unified patch achieves an Attack Success Rate (ASR) of 73.33% and 69.17% against visible and infrared object detectors, respectively.
arXiv Detail & Related papers (2023-07-15T17:45:17Z)
- Spatial-Frequency Discriminability for Revealing Adversarial Perturbations [53.279716307171604]
Vulnerability of deep neural networks to adversarial perturbations has been widely perceived in the computer vision community.
Current algorithms typically detect adversarial patterns through discriminative decomposition for natural and adversarial data.
We propose a discriminative detector relying on a spatial-frequency Krawtchouk decomposition.
arXiv Detail & Related papers (2023-05-18T10:18:59Z)
- Differential Evolution based Dual Adversarial Camouflage: Fooling Human Eyes and Object Detectors [0.190365714903665]
We propose a dual adversarial camouflage (DE_DAC) method, composed of two stages to fool human eyes and object detectors simultaneously.
In the first stage, we optimize the global texture to minimize the discrepancy between the rendered object and the scene images.
In the second stage, we design three loss functions to optimize the local texture, making object detectors ineffective.
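The two stages above can be compressed, for illustration only, into one weighted objective and optimized with a minimal differential evolution loop (the optimizer named in the title). The `scene` colors, detector weights `w`, and the weighting are invented stand-ins, not the paper's rendered-scene discrepancy or its three detection losses.

```python
import random

def de_minimize(loss, dim, pop=20, gens=100, F=0.5, CR=0.9, seed=0):
    """Minimal differential evolution (DE/rand/1/bin) over [0, 1]^dim."""
    rng = random.Random(seed)
    X = [[rng.random() for _ in range(dim)] for _ in range(pop)]
    fit = [loss(x) for x in X]
    for _ in range(gens):
        for i in range(pop):
            a, b, c = rng.sample([j for j in range(pop) if j != i], 3)
            j_rand = rng.randrange(dim)          # force at least one mutated gene
            trial = [
                min(1.0, max(0.0, X[a][j] + F * (X[b][j] - X[c][j])))
                if (rng.random() < CR or j == j_rand) else X[i][j]
                for j in range(dim)
            ]
            f = loss(trial)
            if f < fit[i]:                       # greedy one-to-one selection
                X[i], fit[i] = trial, f
    best = min(range(pop), key=fit.__getitem__)
    return X[best], fit[best]

# Toy two-term camouflage objective: stay close to the scene's mean
# color (fooling human eyes) while pushing a linear "detector score"
# down (fooling the detector). Both terms are stand-ins.
scene = [0.3] * 8
w = [0.7, -0.2, 0.5, 0.1, -0.4, 0.6, 0.3, -0.1]

def camouflage_loss(t):
    blend = sum((ti - si) ** 2 for ti, si in zip(t, scene))
    detect = sum(wi * ti for wi, ti in zip(w, t))
    return blend + 0.5 * max(0.0, detect)        # weighted combination

t_best, f_best = de_minimize(camouflage_loss, dim=8)
print(round(f_best, 4))
```

Because differential evolution needs only loss evaluations, no gradients, it can optimize through the non-differentiable rendering and detection pipeline the paper targets.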
arXiv Detail & Related papers (2022-10-17T09:07:52Z)
- Adversarially-Aware Robust Object Detector [85.10894272034135]
We propose a Robust Detector (RobustDet) based on adversarially-aware convolution to disentangle gradients for model learning on clean and adversarial images.
Our model effectively disentangles gradients and significantly enhances detection robustness while maintaining detection ability on clean images.
arXiv Detail & Related papers (2022-07-13T13:59:59Z)
- Empirical Evaluation of Physical Adversarial Patch Attacks Against Overhead Object Detection Models [2.2588953434934416]
Adversarial patches are images designed to fool otherwise well-performing neural network-based computer vision models.
Recent work has demonstrated that these attacks can successfully transfer to the physical world.
We further test the efficacy of adversarial patch attacks in the physical world under more challenging conditions.
arXiv Detail & Related papers (2022-06-25T20:05:11Z)
- On the Real-World Adversarial Robustness of Real-Time Semantic Segmentation Models for Autonomous Driving [59.33715889581687]
The existence of real-world adversarial examples (commonly in the form of patches) poses a serious threat for the use of deep learning models in safety-critical computer vision tasks.
This paper presents an evaluation of the robustness of semantic segmentation models when attacked with different types of adversarial patches.
A novel loss function is proposed to improve the capabilities of attackers in inducing a misclassification of pixels.
arXiv Detail & Related papers (2022-01-05T22:33:43Z)
- Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks [62.87459235819762]
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular SS models by testing the effects of both digital and real-world adversarial patches.
arXiv Detail & Related papers (2021-08-13T11:49:09Z)
- Exploring Adversarial Robustness of Multi-Sensor Perception Systems in Self Driving [87.3492357041748]
In this paper, we showcase practical susceptibilities of multi-sensor detection by placing an adversarial object on top of a host vehicle.
Our experiments demonstrate that successful attacks are primarily caused by easily corrupted image features.
Towards more robust multi-modal perception systems, we show that adversarial training with feature denoising can boost robustness to such attacks significantly.
arXiv Detail & Related papers (2021-01-17T21:15:34Z)
- CCA: Exploring the Possibility of Contextual Camouflage Attack on Object Detection [16.384831731988204]
We propose a contextual camouflage attack (CCA) algorithm to influence the performance of object detectors.
In this paper, we use an evolutionary search strategy and adversarial machine learning in interactions with a photo-realistic simulated environment.
The proposed camouflages are validated as effective against most state-of-the-art object detectors.
arXiv Detail & Related papers (2020-08-19T06:16:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.