Understanding Impacts of Electromagnetic Signal Injection Attacks on Object Detection
- URL: http://arxiv.org/abs/2407.16327v1
- Date: Tue, 23 Jul 2024 09:22:06 GMT
- Title: Understanding Impacts of Electromagnetic Signal Injection Attacks on Object Detection
- Authors: Youqian Zhang, Chunxi Yang, Eugene Y. Fu, Qinhong Jiang, Chen Yan, Sze-Yiu Chau, Grace Ngai, Hong-Va Leong, Xiapu Luo, Wenyuan Xu
- Abstract summary: Images captured by image sensors may be affected by various factors in real applications, including cyber-physical attacks. This paper quantifies and analyzes the impacts of such attacks on object detection models in practice.
- Score: 33.819549876354515
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Object detection can localize and identify objects in images, and it is extensively employed in critical multimedia applications such as security surveillance and autonomous driving. Despite the success of existing object detection models, they are often evaluated in ideal scenarios where captured images guarantee the accurate and complete representation of the detecting scenes. However, images captured by image sensors may be affected by different factors in real applications, including cyber-physical attacks. In particular, attackers can exploit hardware properties within the systems to inject electromagnetic interference so as to manipulate the images. Such attacks can cause noisy or incomplete information about the captured scene, leading to incorrect detection results, potentially granting attackers malicious control over critical functions of the systems. This paper presents a research work that comprehensively quantifies and analyzes the impacts of such attacks on state-of-the-art object detection models in practice. It also sheds light on the underlying reasons for the incorrect detection outcomes.
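As a rough illustration of the manipulation the abstract describes (not the authors' actual attack or hardware setup), the sketch below overlays periodic horizontal color strips on an image, mimicking the striped noise that electromagnetic interference can induce during sensor readout. The strip period, width, and amplitude are hypothetical parameters chosen only for illustration.

```python
import numpy as np

def inject_emi_strips(image: np.ndarray, period: int = 16, width: int = 4,
                      amplitude: float = 80.0, seed: int = 0) -> np.ndarray:
    """Overlay periodic horizontal color strips on an HxWx3 uint8 image.

    This only mimics the visual symptom (striped color noise) often attributed
    to electromagnetic interference during readout; it does not model any
    specific hardware or the paper's attack.
    """
    rng = np.random.default_rng(seed)
    corrupted = image.astype(np.float32)
    for row_start in range(0, image.shape[0], period):
        # Shift a band of rows by a random per-channel offset.
        offset = rng.uniform(-amplitude, amplitude, size=3)
        corrupted[row_start:row_start + width] += offset
    return np.clip(corrupted, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    clean = (np.random.default_rng(1).uniform(0, 255, (480, 640, 3))
             .astype(np.uint8))  # stand-in for a captured frame
    attacked = inject_emi_strips(clean)
    # Comparing a detector's outputs on `attacked` vs. `clean` would expose the
    # kind of degradation (missed or spurious boxes) the paper quantifies.
    print("mean absolute pixel change:",
          np.abs(attacked.astype(int) - clean.astype(int)).mean())
```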
Related papers
- Hardware faults that matter: Understanding and Estimating the safety impact of hardware faults on object detection DNNs [3.906089726778615]
Object detection neural network models need to perform reliably in highly dynamic and safety-critical environments like automated driving or robotics.
Standard metrics based on average precision produce model vulnerability estimates at the object level rather than at an image level.
We propose a new metric, IVMOD (Image-wise Metric for Object Detection), to quantify vulnerability based on incorrect image-wise object detections (a minimal sketch of one such image-wise check follows this entry).
arXiv Detail & Related papers (2022-09-07T15:27:09Z)
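The entry above only names the metric; as a hedged illustration (not the paper's exact IVMOD formula), the following flags an image as incorrectly detected when any ground-truth box is missed or any prediction is unmatched at a chosen IoU threshold, which is the general flavor of an image-level vulnerability check. The boxes and threshold are illustrative.

```python
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def image_is_incorrect(preds: List[Box], gts: List[Box], thr: float = 0.5) -> bool:
    """Flag the whole image as wrong if any ground truth is missed or any
    prediction is spurious -- an image-level notion of detection failure,
    illustrative only and not the paper's IVMOD definition."""
    missed = any(all(iou(g, p) < thr for p in preds) for g in gts)
    spurious = any(all(iou(p, g) < thr for g in gts) for p in preds)
    return missed or spurious

# Example: a well-aligned prediction vs. a completely misplaced one.
print(image_is_incorrect([(12, 12, 50, 50)], [(10, 10, 50, 50)]))      # False
print(image_is_incorrect([(200, 200, 240, 240)], [(10, 10, 50, 50)]))  # True
```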
- Adversarially-Aware Robust Object Detector [85.10894272034135]
We propose a Robust Detector (RobustDet) based on adversarially-aware convolution to disentangle gradients for model learning on clean and adversarial images.
Our model effectively disentangles gradients and significantly enhances detection robustness while maintaining detection ability on clean images.
arXiv Detail & Related papers (2022-07-13T13:59:59Z)
- ObjectSeeker: Certifiably Robust Object Detection against Patch Hiding Attacks via Patch-agnostic Masking [95.6347501381882]
Object detectors are found to be vulnerable to physical-world patch hiding attacks.
We propose ObjectSeeker as a framework for building certifiably robust object detectors.
arXiv Detail & Related papers (2022-02-03T19:34:25Z)
- Context-Aware Transfer Attacks for Object Detection [51.65308857232767]
We present a new approach to generate context-aware attacks for object detectors.
We show that by using the co-occurrence of objects and their relative locations and sizes as context information, we can successfully generate targeted mis-categorization attacks (a toy sketch of building such co-occurrence context follows this entry).
arXiv Detail & Related papers (2021-12-06T18:26:39Z)
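As a hedged sketch of the kind of context information the entry above refers to (not the authors' attack itself), the following builds a simple class co-occurrence matrix from per-image label lists; consulting such statistics is one way to judge whether a set of detections is contextually plausible. The class list and example annotations are made up for illustration.

```python
from itertools import combinations
import numpy as np

CLASSES = ["person", "car", "traffic light", "boat"]  # illustrative label set

def cooccurrence_matrix(image_labels):
    """Count how often each pair of classes appears in the same image."""
    idx = {name: i for i, name in enumerate(CLASSES)}
    mat = np.zeros((len(CLASSES), len(CLASSES)), dtype=int)
    for labels in image_labels:
        present = sorted({idx[label] for label in labels})
        for i, j in combinations(present, 2):
            mat[i, j] += 1
            mat[j, i] += 1
    return mat

# Toy annotations: street scenes pair cars with people and lights, never boats.
annotations = [
    ["person", "car", "traffic light"],
    ["car", "traffic light"],
    ["person", "car"],
]
print(cooccurrence_matrix(annotations))
# A "boat" detected next to a "traffic light" would have zero support here,
# the sort of contextual (in)consistency such attacks exploit or defenses check.
```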
- They See Me Rollin': Inherent Vulnerability of the Rolling Shutter in CMOS Image Sensors [21.5487020124302]
A camera's electronic rolling shutter can be exploited to inject fine-grained image disruptions.
We show how an adversary can modulate the laser to hide up to 75% of objects perceived by state-of-the-art detectors (a rough simulation of such row-band corruption follows this entry).
Our results indicate that rolling shutter attacks can substantially reduce the performance and reliability of vision-based intelligent systems.
arXiv Detail & Related papers (2021-01-25T11:14:25Z)
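To make the rolling-shutter idea above concrete, here is a hedged sketch (not the paper's laser setup): it saturates a contiguous band of rows, as a bright pulse synchronized to the readout might, and then reports which ground-truth boxes fall mostly inside the corrupted band. The band position, coverage threshold, and boxes are illustrative assumptions.

```python
import numpy as np

def saturate_rows(image: np.ndarray, start: int, height: int) -> np.ndarray:
    """Overwrite a horizontal band of rows with saturated pixels, mimicking a
    bright injected pulse captured during part of a rolling-shutter readout."""
    out = image.copy()
    out[start:start + height] = 255
    return out

def covered_boxes(boxes, start, height, min_cover=0.5):
    """Return boxes whose vertical extent lies at least `min_cover` inside the
    corrupted band -- a crude proxy for objects a detector may no longer see."""
    band_top, band_bottom = start, start + height
    hidden = []
    for (x1, y1, x2, y2) in boxes:
        overlap = max(0, min(y2, band_bottom) - max(y1, band_top))
        if overlap / max(y2 - y1, 1) >= min_cover:
            hidden.append((x1, y1, x2, y2))
    return hidden

frame = np.zeros((480, 640, 3), dtype=np.uint8)       # stand-in camera frame
attacked = saturate_rows(frame, start=180, height=120)
objects = [(100, 200, 180, 280), (400, 20, 470, 90)]  # illustrative boxes
print("boxes mostly inside the corrupted band:", covered_boxes(objects, 180, 120))
```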
- Exploring Adversarial Robustness of Multi-Sensor Perception Systems in Self Driving [87.3492357041748]
In this paper, we showcase practical susceptibilities of multi-sensor detection by placing an adversarial object on top of a host vehicle.
Our experiments demonstrate that successful attacks are primarily caused by easily corrupted image features.
Towards more robust multi-modal perception systems, we show that adversarial training with feature denoising can boost robustness to such attacks significantly.
arXiv Detail & Related papers (2021-01-17T21:15:34Z)
- No Need to Know Physics: Resilience of Process-based Model-free Anomaly Detection for Industrial Control Systems [95.54151664013011]
We present a novel framework to generate adversarial spoofing signals that violate physical properties of the system.
We analyze four anomaly detectors published at top security conferences.
arXiv Detail & Related papers (2020-12-07T11:02:44Z)
- Online Monitoring of Object Detection Performance During Deployment [6.166295570030645]
We introduce a cascaded neural network that monitors the performance of the object detector by predicting the quality of its mean average precision (mAP) on a sliding window of the input frames (a minimal sketch of such sliding-window monitoring follows this entry).
We evaluate our proposed approach using different combinations of autonomous driving datasets and object detectors.
arXiv Detail & Related papers (2020-11-16T07:01:43Z)
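As a hedged illustration of the sliding-window monitoring idea above (not the paper's cascaded network), the following keeps a fixed-length window of per-frame quality scores, here supplied by a placeholder scoring stream, and alerts when the windowed mean drops below a threshold. The window length, threshold, and score source are assumptions for illustration.

```python
from collections import deque

class DetectionQualityMonitor:
    """Track a per-frame detection-quality score over a sliding window and
    alert when the running mean falls below a threshold. The score here is a
    placeholder; the paper instead predicts mAP quality with a cascaded
    neural network."""

    def __init__(self, window: int = 30, threshold: float = 0.5):
        self.scores = deque(maxlen=window)
        self.threshold = threshold

    def update(self, frame_score: float) -> bool:
        """Add the latest frame's score and return True if quality is degraded."""
        self.scores.append(frame_score)
        mean = sum(self.scores) / len(self.scores)
        return mean < self.threshold

monitor = DetectionQualityMonitor(window=5, threshold=0.6)
stream = [0.9, 0.85, 0.8, 0.4, 0.35, 0.3, 0.25]  # simulated per-frame scores
for t, s in enumerate(stream):
    if monitor.update(s):
        print(f"frame {t}: detection quality degraded (windowed mean below 0.6)")
```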
- Understanding Object Detection Through An Adversarial Lens [14.976840260248913]
This paper presents a framework for analyzing and evaluating vulnerabilities of deep object detectors under an adversarial lens.
We demonstrate that the proposed framework can serve as a methodical benchmark for analyzing adversarial behaviors and risks in real-time object detection systems.
We conjecture that this framework can also serve as a tool to assess the security risks and the adversarial robustness of deep object detectors to be deployed in real-world applications.
arXiv Detail & Related papers (2020-07-11T18:41:47Z)
- GhostImage: Remote Perception Attacks against Camera-based Image Classification Systems [6.637193297008101]
In vision-based object classification systems, imaging sensors perceive the environment, and machine learning is then used to detect and classify objects for decision-making purposes.
We demonstrate how the perception domain can be remotely and unobtrusively exploited to enable an attacker to create spurious objects or alter an existing object.
arXiv Detail & Related papers (2020-01-21T21:58:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences of its use.