Hardware faults that matter: Understanding and Estimating the safety
impact of hardware faults on object detection DNNs
- URL: http://arxiv.org/abs/2209.03225v1
- Date: Wed, 7 Sep 2022 15:27:09 GMT
- Title: Hardware faults that matter: Understanding and Estimating the safety
impact of hardware faults on object detection DNNs
- Authors: Syed Qutub, Florian Geissler, Yang Peng, Ralf Grafe, Michael
Paulitsch, Gereon Hinz, Alois Knoll
- Abstract summary: Object detection neural network models need to perform reliably in highly dynamic and safety-critical environments like automated driving or robotics.
Standard metrics based on average precision produce model vulnerability estimates at the object level rather than at an image level.
We propose a new metric, IVMOD (Image-wise Vulnerability Metric for Object Detection), to quantify vulnerability based on incorrect image-wise object detection.
- Score: 3.906089726778615
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Object detection neural network models need to perform reliably in highly
dynamic and safety-critical environments like automated driving or robotics.
Therefore, it is paramount to verify the robustness of the detection under
unexpected hardware faults like soft errors that can impact a system's
perception module. Standard metrics based on average precision produce model
vulnerability estimates at the object level rather than at an image level. As
we show in this paper, this does not provide an intuitive or representative
indicator of the safety-related impact of silent data corruption caused by bit
flips in the underlying memory but can lead to an over- or underestimation of
typical fault-induced hazards. With an eye towards safety-related real-time
applications, we propose a new metric IVMOD (Image-wise Vulnerability Metric
for Object Detection) to quantify vulnerability based on an incorrect
image-wise object detection due to false positive (FP) or false negative (FN)
objects, combined with a severity analysis. The evaluation of several
representative object detection models shows that even a single bit flip can
lead to a severe silent data corruption event with potentially critical safety
implications, with, e.g., well over 100 FPs generated, or up to
approx. 90% of true positives (TPs) lost in an image. Furthermore, with a
single stuck-at-1 fault, an entire sequence of images can be affected, causing
temporally persistent ghost detections that can be mistaken for actual objects
(covering up to approx. 83% of the image). At the same time, actual objects in the
scene are continuously missed (up to approx. 64% of TPs are lost). Our work
establishes a detailed understanding of the safety-related vulnerability of
such critical workloads against hardware faults.
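To make the image-wise vulnerability criterion above concrete, the following is a minimal Python sketch, not the authors' reference implementation: the IoU threshold of 0.5, the greedy matching, and all function names are assumptions for illustration. It flags an image as corrupted when a faulty inference run adds FP objects or loses TP objects relative to the fault-free run on the same image, and shows how a single bit flip in a float32 weight could be injected for such an experiment.

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def match_counts(detections, ground_truth, iou_thr=0.5):
    """Greedy matching of detections to ground truth; returns (TP, FP, FN)."""
    matched, tp = set(), 0
    for det in detections:
        best_j, best_iou = None, iou_thr
        for j, gt in enumerate(ground_truth):
            if j not in matched and iou(det, gt) >= best_iou:
                best_j, best_iou = j, iou(det, gt)
        if best_j is not None:
            matched.add(best_j)
            tp += 1
    return tp, len(detections) - tp, len(ground_truth) - tp

def image_is_corrupted(faulty_dets, fault_free_dets, ground_truth):
    """Image-wise criterion: the faulty run adds FPs or loses TPs
    compared to the fault-free run on the same image."""
    tp_f, fp_f, _ = match_counts(faulty_dets, ground_truth)
    tp_0, fp_0, _ = match_counts(fault_free_dets, ground_truth)
    return fp_f > fp_0 or tp_f < tp_0

def flip_bit(weight, bit):
    """Flip one bit (0..31) of a float32 value, emulating a single soft error."""
    as_uint = np.float32(weight).view(np.uint32)
    return float((as_uint ^ np.uint32(1 << bit)).view(np.float32))
```

A fault-injection campaign in this spirit would apply flip_bit to a randomly selected weight or neuron output, rerun inference over an image sequence, and aggregate how many images image_is_corrupted flags, together with a severity measure such as the number of FPs or the image area they cover.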
Related papers
- Uncertainty Estimation for 3D Object Detection via Evidential Learning [63.61283174146648]
We introduce a framework for quantifying uncertainty in 3D object detection by leveraging an evidential learning loss on Bird's Eye View representations in the 3D detector.
We demonstrate both the efficacy and importance of these uncertainty estimates for identifying out-of-distribution scenes, poorly localized objects, and missing (false negative) detections.
arXiv Detail & Related papers (2024-10-31T13:13:32Z)
- Understanding Impacts of Electromagnetic Signal Injection Attacks on Object Detection [33.819549876354515]
This paper quantifies and analyzes the impacts of cyber-physical attacks on object detection models in practice.
Images captured by image sensors may be affected by different factors in real applications, including cyber-physical attacks.
arXiv Detail & Related papers (2024-07-23T09:22:06Z)
- Integrity Monitoring of 3D Object Detection in Automated Driving Systems using Raw Activation Patterns and Spatial Filtering [12.384452095533396]
Deep neural network (DNN) models are widely used for object detection in automated driving systems (ADS).
Yet, such models are prone to errors which can have serious safety implications.
Introspection and self-assessment models that aim to detect such errors are therefore of paramount importance for the safe deployment of ADS.
arXiv Detail & Related papers (2024-05-13T10:03:03Z)
- Visual Context-Aware Person Fall Detection [52.49277799455569]
We present a segmentation pipeline to semi-automatically separate individuals and objects in images.
Background objects such as beds, chairs, or wheelchairs can challenge fall detection systems, leading to false positive alarms.
We demonstrate that object-specific contextual transformations during training effectively mitigate this challenge.
arXiv Detail & Related papers (2024-04-11T19:06:36Z)
- Robo3D: Towards Robust and Reliable 3D Perception against Corruptions [58.306694836881235]
We present Robo3D, the first comprehensive benchmark for probing the robustness of 3D detectors and segmentors under out-of-distribution scenarios.
We consider eight corruption types stemming from severe weather conditions, external disturbances, and internal sensor failure.
We propose a density-insensitive training framework along with a simple flexible voxelization strategy to enhance the model resiliency.
arXiv Detail & Related papers (2023-03-30T17:59:17Z)
- Adversarially-Aware Robust Object Detector [85.10894272034135]
We propose a Robust Detector (RobustDet) based on adversarially-aware convolution to disentangle gradients for model learning on clean and adversarial images.
Our model effectively disentangles gradients and significantly enhances the detection robustness while maintaining the detection ability on clean images.
arXiv Detail & Related papers (2022-07-13T13:59:59Z)
- On the Robustness of Quality Measures for GANs [136.18799984346248]
This work evaluates the robustness of quality measures of generative models such as Inception Score (IS) and Fréchet Inception Distance (FID).
We show that such metrics can also be manipulated by additive pixel perturbations.
arXiv Detail & Related papers (2022-01-31T06:43:09Z)
- Towards a Safety Case for Hardware Fault Tolerance in Convolutional Neural Networks Using Activation Range Supervision [1.7968112116887602]
Convolutional neural networks (CNNs) have become an established part of numerous safety-critical computer vision applications.
We build a prototypical safety case for CNNs by demonstrating that range supervision represents a highly reliable fault detector.
We explore novel, non-uniform range restriction methods that effectively suppress the probability of silent data corruptions and uncorrectable errors (see the illustrative range-restriction sketch after this list).
arXiv Detail & Related papers (2021-08-16T11:13:55Z)
- Exploring Adversarial Robustness of Multi-Sensor Perception Systems in Self Driving [87.3492357041748]
In this paper, we showcase practical susceptibilities of multi-sensor detection by placing an adversarial object on top of a host vehicle.
Our experiments demonstrate that successful attacks are primarily caused by easily corrupted image features.
Towards more robust multi-modal perception systems, we show that adversarial training with feature denoising can boost robustness to such attacks significantly.
arXiv Detail & Related papers (2021-01-17T21:15:34Z)
- Online Monitoring of Object Detection Performance During Deployment [6.166295570030645]
We introduce a cascaded neural network that monitors the performance of the object detector by predicting the quality of its mean average precision (mAP) on a sliding window of the input frames.
We evaluate our proposed approach using different combinations of autonomous driving datasets and object detectors.
arXiv Detail & Related papers (2020-11-16T07:01:43Z)
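As a loose illustration of the activation-range-supervision idea summarized in the safety-case entry above, the following sketch assumes per-channel bounds and PyTorch module names that are not taken from that paper: a protective layer clamps activations to bounds profiled on fault-free data so that fault-inflated values cannot propagate to later layers.

```python
import torch
import torch.nn as nn

class RangeRestriction(nn.Module):
    """Clamp activations to per-channel bounds observed on fault-free data.
    Out-of-range values, a typical symptom of memory bit flips, are truncated
    before they can propagate to later layers."""

    def __init__(self, lower, upper):
        super().__init__()
        self.register_buffer("lower", torch.as_tensor(lower, dtype=torch.float32))
        self.register_buffer("upper", torch.as_tensor(upper, dtype=torch.float32))

    def forward(self, x):
        # Broadcast the per-channel bounds over batch and spatial dimensions.
        return torch.clamp(x,
                           self.lower.view(1, -1, 1, 1),
                           self.upper.view(1, -1, 1, 1))
```

In a guarded model, such a layer would be inserted after selected activations, e.g. nn.Sequential(conv, nn.ReLU(), RangeRestriction(lows, highs)), with the bounds recorded per channel on clean validation data.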