Simulator-based explanation and debugging of hazard-triggering events in
DNN-based safety-critical systems
- URL: http://arxiv.org/abs/2204.00480v1
- Date: Fri, 1 Apr 2022 14:35:56 GMT
- Title: Simulator-based explanation and debugging of hazard-triggering events in
DNN-based safety-critical systems
- Authors: Hazem Fahmy, Fabrizio Pastore, Lionel Briand
- Abstract summary: Deep Neural Networks (DNNs) are used in safety-critical systems.
Engineers visually inspect all error-inducing images to determine common characteristics among them.
Such characteristics correspond to hazard-triggering events that are essential inputs for safety analysis.
We propose SEDE, a technique that generates readable descriptions for commonalities in error-inducing, real-world images.
- Score: 1.1240669509034296
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When Deep Neural Networks (DNNs) are used in safety-critical systems,
engineers should determine the safety risks associated with DNN errors observed
during testing. For DNNs processing images, engineers visually inspect all
error-inducing images to determine common characteristics among them. Such
characteristics correspond to hazard-triggering events (e.g., low illumination)
that are essential inputs for safety analysis. Though informative, such
activity is expensive and error-prone.
To support such safety analysis practices, we propose SEDE, a technique that
generates readable descriptions for commonalities in error-inducing, real-world
images and improves the DNN through effective retraining. SEDE leverages the
availability of simulators, which are commonly used for cyber-physical systems.
SEDE relies on genetic algorithms to drive simulators towards the generation of
images that are similar to error-inducing, real-world images in the test set;
it then leverages rule learning algorithms to derive expressions that capture
commonalities in terms of simulator parameter values. The derived expressions
are then used to generate additional images to retrain and improve the DNN.
With DNNs performing in-car sensing tasks, SEDE successfully characterized
hazard-triggering events leading to a DNN accuracy drop. Also, SEDE enabled
retraining to achieve significant improvements in DNN accuracy, up to 18
percentage points.
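The abstract describes two main ingredients: a genetic search that drives simulator parameters towards images resembling a given error-inducing one, and a rule learner that turns the resulting parameter values into readable expressions. Below is a minimal, self-contained sketch of that idea; the stand-in simulator, the parameter names, the closeness threshold, and the use of a decision tree as the rule learner are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a SEDE-like pipeline: (1) genetic search over simulator parameters,
# (2) rule learning over the parameter values. All names are hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
PARAM_NAMES = ["illumination", "head_yaw", "camera_noise"]  # assumed simulator parameters

def simulate(params):
    # Stand-in for a real simulator: "renders" an 8x8 image from the parameters.
    illum, yaw, noise = params
    image = np.full((8, 8), illum)
    image[:, : 4 + int(2 * yaw)] *= 0.5                        # crude head-pose/shadow effect
    return image + rng.normal(0.0, abs(noise) + 1e-6, image.shape)

def fitness(params, target_image):
    # Higher is better: negative pixel-wise distance to the error-inducing image.
    return -float(np.linalg.norm(simulate(params) - target_image))

# Stand-in for an error-inducing, real-world image: here, a low-illumination rendering.
target = simulate(np.array([0.1, 1.0, 0.05]))

# Step 1: genetic search over simulator parameters (selection, crossover, mutation).
pop = rng.uniform([0.0, -1.0, 0.0], [1.0, 1.0, 0.2], size=(30, 3))
for _ in range(40):
    scores = np.array([fitness(p, target) for p in pop])
    parents = pop[np.argsort(scores)[-10:]]                     # keep the 10 fittest
    children = (parents[rng.integers(0, 10, 30)] +
                parents[rng.integers(0, 10, 30)]) / 2.0         # crossover by averaging
    pop = children + rng.normal(0.0, 0.05, children.shape)      # mutation

# Step 2: rule learning. Label each parameter vector by whether its rendering is
# close to the error-inducing image (the -1.0 threshold is an assumption), then
# extract human-readable expressions over the simulator parameters.
X = np.vstack([pop, rng.uniform([0.0, -1.0, 0.0], [1.0, 1.0, 0.2], size=(30, 3))])
y = np.array([fitness(p, target) > -1.0 for p in X])
tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=PARAM_NAMES))
```

In SEDE, the learned expressions (here, the printed decision-tree rules) would then be used to configure the simulator and generate additional images for retraining.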
Related papers
- Search-based DNN Testing and Retraining with GAN-enhanced Simulations [2.362412515574206]
In safety-critical systems, Deep Neural Networks (DNNs) are becoming a key component for computer vision tasks.
We propose to combine meta-heuristic search, used to explore the input space using simulators, with Generative Adversarial Networks (GANs) to transform the data generated by simulators into realistic input images.
arXiv Detail & Related papers (2024-06-19T09:05:16Z)
- Scaling #DNN-Verification Tools with Efficient Bound Propagation and Parallel Computing [57.49021927832259]
Deep Neural Networks (DNNs) are powerful tools that have shown extraordinary results in many scenarios.
However, their intricate designs and lack of transparency raise safety concerns when applied in real-world applications.
Formal Verification (FV) of DNNs has emerged as a valuable solution to provide provable guarantees on the safety aspect.
arXiv Detail & Related papers (2023-12-10T13:51:25Z)
- Special Session: Approximation and Fault Resiliency of DNN Accelerators [0.9126382223122612]
This paper explores the approximation and fault resiliency of Deep Neural Network accelerators.
We propose to use approximate (AxC) arithmetic circuits to emulate errors in hardware without performing fault injection on the DNN.
We also propose a fine-grain analysis of fault resiliency by examining fault propagation and masking in networks.
arXiv Detail & Related papers (2023-05-31T19:27:45Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
- Black-box Safety Analysis and Retraining of DNNs based on Feature Extraction and Clustering [0.9590956574213348]
We propose SAFE, a black-box approach to automatically characterize the root causes of DNN errors.
It relies on a transfer learning model pre-trained on ImageNet to extract the features from error-inducing images.
It then applies a density-based clustering algorithm to detect arbitrary shaped clusters of images modeling plausible causes of error.
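(A minimal, illustrative code sketch of this feature-extraction-and-clustering pipeline is included after this list.)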
arXiv Detail & Related papers (2022-01-13T17:02:57Z)
- FitAct: Error Resilient Deep Neural Networks via Fine-Grained Post-Trainable Activation Functions [0.05249805590164901]
Deep neural networks (DNNs) are increasingly being deployed in safety-critical systems such as personal healthcare devices and self-driving cars.
In this paper, we propose FitAct, a low-cost approach to enhance the error resilience of DNNs by deploying fine-grained post-trainable activation functions.
arXiv Detail & Related papers (2021-12-27T07:07:50Z)
- FAT: Training Neural Networks for Reliable Inference Under Hardware Faults [3.191587417198382]
We present a novel methodology called fault-aware training (FAT), which includes error modeling during neural network (NN) training, to make quantized neural networks (QNNs) resilient to specific fault models on the device.
FAT has been validated for numerous classification tasks including CIFAR10, GTSRB, SVHN and ImageNet.
arXiv Detail & Related papers (2020-11-11T16:09:39Z)
- Background Adaptive Faster R-CNN for Semi-Supervised Convolutional Object Detection of Threats in X-Ray Images [64.39996451133268]
We present a semi-supervised approach for threat recognition which we call Background Adaptive Faster R-CNN.
This approach is a training method for two-stage object detectors which uses Domain Adaptation methods from the field of deep learning.
Two domain discriminators, one for discriminating object proposals and one for image features, are adversarially trained to prevent encoding domain-specific information.
This can reduce threat detection false alarm rates by matching the statistics of extracted features from hand-collected backgrounds to real world data.
arXiv Detail & Related papers (2020-10-02T21:05:13Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
- GraN: An Efficient Gradient-Norm Based Detector for Adversarial and Misclassified Examples [77.99182201815763]
Deep neural networks (DNNs) are vulnerable to adversarial examples and other data perturbations.
GraN is a time- and parameter-efficient method that is easily adaptable to any DNN.
GraN achieves state-of-the-art performance on numerous problem set-ups.
arXiv Detail & Related papers (2020-04-20T10:09:27Z)
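For context, the SAFE entry above (Black-box Safety Analysis and Retraining of DNNs) describes a two-step pipeline: extract features from error-inducing images with an ImageNet-pre-trained backbone, then group them with density-based clustering. The following is a minimal sketch under stated assumptions; the ResNet-50 backbone, the "error_images/" folder, the .png glob, and the DBSCAN settings are illustrative choices, not the paper's exact configuration.

```python
# Sketch of a SAFE-style analysis: pre-trained feature extraction + DBSCAN clustering.
import torch
import torch.nn as nn
from pathlib import Path
from PIL import Image
from torchvision import models, transforms
from sklearn.cluster import DBSCAN

# ImageNet-pre-trained backbone with its classification head removed, so the
# forward pass returns one feature vector per image.
backbone = models.resnet50(weights="IMAGENET1K_V1")
backbone.fc = nn.Identity()
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# "error_images/" is a hypothetical folder holding the error-inducing test images.
paths = sorted(Path("error_images").glob("*.png"))
batch = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in paths])

with torch.no_grad():
    features = backbone(batch).numpy()

# Density-based clustering finds arbitrarily shaped clusters, each modelling a
# plausible common cause of error; label -1 marks unclustered (noise) images.
labels = DBSCAN(eps=5.0, min_samples=5).fit_predict(features)
for path, label in zip(paths, labels):
    print(label, path.name)
```

Each resulting cluster would then be inspected (or, in SEDE, reproduced in the simulator) to characterize a candidate hazard-triggering event.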