Black-box Safety Analysis and Retraining of DNNs based on Feature
Extraction and Clustering
- URL: http://arxiv.org/abs/2201.05077v2
- Date: Fri, 14 Jan 2022 14:12:01 GMT
- Title: Black-box Safety Analysis and Retraining of DNNs based on Feature
Extraction and Clustering
- Authors: Mohammed Oualid Attaoui, Hazem Fahmy, Fabrizio Pastore, and Lionel
Briand
- Abstract summary: We propose SAFE, a black-box approach to automatically characterize the root causes of DNN errors.
It relies on a transfer learning model pre-trained on ImageNet to extract the features from error-inducing images.
It then applies a density-based clustering algorithm to detect arbitrary shaped clusters of images modeling plausible causes of error.
- Score: 0.9590956574213348
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) have demonstrated superior performance over
classical machine learning to support many features in safety-critical systems.
Although DNNs are now widely used in such systems (e.g., self-driving cars),
there is limited progress regarding automated support for functional safety
analysis in DNN-based systems. For example, the identification of root causes
of errors, to enable both risk analysis and DNN retraining, remains an open
problem. In this paper, we propose SAFE, a black-box approach to automatically
characterize the root causes of DNN errors. SAFE relies on a transfer learning
model pre-trained on ImageNet to extract the features from error-inducing
images. It then applies a density-based clustering algorithm to detect
arbitrary shaped clusters of images modeling plausible causes of error. Last,
clusters are used to effectively retrain and improve the DNN. The black-box
nature of SAFE is motivated by our objective not to require changes or even
access to the DNN internals to facilitate adoption.
Experimental results show the superior ability of SAFE in identifying
different root causes of DNN errors based on case studies in the automotive
domain. It also yields significant improvements in DNN accuracy after
retraining, while saving significant execution time and memory when compared to
alternatives.
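The pipeline described in the abstract can be pictured with a short sketch: an ImageNet pre-trained backbone produces one feature vector per error-inducing image, and a density-based clustering algorithm groups those vectors into arbitrarily shaped clusters, each a candidate root cause. This is a minimal sketch under stated assumptions, not the authors' implementation: the backbone choice (VGG-16), the DBSCAN parameters, and the `error_images/` folder layout are illustrative.

```python
from pathlib import Path

import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.cluster import DBSCAN

# ImageNet pre-trained backbone used purely as a feature extractor; the DNN
# under test is never opened up, matching the black-box motivation above.
backbone = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
backbone.classifier = backbone.classifier[:-1]  # drop the final classification layer
backbone.eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(image_paths):
    """Return one feature vector per error-inducing image."""
    feats = []
    for p in image_paths:
        img = preprocess(Image.open(p).convert("RGB")).unsqueeze(0)
        feats.append(backbone(img).squeeze(0).numpy())
    return feats

# Hypothetical folder of inputs that the DNN under test got wrong.
error_images = sorted(Path("error_images").glob("*.png"))
features = extract_features(error_images)

# Density-based clustering finds arbitrarily shaped clusters; each cluster is a
# plausible root cause of error. eps and min_samples are placeholder values.
labels = DBSCAN(eps=0.5, min_samples=5, metric="cosine").fit_predict(features)

for path, label in zip(error_images, labels):
    print(label, path)  # label -1 marks noise points assigned to no cluster
```

The resulting clusters could then guide retraining, e.g., by adding further images related to each cluster to the training set, which is the final step the abstract describes.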
Related papers
- Special Session: Approximation and Fault Resiliency of DNN Accelerators [0.9126382223122612]
This paper explores the approximation and fault resiliency of Deep Neural Network accelerators.
We propose to use approximate (AxC) arithmetic circuits to emulate errors in hardware without performing fault injection on the DNN.
We also propose a fine-grain analysis of fault resiliency by examining fault propagation and masking in networks.
arXiv Detail & Related papers (2023-05-31T19:27:45Z)
- Fault-Aware Design and Training to Enhance DNNs Reliability with Zero-Overhead [67.87678914831477]
Deep Neural Networks (DNNs) enable a wide range of technological advancements.
Recent findings indicate that transient hardware faults may dramatically corrupt the model's predictions.
In this work, we propose to tackle the reliability issue both at training and model design time.
arXiv Detail & Related papers (2022-05-28T13:09:30Z)
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which could achieve high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
- Simulator-based explanation and debugging of hazard-triggering events in DNN-based safety-critical systems [1.1240669509034296]
Deep Neural Networks (DNNs) are used in safety-critical systems.
Engineers visually inspect all error-inducing images to determine common characteristics among them.
Such characteristics correspond to hazard-triggering events that are essential inputs for safety analysis.
We propose SEDE, a technique that generates readable descriptions for commonalities in error-inducing, real-world images.
arXiv Detail & Related papers (2022-04-01T14:35:56Z)
- FitAct: Error Resilient Deep Neural Networks via Fine-Grained Post-Trainable Activation Functions [0.05249805590164901]
Deep neural networks (DNNs) are increasingly being deployed in safety-critical systems such as personal healthcare devices and self-driving cars.
In this paper, we propose FitAct, a low-cost approach to enhance the error resilience of DNNs by deploying fine-grained post-trainable activation functions.
arXiv Detail & Related papers (2021-12-27T07:07:50Z)
- Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks [55.531896312724555]
Bayesian Neural Networks (BNNs) are robust and adept at handling adversarial attacks by incorporating randomness.
We create our BNN model, called BNN-DenseNet, by fusing Bayesian inference (i.e., variational Bayes) into the DenseNet architecture.
An adversarially-trained BNN outperforms its non-Bayesian, adversarially-trained counterpart in most experiments.
arXiv Detail & Related papers (2021-11-16T16:14:44Z)
- Deep Serial Number: Computational Watermarking for DNN Intellectual Property Protection [53.40245698216239]
DSN (Deep Serial Number) is a watermarking algorithm designed specifically for deep neural networks (DNNs).
Inspired by serial numbers in safeguarding conventional software IP, we propose the first implementation of serial number embedding within DNNs.
arXiv Detail & Related papers (2020-11-17T21:42:40Z)
- Continuous Safety Verification of Neural Networks [1.7056768055368385]
This paper considers approaches for transferring results established in a previous DNN safety verification problem to a modified problem setting.
The overall concept is evaluated on a $1/10$-scale vehicle equipped with a DNN controller that determines the visual waypoint from the perceived image.
arXiv Detail & Related papers (2020-10-12T13:28:04Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
- GraN: An Efficient Gradient-Norm Based Detector for Adversarial and Misclassified Examples [77.99182201815763]
Deep neural networks (DNNs) are vulnerable to adversarial examples and other data perturbations.
GraN is a time- and parameter-efficient method that is easily adaptable to any DNN.
GraN achieves state-of-the-art performance on numerous problem set-ups.
arXiv Detail & Related papers (2020-04-20T10:09:27Z)
- Supporting DNN Safety Analysis and Retraining through Heatmap-based Unsupervised Learning [1.6414392145248926]
We propose HUDD, an approach that automatically supports the identification of root causes for DNN errors.
HUDD identifies root causes by applying a clustering algorithm to heatmaps capturing the relevance of every DNN neuron on the outcome.
Also, HUDD retrains DNNs with images that are automatically selected based on their relatedness to the identified image clusters.
arXiv Detail & Related papers (2020-02-03T16:16:05Z)
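As a contrast with SAFE's black-box design, the heatmap-based (white-box) idea behind HUDD, summarized in the last entry above, can be sketched as follows. HUDD computes relevance heatmaps (e.g., via layer-wise relevance propagation) and applies hierarchical clustering; the input-gradient saliency and the AgglomerativeClustering settings below are simplified stand-ins for illustration, not the authors' implementation.

```python
import numpy as np
import torch
from sklearn.cluster import AgglomerativeClustering

def saliency_heatmap(model, image, target_class):
    """Gradient-based stand-in for a relevance heatmap of shape (H, W)."""
    image = image.detach().clone().requires_grad_(True)
    score = model(image.unsqueeze(0))[0, target_class]
    score.backward()
    return image.grad.abs().sum(dim=0)  # collapse channels -> (H, W)

def cluster_heatmaps(model, error_images, target_classes, n_clusters=4):
    """Group error-inducing images by the similarity of their heatmaps."""
    maps = [
        saliency_heatmap(model, img, cls).flatten().numpy()
        for img, cls in zip(error_images, target_classes)
    ]
    # Hierarchical clustering over flattened heatmaps; n_clusters is a placeholder.
    return AgglomerativeClustering(n_clusters=n_clusters).fit_predict(np.stack(maps))
```

Each resulting cluster plays the same role as a SAFE cluster: a group of error-inducing images sharing a plausible root cause, usable to select additional retraining images.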