Supporting DNN Safety Analysis and Retraining through Heatmap-based
Unsupervised Learning
- URL: http://arxiv.org/abs/2002.00863v4
- Date: Thu, 22 Apr 2021 12:10:27 GMT
- Title: Supporting DNN Safety Analysis and Retraining through Heatmap-based
Unsupervised Learning
- Authors: Hazem Fahmy, Fabrizio Pastore, Mojtaba Bagherzadeh, Lionel Briand
- Abstract summary: We propose HUDD, an approach that automatically supports the identification of root causes for DNN errors.
HUDD identifies root causes by applying a clustering algorithm to heatmaps capturing the relevance of every DNN neuron to the outcome.
Also, HUDD retrains DNNs with images that are automatically selected based on their relatedness to the identified image clusters.
- Score: 1.6414392145248926
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) are increasingly important in safety-critical
systems, for example in their perception layer to analyze images.
Unfortunately, there is a lack of methods to ensure the functional safety of
DNN-based components. We observe three major challenges with existing practices
regarding DNNs in safety-critical systems: (1) scenarios that are
underrepresented in the test set may lead to serious safety violation risks,
but may, however, remain unnoticed; (2) characterizing such high-risk scenarios
is critical for safety analysis; (3) retraining DNNs to address these risks is
poorly supported when causes of violations are difficult to determine. To
address these problems in the context of DNNs analyzing images, we propose
HUDD, an approach that automatically supports the identification of root causes
for DNN errors. HUDD identifies root causes by applying a clustering algorithm
to heatmaps capturing the relevance of every DNN neuron to the DNN outcome.
Also, HUDD retrains DNNs with images that are automatically selected based on
their relatedness to the identified image clusters. We evaluated HUDD with DNNs
from the automotive domain. HUDD was able to identify all the distinct root
causes of DNN errors, thus supporting safety analysis. Also, our retraining
approach proved more effective at improving DNN accuracy than existing
approaches.
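The clustering step at the core of the abstract above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the random "heatmaps" are synthetic stand-ins for the neuron-relevance heatmaps HUDD would compute for error-inducing images (e.g., via a relevance-propagation method), and the two-group structure is artificial.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Synthetic stand-ins for neuron-relevance heatmaps of error-inducing
# images; two artificial "root cause" groups are simulated here.
rng = np.random.default_rng(0)
heatmaps = np.vstack([
    rng.normal(0.0, 0.1, size=(10, 64)),  # images failing for cause A
    rng.normal(1.0, 0.1, size=(8, 64)),   # images failing for cause B
])

# Hierarchical (agglomerative) clustering on pairwise heatmap distances.
distances = pdist(heatmaps, metric="euclidean")
tree = linkage(distances, method="average")
labels = fcluster(tree, t=2, criterion="maxclust")

print(labels)
```

Each resulting cluster groups error-inducing images with similar relevance patterns, which an engineer can then inspect to characterize a plausible root cause and to select related images for retraining.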
Related papers
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural
Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z) - HUDD: A tool to debug DNNs for safety analysis [1.1240669509034296]
We present HUDD, a tool that supports safety analysis practices for systems enabled by Deep Neural Networks (DNNs).
HUDD identifies root causes by applying a clustering algorithm to heatmap-based matrices.
HUDD retrains DNNs with images that are automatically selected based on their relatedness to the identified image clusters.
arXiv Detail & Related papers (2022-10-15T18:40:52Z) - A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy,
Robustness, Fairness, and Explainability [59.80140875337769]
Graph Neural Networks (GNNs) have developed rapidly in recent years.
GNNs can leak private information, are vulnerable to adversarial attacks, and can inherit and magnify societal bias from training data.
This paper gives a comprehensive survey of GNNs in the computational aspects of privacy, robustness, fairness, and explainability.
arXiv Detail & Related papers (2022-04-18T21:41:07Z) - Toward Robust Spiking Neural Network Against Adversarial Perturbation [22.56553160359798]
Spiking neural networks (SNNs) are increasingly deployed in real-world, efficiency-critical applications.
Researchers have already demonstrated an SNN can be attacked with adversarial examples.
To the best of our knowledge, this is the first analysis of robust training of SNNs.
arXiv Detail & Related papers (2022-04-12T21:26:49Z) - Black-box Safety Analysis and Retraining of DNNs based on Feature
Extraction and Clustering [0.9590956574213348]
We propose SAFE, a black-box approach to automatically characterize the root causes of DNN errors.
It relies on a transfer learning model pre-trained on ImageNet to extract the features from error-inducing images.
It then applies a density-based clustering algorithm to detect arbitrary shaped clusters of images modeling plausible causes of error.
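The density-based clustering stage of this pipeline could look as follows. This is a hedged sketch, not the SAFE implementation: the feature vectors are synthetic stand-ins for what a pre-trained backbone (in SAFE, a transfer-learning model pre-trained on ImageNet) would extract from error-inducing images.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Synthetic stand-ins for features a pre-trained backbone would extract
# from error-inducing images; two dense groups plus a few outliers.
rng = np.random.default_rng(1)
features = np.vstack([
    rng.normal(0.0, 0.05, size=(20, 16)),  # one plausible error cause
    rng.normal(2.0, 0.05, size=(15, 16)),  # another plausible error cause
    rng.uniform(-5.0, 5.0, size=(3, 16)),  # unrelated outlier images
])

# DBSCAN finds arbitrarily shaped dense clusters and labels sparse
# points as noise (-1), so unrelated outlier images stay unassigned.
clustering = DBSCAN(eps=0.5, min_samples=5).fit(features)
print(clustering.labels_)
```

The appeal of a density-based algorithm here is that the number of error causes need not be fixed in advance, and images that do not fit any dense cluster are explicitly flagged as noise rather than forced into a cluster.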
arXiv Detail & Related papers (2022-01-13T17:02:57Z) - Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks [55.531896312724555]
Bayesian Neural Networks (BNNs) are robust and adept at handling adversarial attacks by incorporating randomness.
We create our BNN model, called BNN-DenseNet, by fusing Bayesian inference (i.e., variational Bayes) into the DenseNet architecture.
An adversarially-trained BNN outperforms its non-Bayesian, adversarially-trained counterpart in most experiments.
arXiv Detail & Related papers (2021-11-16T16:14:44Z) - HufuNet: Embedding the Left Piece as Watermark and Keeping the Right
Piece for Ownership Verification in Deep Neural Networks [16.388046449021466]
We propose a novel solution for watermarking deep neural networks (DNNs).
HufuNet is highly robust against model fine-tuning/pruning, kernels cutoff/supplement, functionality-equivalent attack, and fraudulent ownership claims.
arXiv Detail & Related papers (2021-03-25T06:55:22Z) - Noise-Response Analysis of Deep Neural Networks Quantifies Robustness
and Fingerprints Structural Malware [48.7072217216104]
Deep neural networks (DNNs) can contain 'structural malware' (i.e., compromised weights and activation pathways).
It is generally difficult to detect backdoors, and existing detection methods are computationally expensive and require extensive resources (e.g., access to the training data).
Here, we propose a rapid feature-generation technique that quantifies the robustness of a DNN, 'fingerprints' its nonlinearity, and allows us to detect backdoors (if present).
Our empirical results demonstrate that we can accurately detect backdoors with high confidence, orders of magnitude faster than existing approaches (seconds versus …).
arXiv Detail & Related papers (2020-07-31T23:52:58Z) - Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z) - DeepHammer: Depleting the Intelligence of Deep Neural Networks through
Targeted Chain of Bit Flips [29.34622626909906]
We demonstrate the first hardware-based attack on quantized deep neural networks (DNNs).
DeepHammer is able to successfully tamper with DNN inference behavior at run-time within a few minutes.
Our work highlights the need to incorporate security mechanisms in future deep learning systems.
arXiv Detail & Related papers (2020-03-30T18:51:59Z) - Adversarial Attacks and Defenses on Graphs: A Review, A Tool and
Empirical Studies [73.39668293190019]
Deep neural networks can be easily fooled by small perturbations of the input, known as adversarial attacks.
Graph Neural Networks (GNNs) have been demonstrated to inherit this vulnerability.
In this survey, we categorize existing attacks and defenses, and review the corresponding state-of-the-art methods.
arXiv Detail & Related papers (2020-03-02T04:32:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.