HUDD: A tool to debug DNNs for safety analysis
- URL: http://arxiv.org/abs/2210.08356v1
- Date: Sat, 15 Oct 2022 18:40:52 GMT
- Title: HUDD: A tool to debug DNNs for safety analysis
- Authors: Hazem Fahmy, Fabrizio Pastore, Lionel Briand
- Abstract summary: We present HUDD, a tool that supports safety analysis practices for systems enabled by Deep Neural Networks (DNNs).
HUDD identifies root causes of DNN errors by applying a clustering algorithm to heatmap matrices that capture the relevance of each DNN neuron to the DNN outcome.
HUDD then retrains DNNs with images that are automatically selected based on their relatedness to the identified image clusters.
- Score: 1.1240669509034296
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present HUDD, a tool that supports safety analysis practices for systems
enabled by Deep Neural Networks (DNNs) by automatically identifying the root
causes of DNN errors and retraining the DNN. HUDD stands for Heatmap-based
Unsupervised Debugging of DNNs: it automatically clusters error-inducing images
whose erroneous results are due to common subsets of DNN neurons. The intent is for the
generated clusters to group error-inducing images sharing common
characteristics, that is, a common root cause. HUDD identifies root
causes by applying a clustering algorithm to matrices (i.e., heatmaps)
capturing the relevance of every DNN neuron to the DNN outcome. HUDD then
retrains DNNs with images that are automatically selected based on their
relatedness to the identified image clusters. Our empirical evaluation with
DNNs from the automotive domain has shown that HUDD automatically identifies
all the distinct root causes of DNN errors, thus supporting safety analysis.
Moreover, our retraining approach has been shown to be more effective at improving DNN
accuracy than existing approaches. A demo video of HUDD is available at
https://youtu.be/drjVakP7jdU.
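
To make the clustering step concrete, here is a minimal Python sketch that groups error-inducing images by the similarity of their neuron-relevance heatmaps. The Euclidean distance, the average-linkage agglomerative clustering, the fixed number of clusters, and the function name are illustrative assumptions, not HUDD's actual implementation.

```python
# Minimal sketch (not HUDD's actual code): cluster error-inducing images by
# the similarity of their neuron-relevance heatmaps, so that images in the
# same cluster plausibly share a root cause.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_error_inducing_images(heatmaps, n_clusters=5):
    """heatmaps: one relevance matrix per error-inducing image (same shape).
    Returns a cluster label for each image."""
    # Flatten each heatmap so every image becomes a vector of neuron relevances.
    X = np.stack([np.asarray(h).ravel() for h in heatmaps])
    # Pairwise distances between heatmaps (Euclidean is an assumption here).
    dists = pdist(X, metric="euclidean")
    # Agglomerative clustering with average linkage; n_clusters is fixed here,
    # whereas a real tool would select it from the data.
    Z = linkage(dists, method="average")
    return fcluster(Z, t=n_clusters, criterion="maxclust")

# Example: labels = cluster_error_inducing_images([hm1, hm2, hm3], n_clusters=3)
```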
Related papers
- DelBugV: Delta-Debugging Neural Network Verifiers [0.0]
Deep neural networks (DNNs) are becoming a key component in diverse systems across the board.
Despite their success, they often err miserably, and this has triggered significant interest in formally verifying them.
Here, we present a novel tool, named DelBugV, that applies automated delta-debugging techniques to DNN verifiers.
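
For context, the generic delta-debugging idea that such a tool automates can be sketched in a few lines: repeatedly shrink a failure-inducing input while an oracle keeps reporting the failure. In the sketch below, still_fails is a hypothetical callback that would re-run the DNN verifier on a simplified instance; this is the classic ddmin-style loop, not DelBugV's algorithm.

```python
# Generic delta-debugging (ddmin-style) sketch; `still_fails` is a hypothetical
# oracle that re-runs the verifier on a simplified instance and returns True
# if the original misbehaviour still occurs. Not DelBugV's actual algorithm.
def ddmin(elements, still_fails):
    n = 2
    while len(elements) >= 2:
        chunk = max(1, len(elements) // n)
        subsets = [elements[i:i + chunk] for i in range(0, len(elements), chunk)]
        reduced = False
        for subset in subsets:
            complement = [e for e in elements if e not in subset]
            if complement and still_fails(complement):
                elements = complement      # keep the smaller failing input
                n = max(n - 1, 2)
                reduced = True
                break
        if not reduced:
            if n >= len(elements):
                break                      # cannot split any further
            n = min(len(elements), n * 2)  # retry with finer-grained chunks
    return elements
```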
arXiv Detail & Related papers (2023-05-29T18:42:03Z) - Black-box Safety Analysis and Retraining of DNNs based on Feature
Extraction and Clustering [0.9590956574213348]
We propose SAFE, a black-box approach to automatically characterize the root causes of DNN errors.
It relies on a transfer learning model pre-trained on ImageNet to extract the features from error-inducing images.
It then applies a density-based clustering algorithm to detect arbitrary shaped clusters of images modeling plausible causes of error.
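
A hedged sketch of that pipeline in Python follows; the backbone model, preprocessing, and DBSCAN parameters are assumptions for illustration, not SAFE's published configuration.

```python
# Sketch of a SAFE-style pipeline (illustrative choices throughout): extract
# features of error-inducing images with an ImageNet-pretrained backbone,
# then detect arbitrarily shaped clusters with density-based clustering.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.cluster import DBSCAN

backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)  # torchvision >= 0.13
backbone.fc = torch.nn.Identity()   # drop the classifier head, keep the features
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def cluster_error_images(image_paths, eps=5.0, min_samples=3):
    batch = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in image_paths])
    features = backbone(batch).numpy()
    # DBSCAN finds dense, arbitrarily shaped clusters; eps/min_samples need tuning.
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(features)
```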
arXiv Detail & Related papers (2022-01-13T17:02:57Z) - Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks [55.531896312724555]
Bayesian Neural Networks (BNNs) are robust and adept at handling adversarial attacks by incorporating randomness.
We create our BNN model, called BNN-DenseNet, by fusing Bayesian inference (i.e., variational Bayes) to the DenseNet architecture.
An adversarially-trained BNN outperforms its non-Bayesian, adversarially-trained counterpart in most experiments.
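
As a toy illustration of the weight-uncertainty idea, the sketch below shows a generic variational-Bayes linear layer with sampled weights; it is not the paper's BNN-DenseNet architecture, and the initialization values are arbitrary.

```python
# Toy weight-uncertainty layer: each weight has a learned mean and a
# (softplus-)scale, and forward passes sample weights via reparameterization.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesianLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.w_mu = nn.Parameter(torch.zeros(out_features, in_features))
        self.w_rho = nn.Parameter(torch.full((out_features, in_features), -3.0))
        self.b_mu = nn.Parameter(torch.zeros(out_features))
        self.b_rho = nn.Parameter(torch.full((out_features,), -3.0))

    def forward(self, x):
        # sigma = softplus(rho) keeps the standard deviation positive.
        w_sigma = F.softplus(self.w_rho)
        b_sigma = F.softplus(self.b_rho)
        # Reparameterization trick: weight = mu + sigma * eps, eps ~ N(0, 1).
        w = self.w_mu + w_sigma * torch.randn_like(w_sigma)
        b = self.b_mu + b_sigma * torch.randn_like(b_sigma)
        return F.linear(x, w, b)

# Averaging predictions over several stochastic forward passes yields the
# randomness credited with improving robustness to adversarial perturbations.
```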
arXiv Detail & Related papers (2021-11-16T16:14:44Z) - Fully Spiking Variational Autoencoder [66.58310094608002]
Spiking neural networks (SNNs) can be run on neuromorphic devices with ultra-high speed and ultra-low energy consumption.
In this study, we build a variational autoencoder (VAE) with SNN to enable image generation.
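
A bare-bones sketch of the spiking dynamics such a model is built from is shown below (a leaky integrate-and-fire layer); the decay, threshold, and reset rule are illustrative defaults, not the paper's exact formulation.

```python
# Minimal discrete-time leaky integrate-and-fire (LIF) layer sketch.
import torch
import torch.nn as nn

class LIFLayer(nn.Module):
    def __init__(self, in_features, out_features, decay=0.9, threshold=1.0):
        super().__init__()
        self.fc = nn.Linear(in_features, out_features)
        self.decay, self.threshold = decay, threshold

    def forward(self, inputs):
        # inputs: (time_steps, batch, in_features) of spikes or rate-coded values.
        mem = torch.zeros(inputs.shape[1], self.fc.out_features, device=inputs.device)
        spikes = []
        for x_t in inputs:
            mem = self.decay * mem + self.fc(x_t)      # leaky integration
            spike = (mem >= self.threshold).float()    # fire when over threshold
            mem = mem - spike * self.threshold         # soft reset after firing
            spikes.append(spike)
        return torch.stack(spikes)                     # output spike train
```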
arXiv Detail & Related papers (2021-09-26T06:10:14Z) - HufuNet: Embedding the Left Piece as Watermark and Keeping the Right
Piece for Ownership Verification in Deep Neural Networks [16.388046449021466]
We propose a novel solution for watermarking deep neural networks (DNNs)
HufuNet is highly robust against model fine-tuning/pruning, kernels cutoff/supplement, functionality-equivalent attack, and fraudulent ownership claims.
arXiv Detail & Related papers (2021-03-25T06:55:22Z) - SyReNN: A Tool for Analyzing Deep Neural Networks [8.55884254206878]
Deep Neural Networks (DNNs) are rapidly gaining popularity in a variety of important domains.
This paper introduces SyReNN, a tool for understanding and analyzing a DNN by computing its symbolic representation.
arXiv Detail & Related papers (2021-01-09T00:27:23Z) - Enhancing Graph Neural Network-based Fraud Detectors against Camouflaged
Fraudsters [78.53851936180348]
We introduce two types of camouflages based on recent empirical studies, i.e., the feature camouflage and the relation camouflage.
Existing GNNs have not addressed these two camouflages, which results in their poor performance in fraud detection problems.
We propose a new model named CAmouflage-REsistant GNN (CARE-GNN) to enhance the GNN aggregation process with three unique modules against camouflages.
arXiv Detail & Related papers (2020-08-19T22:33:12Z) - Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z) - Bayesian x-vector: Bayesian Neural Network based x-vector System for
Speaker Verification [71.45033077934723]
We incorporate Bayesian neural networks (BNNs) into the deep neural network (DNN) x-vector speaker verification system.
With the weight uncertainty modeling provided by BNNs, we expect the system could generalize better on the evaluation data.
Results show that the system could benefit from BNNs by a relative EER decrease of 2.66% and 2.32% respectively for short- and long-utterance in-domain evaluations.
arXiv Detail & Related papers (2020-04-08T14:35:12Z) - Adversarial Attacks and Defenses on Graphs: A Review, A Tool and
Empirical Studies [73.39668293190019]
Deep neural networks can be easily fooled by small adversarial perturbations on the input.
Graph Neural Networks (GNNs) have been demonstrated to inherit this vulnerability.
In this survey, we categorize existing attacks and defenses, and review the corresponding state-of-the-art methods.
arXiv Detail & Related papers (2020-03-02T04:32:38Z) - Supporting DNN Safety Analysis and Retraining through Heatmap-based
Unsupervised Learning [1.6414392145248926]
We propose HUDD, an approach that automatically supports the identification of root causes for DNN errors.
HUDD identifies root causes by applying a clustering algorithm to heatmaps capturing the relevance of every DNN neuron to the DNN outcome.
Also, HUDD retrains DNNs with images that are automatically selected based on their relatedness to the identified image clusters.
arXiv Detail & Related papers (2020-02-03T16:16:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.