Supporting Safety Analysis of Image-processing DNNs through
Clustering-based Approaches
- URL: http://arxiv.org/abs/2301.13506v3
- Date: Wed, 17 Jan 2024 15:45:54 GMT
- Title: Supporting Safety Analysis of Image-processing DNNs through
Clustering-based Approaches
- Authors: Mohammed Oualid Attaoui, Hazem Fahmy, Fabrizio Pastore and Lionel
Briand
- Abstract summary: The adoption of deep neural networks (DNNs) in safety-critical contexts is often prevented by the lack of effective means to explain their results.
In this paper, we report on an empirical evaluation of 99 different pipelines for root cause analysis of DNN failures.
- Score: 2.362412515574206
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The adoption of deep neural networks (DNNs) in safety-critical contexts is
often prevented by the lack of effective means to explain their results,
especially when they are erroneous. In our previous work, we proposed a
white-box approach (HUDD) and a black-box approach (SAFE) to automatically
characterize DNN failures. They both identify clusters of similar images from a
potentially large set of images leading to DNN failures. However, the analysis
pipelines for HUDD and SAFE were instantiated in specific ways according to
common practices, deferring the analysis of other pipelines to future work. In
this paper, we report on an empirical evaluation of 99 different pipelines for
root cause analysis of DNN failures. They combine transfer learning,
autoencoders, heatmaps of neuron relevance, dimensionality reduction
techniques, and different clustering algorithms. Our results show that the best
pipeline combines transfer learning, DBSCAN, and UMAP. It leads to clusters
almost exclusively capturing images of the same failure scenario, thus
facilitating root cause analysis. Further, it generates distinct clusters for
each root cause of failure, thus enabling engineers to detect all the unsafe
scenarios. Interestingly, these results hold even for failure scenarios that
are only observed in a small percentage of the failing images.
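
A minimal sketch of the best-performing pipeline described above (transfer learning for feature extraction, UMAP for dimensionality reduction, DBSCAN for clustering) using off-the-shelf libraries. The choice of ResNet-50 as the backbone and all hyper-parameter values are illustrative assumptions, not the exact configuration evaluated in the paper.

    # Sketch: cluster failure-inducing images with pretrained features + UMAP + DBSCAN.
    import numpy as np
    import torch
    import umap                                    # pip install umap-learn
    from PIL import Image
    from sklearn.cluster import DBSCAN
    from torchvision import models, transforms

    # Transfer learning: reuse an ImageNet-pretrained CNN as a frozen feature extractor.
    backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    backbone.fc = torch.nn.Identity()              # drop the classifier head
    backbone.eval()

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    @torch.no_grad()
    def extract_features(image_paths):
        """Return an (N, 2048) feature array for the failure-inducing images."""
        feats = []
        for path in image_paths:
            img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
            feats.append(backbone(img).squeeze(0).numpy())
        return np.stack(feats)

    def cluster_failures(image_paths):
        features = extract_features(image_paths)
        # Reduce dimensionality before clustering.
        embedding = umap.UMAP(n_components=2, random_state=0).fit_transform(features)
        # Density-based clustering; label -1 marks images treated as noise.
        return DBSCAN(eps=0.5, min_samples=5).fit_predict(embedding)

In practice, eps, min_samples, and the UMAP settings would need tuning on the failure set; images labelled -1 belong to no cluster and may require manual inspection.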
Related papers
- CausAdv: A Causal-based Framework for Detecting Adversarial Examples [0.0]
Convolutional neural networks (CNNs) are vulnerable to crafted adversarial perturbations in inputs.
These inputs appear almost indistinguishable from natural images, yet they are incorrectly classified by CNN architectures.
We propose CausAdv: a causal framework for detecting adversarial examples based on counterfactual reasoning.
arXiv Detail & Related papers (2024-10-29T22:57:48Z)
- DFA-GNN: Forward Learning of Graph Neural Networks by Direct Feedback Alignment [57.62885438406724]
Graph neural networks are recognized for their strong performance across various applications.
Backpropagation (BP), however, has limitations that challenge its biological plausibility and limit the efficiency, scalability, and parallelism of training neural networks for graph-based tasks.
We propose DFA-GNN, a novel forward learning framework tailored for GNNs with a case study of semi-supervised learning.
arXiv Detail & Related papers (2024-06-04T07:24:51Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been shown to be effective at solving forward and inverse differential equation problems.
However, PINNs can become trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
- General Adversarial Defense Against Black-box Attacks via Pixel Level and Feature Level Distribution Alignments [75.58342268895564]
We use Deep Generative Networks (DGNs) with a novel training mechanism to eliminate the distribution gap.
The trained DGNs align the distribution of adversarial samples with clean ones for the target DNNs by translating pixel values.
Our strategy demonstrates its unique effectiveness and generality against black-box attacks.
arXiv Detail & Related papers (2022-12-11T01:51:31Z)
- Black-box Safety Analysis and Retraining of DNNs based on Feature Extraction and Clustering [0.9590956574213348]
We propose SAFE, a black-box approach to automatically characterize the root causes of DNN errors.
It relies on a transfer learning model pre-trained on ImageNet to extract the features from error-inducing images.
It then applies a density-based clustering algorithm to detect arbitrary shaped clusters of images modeling plausible causes of error.
arXiv Detail & Related papers (2022-01-13T17:02:57Z)
- Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks [55.531896312724555]
Bayesian Neural Networks (BNNs) are robust and adept at handling adversarial attacks by incorporating randomness.
We create our BNN model, called BNN-DenseNet, by fusing Bayesian inference (i.e., variational Bayes) with the DenseNet architecture.
An adversarially-trained BNN outperforms its non-Bayesian, adversarially-trained counterpart in most experiments.
arXiv Detail & Related papers (2021-11-16T16:14:44Z)
- Unveiling the potential of Graph Neural Networks for robust Intrusion Detection [2.21481607673149]
We propose a novel Graph Neural Network (GNN) model to learn flow patterns of attacks structured as graphs.
Our model maintains the same level of accuracy as in previous experiments, whereas state-of-the-art ML techniques see their accuracy (F1-score) degrade by up to 50% under adversarial attacks.
arXiv Detail & Related papers (2021-07-30T16:56:39Z)
- Understanding Adversarial Examples Through Deep Neural Network's Response Surface and Uncertainty Regions [1.8047694351309205]
We study the root cause of DNN adversarial examples.
Existing attack algorithms can generate from a handful to a few hundred adversarial examples.
We show there are infinitely many adversarial images given one clean sample, all within a small neighborhood of the clean sample.
arXiv Detail & Related papers (2021-06-30T02:38:17Z)
- Image Restoration by Deep Projected GSURE [115.57142046076164]
Ill-posed inverse problems appear in many image processing applications, such as deblurring and super-resolution.
We propose a new image restoration framework based on minimizing a loss function that includes a "projected version" of the Generalized Stein Unbiased Risk Estimator (GSURE) and a parameterization of the latent image by a CNN.
arXiv Detail & Related papers (2021-02-04T08:52:46Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising direction, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
- Supporting DNN Safety Analysis and Retraining through Heatmap-based Unsupervised Learning [1.6414392145248926]
We propose HUDD, an approach that automatically supports the identification of root causes for DNN errors.
HUDD identifies root causes by applying a clustering algorithm to heatmaps capturing the relevance of every DNN neuron to the outcome.
Also, HUDD retrains DNNs with images that are automatically selected based on their relatedness to the identified image clusters.
arXiv Detail & Related papers (2020-02-03T16:16:05Z)
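
In contrast to the black-box pipeline sketched earlier, the HUDD entry above describes a white-box step: clustering relevance heatmaps rather than transfer-learning features. Below is a minimal, hypothetical sketch of that clustering step, assuming the per-image heatmaps have already been computed elsewhere (e.g., with a relevance-propagation method) and assuming agglomerative (hierarchical) clustering with an arbitrary distance threshold; the entry above only says "a clustering algorithm".

    # Hypothetical sketch of a HUDD-style clustering step over precomputed heatmaps.
    import numpy as np
    from sklearn.cluster import AgglomerativeClustering

    def cluster_heatmaps(heatmaps, distance_threshold=10.0):
        """heatmaps: array-like of shape (n_failing_images, H, W)."""
        flat = np.asarray(heatmaps).reshape(len(heatmaps), -1)   # one vector per image
        model = AgglomerativeClustering(n_clusters=None,
                                        distance_threshold=distance_threshold)
        return model.fit_predict(flat)                           # cluster label per image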