Mutation-based Fault Localization of Deep Neural Networks
- URL: http://arxiv.org/abs/2309.05067v1
- Date: Sun, 10 Sep 2023 16:18:49 GMT
- Title: Mutation-based Fault Localization of Deep Neural Networks
- Authors: Ali Ghanbari, Deepak-George Thomas, Muhammad Arbab Arshad, Hridesh
Rajan
- Abstract summary: Deep neural networks (DNNs) are susceptible to bugs, just like other types of software systems.
This paper revisits mutation-based fault localization in the context of DNN models and proposes a novel technique.
Deepmufl detects 53/109 of the bugs by ranking the buggy layer in top-1 position.
- Score: 11.93979764176335
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks (DNNs) are susceptible to bugs, just like other types of
software systems. A significant uptick in the use of DNNs, and their application in
wide-ranging areas, including safety-critical systems, warrants extensive
research on software engineering tools for improving the reliability of
DNN-based systems. One such tool that has gained significant attention in
recent years is DNN fault localization. This paper revisits mutation-based
fault localization in the context of DNN models and proposes a novel technique,
named deepmufl, applicable to a wide range of DNN models. We have implemented
deepmufl and have evaluated its effectiveness using 109 bugs obtained from
StackOverflow. Our results show that deepmufl detects 53/109 of the bugs by
ranking the buggy layer in top-1 position, outperforming state-of-the-art
static and dynamic DNN fault localization systems that are also designed to
target the class of bugs supported by deepmufl. Moreover, we observed that we
can halve the fault localization time for a pre-trained model using mutation
selection, while losing only 7.55% of the bugs localized in top-1 position.
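The abstract describes the approach only at a high level. As a rough illustration of the mutation-based idea (not deepmufl's actual algorithm), the sketch below perturbs one layer of a trained Keras classifier at a time and scores layers by how strongly the mutations interact with the model's failing inputs; the Gaussian mutation operator and the per-layer score are assumptions made for illustration only.

```python
# A minimal sketch of mutation-based fault localization for a trained Keras
# classifier. This is NOT deepmufl's algorithm: the mutation operator
# (Gaussian weight noise) and the per-layer score are illustrative assumptions.
import numpy as np

def rank_suspicious_layers(model, x_test, y_test, n_mutants=10, scale=0.1, seed=0):
    rng = np.random.default_rng(seed)
    baseline = np.argmax(model.predict(x_test, verbose=0), axis=1)
    failing = baseline != y_test                    # inputs the original model gets wrong
    passing = ~failing
    scores = []
    for layer in model.layers:
        original = layer.get_weights()
        if not original:                            # skip layers without weights
            continue
        impact = 0.0
        for _ in range(n_mutants):
            # Mutation operator: add small Gaussian noise to every weight tensor.
            layer.set_weights([(w + rng.normal(0.0, scale, w.shape)).astype(w.dtype)
                               for w in original])
            mutated = np.argmax(model.predict(x_test, verbose=0), axis=1)
            flips_fail = (mutated[failing] != baseline[failing]).mean() if failing.any() else 0.0
            flips_pass = (mutated[passing] != baseline[passing]).mean() if passing.any() else 0.0
            impact += flips_fail - flips_pass       # mutations entangled with the failures
        layer.set_weights(original)                 # restore before mutating the next layer
        scores.append((layer.name, impact / n_mutants))
    return sorted(scores, key=lambda t: t[1], reverse=True)
```

Layers whose mutations disproportionately change behavior on failing inputs rank highest; deepmufl itself defines its own DNN-specific mutators and suspiciousness computation, which this sketch does not reproduce.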
Related papers
- Repairing Deep Neural Networks Based on Behavior Imitation [5.1791561132409525]
We propose a behavior-imitation based repair framework for deep neural networks (DNNs)
BIRDNN corrects incorrect predictions of negative samples by imitating the closest expected behaviors of positive samples during the retraining repair procedure.
For the fine-tuning repair process, BIRDNN analyzes the behavior differences of neurons on positive and negative samples to identify the most responsible neurons for the erroneous behaviors.
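The summary names this analysis but not its details; as a hypothetical illustration of ranking neurons by their behavior difference on positive versus negative samples (not BIRDNN's actual method):

```python
# Illustrative only: rank the neurons of one hidden layer by how differently
# they activate on positive (well-handled) vs. negative (misbehaving) samples.
import numpy as np

def neuron_responsibility(acts_pos: np.ndarray, acts_neg: np.ndarray) -> np.ndarray:
    """acts_pos, acts_neg: (n_samples, n_neurons) activations of one layer."""
    diff = np.abs(acts_pos.mean(axis=0) - acts_neg.mean(axis=0))
    spread = acts_pos.std(axis=0) + acts_neg.std(axis=0) + 1e-8
    return np.argsort(diff / spread)[::-1]   # most "responsible" neurons first
```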
arXiv Detail & Related papers (2023-05-05T08:33:28Z)
- Bridging Precision and Confidence: A Train-Time Loss for Calibrating Object Detection [58.789823426981044]
We propose a novel auxiliary loss formulation that aims to align the class confidence of bounding boxes with the accuracy of predictions.
Our results reveal that our train-time loss surpasses strong calibration baselines in reducing calibration error for both in and out-domain scenarios.
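The summary gives no formula; a minimal sketch of one way to penalize the gap between predicted confidence and empirical correctness (an illustration, not the paper's exact train-time loss) is:

```python
# Hypothetical auxiliary calibration term: push each prediction's confidence
# toward its actual correctness. Not the loss proposed in the paper.
import torch

def calibration_aux_loss(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """logits: (N, C) class scores; targets: (N,) ground-truth labels."""
    probs = torch.softmax(logits, dim=1)
    conf, pred = probs.max(dim=1)              # predicted confidence and class
    correct = (pred == targets).float()        # 1.0 if the prediction is right
    return ((conf - correct) ** 2).mean()

# total_loss = detection_loss + lambda_cal * calibration_aux_loss(logits, targets)
```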
arXiv Detail & Related papers (2023-03-25T08:56:21Z)
- CRAFT: Criticality-Aware Fault-Tolerance Enhancement Techniques for Emerging Memories-Based Deep Neural Networks [7.566423455230909]
Deep Neural Networks (DNNs) have emerged as the most effective programming paradigm for computer vision and natural language processing applications.
This paper proposes CRAFT, i.e., Criticality-Aware Fault-Tolerance Enhancement Techniques to enhance the reliability of NVM-based DNNs.
arXiv Detail & Related papers (2023-02-08T03:39:11Z)
- enpheeph: A Fault Injection Framework for Spiking and Compressed Deep Neural Networks [10.757663798809144]
We present enpheeph, a Fault Injection Framework for Spiking and Compressed Deep Neural Networks (DNNs).
By injecting a random and increasing number of faults, we show that DNN accuracy can drop by more than 40% at a fault rate as low as 7 x 10^-7 faults per parameter.
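enpheeph's own API is not described in this capsule; the sketch below only illustrates the kind of experiment reported, corrupting a random, growing fraction of parameters and re-measuring accuracy (the zeroing mutator and the function names are assumptions):

```python
# Illustrative parameter-level fault injection, not the enpheeph API:
# corrupt a random fraction of a model's parameters and measure the accuracy drop.
import copy
import torch

def inject_faults(model: torch.nn.Module, fault_rate: float, seed: int = 0) -> torch.nn.Module:
    """Return a copy of `model` with roughly `fault_rate` of its parameters zeroed
    (a crude stand-in for bit-flip faults in weight memory)."""
    faulty = copy.deepcopy(model)
    gen = torch.Generator().manual_seed(seed)
    with torch.no_grad():
        for p in faulty.parameters():
            mask = torch.rand(p.shape, generator=gen) < fault_rate
            p[mask] = 0.0
    return faulty

# for rate in (1e-7, 1e-6, 1e-5, 1e-4):
#     acc = evaluate(inject_faults(model, rate), test_loader)  # evaluate() is user-supplied
```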
arXiv Detail & Related papers (2022-07-31T00:30:59Z)
- Adaptive Self-supervision Algorithms for Physics-informed Neural Networks [59.822151945132525]
Physics-informed neural networks (PINNs) incorporate physical knowledge from the problem domain as a soft constraint on the loss function.
We study the impact of the location of the collocation points on the trainability of these models.
We propose a novel adaptive collocation scheme which progressively allocates more collocation points to areas where the model is making higher errors.
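As one way to realize residual-driven point allocation (an illustration; the paper's exact scheme is not reproduced here), candidate points can be resampled with probability proportional to the current PDE residual:

```python
# Illustrative residual-weighted resampling of collocation points for a PINN.
import numpy as np

def adaptive_collocation(residual_fn, lo, hi, n_points, pool_size=10_000, seed=0):
    """residual_fn maps an (N, d) array of points to (N,) absolute PDE residuals
    of the current PINN; lo/hi are the d-dimensional domain bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    pool = rng.uniform(lo, hi, size=(pool_size, lo.size))   # uniform candidate pool
    w = np.abs(residual_fn(pool))
    w = w / w.sum()                                          # residual-proportional weights
    idx = rng.choice(pool_size, size=n_points, replace=False, p=w)
    return pool[idx]                                         # high-error regions get more points
```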
arXiv Detail & Related papers (2022-07-08T18:17:06Z)
- Black-box Safety Analysis and Retraining of DNNs based on Feature Extraction and Clustering [0.9590956574213348]
We propose SAFE, a black-box approach to automatically characterize the root causes of DNN errors.
It relies on a transfer learning model pre-trained on ImageNet to extract the features from error-inducing images.
It then applies a density-based clustering algorithm to detect arbitrary shaped clusters of images modeling plausible causes of error.
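A minimal sketch of the pipeline as described, assuming an ImageNet-pre-trained ResNet50 backbone and DBSCAN as the density-based step (the paper's actual models and parameters may differ):

```python
# Sketch of a SAFE-style pipeline: embed error-inducing images with an
# ImageNet-pre-trained backbone, then cluster the embeddings by density.
import numpy as np
from sklearn.cluster import DBSCAN
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input

def cluster_error_images(images: np.ndarray, eps: float = 5.0, min_samples: int = 5):
    """images: (N, 224, 224, 3) error-inducing inputs; returns one cluster id per image."""
    backbone = ResNet50(weights="imagenet", include_top=False, pooling="avg")
    feats = backbone.predict(preprocess_input(images.astype("float32")), verbose=0)
    # Label -1 marks outliers; other labels are arbitrarily shaped clusters of
    # images that plausibly share a root cause of error.
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(feats)
```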
arXiv Detail & Related papers (2022-01-13T17:02:57Z)
- Spatial-Temporal-Fusion BNN: Variational Bayesian Feature Layer [77.78479877473899]
We design a spatial-temporal-fusion BNN for efficiently scaling BNNs to large models.
Compared to vanilla BNNs, our approach can greatly reduce the training time and the number of parameters, which helps scale BNNs efficiently.
arXiv Detail & Related papers (2021-12-12T17:13:14Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
- CodNN -- Robust Neural Networks From Coded Classification [27.38642191854458]
Deep Neural Networks (DNNs) are a revolutionary force in the ongoing information revolution.
DNNs are highly sensitive to noise, whether adversarial or random.
This poses a fundamental challenge for hardware implementations of DNNs, and for their deployment in critical applications such as autonomous driving.
In our approach, either the data or the internal layers of the DNN are encoded with error-correcting codes, guaranteeing successful computation under noise.
arXiv Detail & Related papers (2020-04-22T17:07:15Z)
- GraN: An Efficient Gradient-Norm Based Detector for Adversarial and Misclassified Examples [77.99182201815763]
Deep neural networks (DNNs) are vulnerable to adversarial examples and other data perturbations.
GraN is a time- and parameter-efficient method that is easily adaptable to any DNN.
GraN achieves state-of-the-art performance on numerous problem set-ups.
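The capsule gives only the idea; a minimal gradient-norm score (an illustration in the spirit of GraN, not its exact statistic) takes the norm of the loss gradient computed at the model's own prediction:

```python
# Illustrative gradient-norm detector: large parameter-gradient norms of the
# loss, taken at the predicted label, tend to flag misclassified or perturbed inputs.
import torch
import torch.nn.functional as F

def grad_norm_score(model: torch.nn.Module, x: torch.Tensor) -> float:
    """x: a single input batch of shape (1, ...); returns a scalar suspicion score."""
    model.zero_grad()
    logits = model(x)
    loss = F.cross_entropy(logits, logits.argmax(dim=1))   # loss w.r.t. own prediction
    loss.backward()
    total = sum(p.grad.pow(2).sum().item() for p in model.parameters() if p.grad is not None)
    return total ** 0.5

# Inputs whose score exceeds a threshold calibrated on validation data are flagged.
```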
arXiv Detail & Related papers (2020-04-20T10:09:27Z)
- Bayesian x-vector: Bayesian Neural Network based x-vector System for Speaker Verification [71.45033077934723]
We incorporate Bayesian neural networks (BNNs) into the deep neural network (DNN) x-vector speaker verification system.
With the weight uncertainty modeling provided by BNNs, we expect the system could generalize better on the evaluation data.
Results show that the system could benefit from BNNs by a relative EER decrease of 2.66% and 2.32% respectively for short- and long-utterance in-domain evaluations.
arXiv Detail & Related papers (2020-04-08T14:35:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.