Mutation-based Fault Localization of Deep Neural Networks
- URL: http://arxiv.org/abs/2309.05067v1
- Date: Sun, 10 Sep 2023 16:18:49 GMT
- Title: Mutation-based Fault Localization of Deep Neural Networks
- Authors: Ali Ghanbari, Deepak-George Thomas, Muhammad Arbab Arshad, Hridesh
Rajan
- Abstract summary: Deep neural networks (DNNs) are susceptible to bugs, just like other types of software systems.
This paper revisits mutation-based fault localization in the context of DNN models and proposes a novel technique.
Deepmufl detects 53/109 of the bugs by ranking the buggy layer in the top-1 position.
- Score: 11.93979764176335
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks (DNNs) are susceptible to bugs, just like other types of
software systems. A significant uptick in the use of DNNs, and their application in
wide-ranging areas, including safety-critical systems, warrants extensive
research on software engineering tools for improving the reliability of
DNN-based systems. One such tool that has gained significant attention in
recent years is DNN fault localization. This paper revisits mutation-based
fault localization in the context of DNN models and proposes a novel technique,
named deepmufl, applicable to a wide range of DNN models. We have implemented
deepmufl and have evaluated its effectiveness using 109 bugs obtained from
StackOverflow. Our results show that deepmufl detects 53/109 of the bugs by
ranking the buggy layer in the top-1 position, outperforming state-of-the-art
static and dynamic DNN fault localization systems that are also designed to
target the class of bugs supported by deepmufl. Moreover, we observed that
mutation selection can halve the fault localization time for a pre-trained
model, while losing only 7.55% of the bugs localized in the top-1 position.
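The abstract does not spell out deepmufl's mutation operators or its suspiciousness formula, so the following is only a minimal Keras sketch of the general mutation-based idea: perturb one layer at a time and rank layers by how strongly the mutant disturbs previously passing tests. The Gaussian-noise mutation, the `scale` parameter, and the scoring rule are illustrative assumptions, not deepmufl's actual design.

```python
# Illustrative sketch of mutation-based DNN fault localization
# (NOT deepmufl's implementation; operators and scoring are assumptions).
import numpy as np

def layer_suspiciousness(model, x_test, y_test, scale=0.5, seed=0):
    """Mutate each weighted layer in turn; score it by how many
    previously correct predictions the mutation flips."""
    rng = np.random.default_rng(seed)
    baseline = np.argmax(model.predict(x_test, verbose=0), axis=1)
    passing = baseline == y_test
    scores = {}
    for layer in model.layers:
        weights = layer.get_weights()
        if not weights:            # skip layers without trainable weights
            continue
        original = [w.copy() for w in weights]
        # Mutation operator (assumption): add Gaussian noise to all weights.
        layer.set_weights([w + scale * rng.standard_normal(w.shape)
                           for w in weights])
        mutated = np.argmax(model.predict(x_test, verbose=0), axis=1)
        # Score (assumption): fraction of passing tests this mutant breaks.
        scores[layer.name] = float(np.mean(mutated[passing] != baseline[passing]))
        layer.set_weights(original)  # restore before mutating the next layer
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

A faithful implementation would use DNN-specific mutation operators (e.g., changing activations or deleting layers) and a mutation-testing-style suspiciousness metric computed over both passing and failing tests.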
Related papers
- Improved Detection and Diagnosis of Faults in Deep Neural Networks Using Hierarchical and Explainable Classification [3.2623791881739033]
We present DEFault -- a novel technique to detect and diagnose faults in Deep Neural Network (DNN) programs.
Our approach achieves 94% recall in detecting real-world faulty DNN programs and 63% recall in diagnosing the root causes of the faults, demonstrating 3.92%-11.54% higher performance than state-of-the-art techniques.
arXiv Detail & Related papers (2025-01-22T00:55:09Z)
- An Empirical Study of Fault Localisation Techniques for Deep Learning [17.586333091528594]
We evaluate and compare existing state-of-the-art fault localisation techniques.
dfd is the most effective tool, achieving an average recall of 0.61 and precision of 0.41 on our benchmark.
arXiv Detail & Related papers (2024-12-15T20:47:03Z)
- BDefects4NN: A Backdoor Defect Database for Controlled Localization Studies in Neural Networks [65.666913051617]
We introduce BDefects4NN, the first backdoor defect database for localization studies.
BDefects4NN provides labeled backdoor-defected DNNs at the neuron granularity and enables controlled localization studies of defect root causes.
We conduct experiments evaluating six fault localization criteria and two defect repair techniques, which show limited effectiveness for backdoor defects.
arXiv Detail & Related papers (2024-12-01T09:52:48Z)
- CRAFT: Criticality-Aware Fault-Tolerance Enhancement Techniques for Emerging Memories-Based Deep Neural Networks [7.566423455230909]
Deep Neural Networks (DNNs) have emerged as the most effective programming paradigm for computer vision and natural language processing applications.
This paper proposes CRAFT, i.e., Criticality-Aware Fault-Tolerance Enhancement Techniques to enhance the reliability of NVM-based DNNs.
arXiv Detail & Related papers (2023-02-08T03:39:11Z)
- enpheeph: A Fault Injection Framework for Spiking and Compressed Deep Neural Networks [10.757663798809144]
We present enpheeph, a Fault Injection Framework for Spiking and Compressed Deep Neural Networks (DNNs).
By injecting a random and increasing number of faults, we show that DNNs can suffer an accuracy drop of more than 40% at a fault rate as low as 7 × 10^-7 faults per parameter.
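enpheeph's actual API isn't shown in the summary; as a minimal sketch of the kind of experiment described, the PyTorch snippet below zeroes out a random fraction of parameters (a stuck-at-zero fault model, which is an assumption, as are the function names) and measures the resulting accuracy.

```python
# Minimal fault-injection sketch (NOT the enpheeph API): zero a random
# fraction of parameters and measure the resulting accuracy.
import copy
import torch

@torch.no_grad()
def accuracy_under_faults(model, loader, fault_rate, device="cpu"):
    faulty = copy.deepcopy(model).to(device).eval()
    for p in faulty.parameters():
        mask = torch.rand_like(p) < fault_rate   # each weight faults i.i.d.
        p[mask] = 0.0                            # stuck-at-zero fault model
    correct = total = 0
    for x, y in loader:
        pred = faulty(x.to(device)).argmax(dim=1)
        correct += (pred == y.to(device)).sum().item()
        total += y.numel()
    return correct / total
```

Sweeping `fault_rate` upward from roughly 1e-7 traces the accuracy-versus-fault-rate curve the abstract refers to; enpheeph itself additionally handles bit-level faults and spiking/compressed models.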
arXiv Detail & Related papers (2022-07-31T00:30:59Z)
- Adaptive Self-supervision Algorithms for Physics-informed Neural Networks [59.822151945132525]
Physics-informed neural networks (PINNs) incorporate physical knowledge from the problem domain as a soft constraint on the loss function.
We study the impact of the location of the collocation points on the trainability of these models.
We propose a novel adaptive collocation scheme which progressively allocates more collocation points to areas where the model is making higher errors.
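The paper's exact scheme isn't reproduced in the summary; below is a minimal PyTorch sketch of the general idea, resampling collocation points with probability proportional to the PDE residual. The 1-D domain and the user-supplied `pde_residual(model, x)` function are assumptions.

```python
# Sketch of residual-driven adaptive collocation (illustrative only):
# keep candidate points where the PDE residual is largest.
import torch

def resample_collocation(model, pde_residual, lo, hi,
                         n_keep=1000, n_candidates=10000):
    # Uniform candidate points on a 1-D interval [lo, hi] (an assumption).
    cand = lo + (hi - lo) * torch.rand(n_candidates, 1)
    cand.requires_grad_(True)       # residuals usually need input derivatives
    r = pde_residual(model, cand).abs().reshape(-1).detach()
    # Sample without replacement, weighted by |residual|
    # (torch.multinomial does not require normalized weights).
    idx = torch.multinomial(r + 1e-12, n_keep, replacement=False)
    return cand.detach()[idx]
```

Calling this every few hundred training steps progressively concentrates collocation points in high-error regions, which is the behavior the abstract describes.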
arXiv Detail & Related papers (2022-07-08T18:17:06Z)
- Black-box Safety Analysis and Retraining of DNNs based on Feature Extraction and Clustering [0.9590956574213348]
We propose SAFE, a black-box approach to automatically characterize the root causes of DNN errors.
It relies on a transfer learning model pre-trained on ImageNet to extract features from error-inducing images.
It then applies a density-based clustering algorithm to detect arbitrarily shaped clusters of images modeling plausible causes of error.
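As summarized, the pipeline maps naturally onto standard tooling; here is a minimal sketch using a torchvision ResNet-50 as the ImageNet-pre-trained extractor and scikit-learn's DBSCAN. The specific backbone and the eps/min_samples values are assumptions, not SAFE's published configuration.

```python
# SAFE-style sketch: features from a pre-trained backbone, then
# density-based clustering of error-inducing images.
import torch
from torchvision import models
from sklearn.cluster import DBSCAN

def cluster_error_images(error_batch):   # error_batch: (N, 3, 224, 224)
    backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    backbone.fc = torch.nn.Identity()    # drop the classification head
    backbone.eval()
    with torch.no_grad():
        feats = backbone(error_batch).numpy()          # (N, 2048) features
    # DBSCAN finds arbitrarily shaped clusters; hyperparameters are guesses.
    labels = DBSCAN(eps=5.0, min_samples=5).fit_predict(feats)
    return labels   # -1 marks noise; other labels model plausible root causes
```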
arXiv Detail & Related papers (2022-01-13T17:02:57Z)
- Spatial-Temporal-Fusion BNN: Variational Bayesian Feature Layer [77.78479877473899]
We design a spatial-temporal-fusion BNN for efficiently scaling BNNs to large models.
Compared to vanilla BNNs, our approach can greatly reduce the training time and the number of parameters, which helps scale BNNs efficiently.
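The summary hints that only part of the network is made Bayesian; as a loose illustration (not the paper's spatial-temporal-fusion design), a single mean-field variational linear layer with the reparameterization trick looks like this.

```python
# Sketch of one mean-field variational layer (reparameterization trick);
# making only one layer Bayesian is the scaling idea hinted at above.
# The KL regularizer needed for variational training is omitted for brevity.
import torch
import torch.nn as nn

class VariationalLinear(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(d_out, d_in))
        self.rho = nn.Parameter(torch.full((d_out, d_in), -5.0))  # softplus -> std

    def forward(self, x):
        std = nn.functional.softplus(self.rho)
        w = self.mu + std * torch.randn_like(std)   # fresh weight sample per call
        return x @ w.t()
```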
arXiv Detail & Related papers (2021-12-12T17:13:14Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black-box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
- GraN: An Efficient Gradient-Norm Based Detector for Adversarial and Misclassified Examples [77.99182201815763]
Deep neural networks (DNNs) are vulnerable to adversarial examples and other data perturbations.
GraN is a time- and parameter-efficient method that is easily adaptable to any DNN.
GraN achieves state-of-the-art performance on numerous problem set-ups.
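GraN's layer-wise features and the small classifier on top are not reproduced here; a minimal sketch of the core signal, the norm of the loss gradient taken at the model's own prediction, could look like this (collapsing to a single scalar score is a simplifying assumption).

```python
# Simplified gradient-norm score (GraN itself uses layer-wise norms fed to
# a lightweight classifier): large scores flag suspicious inputs.
import torch
import torch.nn.functional as F

def gradient_norm_score(model, x):          # x: one unbatched input tensor
    model.zero_grad(set_to_none=True)
    logits = model(x.unsqueeze(0))
    pred = logits.argmax(dim=1)
    loss = F.cross_entropy(logits, pred)    # loss at the model's own prediction
    loss.backward()
    sq = sum((p.grad ** 2).sum() for p in model.parameters()
             if p.grad is not None)
    return sq.sqrt().item()                 # threshold this to flag inputs
```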
arXiv Detail & Related papers (2020-04-20T10:09:27Z)
- Bayesian x-vector: Bayesian Neural Network based x-vector System for Speaker Verification [71.45033077934723]
We incorporate Bayesian neural networks (BNNs) into the deep neural network (DNN) x-vector speaker verification system.
With the weight uncertainty modeling provided by BNNs, we expect the system to generalize better on the evaluation data.
Results show that the system benefits from BNNs, with relative EER decreases of 2.66% and 2.32% for short- and long-utterance in-domain evaluations, respectively.
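The x-vector system's Bayesian layers aren't detailed in the summary; as a loose stand-in for weight uncertainty at inference time, Monte-Carlo dropout (a common lightweight approximation, not the paper's method) can produce an embedding plus an uncertainty estimate.

```python
# MC-dropout sketch: a cheap proxy for BNN weight uncertainty
# (the paper uses true Bayesian layers; this is only an illustration).
import torch

def mc_dropout_embeddings(model, x, n_samples=20):
    model.train()   # keeps dropout active (note: also toggles batch norm;
                    # a real implementation would enable only dropout layers)
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)   # embedding, uncertainty
```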
arXiv Detail & Related papers (2020-04-08T14:35:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.