Exposing Previously Undetectable Faults in Deep Neural Networks
- URL: http://arxiv.org/abs/2106.00576v1
- Date: Tue, 1 Jun 2021 15:37:30 GMT
- Title: Exposing Previously Undetectable Faults in Deep Neural Networks
- Authors: Isaac Dunn, Hadrien Pouget, Daniel Kroening and Tom Melham
- Abstract summary: We introduce a novel method to find faults in DNNs that other methods cannot.
By leveraging generative machine learning, we can generate fresh test inputs that vary in their high-level features.
We demonstrate that our approach is capable of detecting deliberately injected faults as well as new faults in state-of-the-art DNNs.
- Score: 11.20625929625154
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing methods for testing DNNs solve the oracle problem by constraining
the raw features (e.g. image pixel values) to be within a small distance of a
dataset example for which the desired DNN output is known. But this limits the
kinds of faults these approaches are able to detect. In this paper, we
introduce a novel DNN testing method that is able to find faults in DNNs that
other methods cannot. The crux is that, by leveraging generative machine
learning, we can generate fresh test inputs that vary in their high-level
features (for images, these include object shape, location, texture, and
colour). We demonstrate that our approach is capable of detecting deliberately
injected faults as well as new faults in state-of-the-art DNNs, and that in
both cases, existing methods are unable to find these faults.
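A minimal sketch of the core testing loop described above, assuming a class-conditional generative model whose conditioning label serves as the oracle; the function and parameter names are illustrative, not the authors' implementation:

```python
import torch

def generative_fault_search(generator, classifier, target_class, n_trials=1000,
                            latent_dim=128, device="cpu"):
    """Search for inputs on which `classifier` disagrees with the class label the
    conditional `generator` was asked to produce. Each latent sample yields a fresh
    test input whose high-level features (shape, location, texture, colour) vary
    freely, rather than a pixel-level perturbation of a dataset example."""
    failures = []
    with torch.no_grad():
        for _ in range(n_trials):
            z = torch.randn(1, latent_dim, device=device)       # fresh latent code
            y = torch.tensor([target_class], device=device)     # intended class = oracle
            x = generator(z, y)                                  # synthesised test input
            pred = classifier(x).argmax(dim=1)
            if pred.item() != target_class:                      # oracle disagreement
                failures.append((z.cpu(), x.cpu(), pred.item()))
    return failures
```

Any disagreement between the classifier and the class the generator was conditioned on is a candidate fault, without requiring the input to lie within a small distance of a labelled dataset example.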
Related papers
- Improved Detection and Diagnosis of Faults in Deep Neural Networks Using Hierarchical and Explainable Classification [3.2623791881739033]
We present DEFault -- a novel technique to detect and diagnose faults in Deep Neural Network (DNN) programs.
Our approach achieves 94% recall in detecting real-world faulty DNN programs and 63% recall in diagnosing the root causes of the faults, demonstrating 3.92%-11.54% higher performance than state-of-the-art techniques.
arXiv Detail & Related papers (2025-01-22T00:55:09Z) - Data-driven Verification of DNNs for Object Recognition [0.20482269513546453]
The paper proposes a new testing approach for Deep Neural Networks (DNN) using gradient-free optimization to find perturbation chains that successfully falsify the tested DNN.
Applying it to an image segmentation task of detecting railway tracks in images, we demonstrate that the approach can successfully identify weaknesses of the tested DNN.
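A minimal sketch of gradient-free falsification in this spirit, assuming the perturbation chain is a list of parameterised transforms and the model returns class probabilities; the names below are illustrative rather than the paper's implementation:

```python
import numpy as np

def apply_chain(x, perturb_fns, theta):
    """Apply a chain of parameterised perturbations (e.g. brightness, blur, shift);
    each entry of `theta` controls one perturbation in the chain."""
    for fn, t in zip(perturb_fns, theta):
        x = fn(x, t)
    return x

def falsify(model, x, label, perturb_fns, n_iters=500, step=0.1, seed=0):
    """Random hill-climbing over the perturbation parameters, searching for a chain
    that makes `model` mislabel the input. No gradients of `model` are required."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(len(perturb_fns))
    best = model(apply_chain(x, perturb_fns, theta))[label]
    for _ in range(n_iters):
        cand = theta + rng.normal(scale=step, size=theta.shape)
        x_pert = apply_chain(x, perturb_fns, cand)
        probs = model(x_pert)
        if int(np.argmax(probs)) != label:
            return x_pert, cand              # falsifying perturbation chain found
        if probs[label] < best:              # lower true-class confidence is progress
            theta, best = cand, probs[label]
    return None, theta
```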
arXiv Detail & Related papers (2024-07-17T11:30:02Z) - The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural
Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
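The paper's procedure returns exact counts; the sketch below only illustrates the quantity in question by brute-force enumeration over a discretised input box, with all names chosen for illustration:

```python
import itertools
import numpy as np

def count_unsafe_inputs(model, bounds, steps, is_safe):
    """Enumerate a finite grid over the input box `bounds` (a list of (low, high)
    pairs, `steps` points per axis) and count the grid points whose network
    outputs violate the safety predicate `is_safe`."""
    axes = [np.linspace(lo, hi, steps) for lo, hi in bounds]
    violations = 0
    for point in itertools.product(*axes):
        if not is_safe(model(np.array(point))):
            violations += 1
    return violations
```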
arXiv Detail & Related papers (2023-01-17T18:32:01Z) - A Uniform Framework for Anomaly Detection in Deep Neural Networks [0.5099811144731619]
We consider three classes of anomalous inputs:
(1) natural inputs drawn from a distribution different from the one the DNN was trained on, known as Out-of-Distribution (OOD) samples;
(2) inputs crafted from in-distribution (ID) data by attackers, often known as adversarial (AD) samples; and (3) noise (NS) samples generated from meaningless data.
We propose a framework that aims to detect all these anomalies for a pre-trained DNN.
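The summary does not specify the framework's detection statistic, so the sketch below stands in with a generic confidence-based score that can be thresholded uniformly for OOD, AD, and NS inputs on a pre-trained classifier:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def anomaly_scores(model, x):
    """One minus the maximum softmax probability for each input in the batch.
    High scores are treated as evidence that the input is OOD, adversarial (AD),
    or noise (NS); this is a stand-in for the paper's framework."""
    probs = F.softmax(model(x), dim=1)
    return 1.0 - probs.max(dim=1).values

def flag_anomalies(model, x, threshold=0.5):
    """Boolean mask over the batch: True means the input is flagged as anomalous."""
    return anomaly_scores(model, x) > threshold
```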
arXiv Detail & Related papers (2021-10-06T22:42:30Z) - Abstraction and Symbolic Execution of Deep Neural Networks with Bayesian
Approximation of Hidden Features [8.723426955657345]
We propose a novel abstraction method which abstracts a deep neural network and a dataset into a Bayesian network.
We make use of dimensionality reduction techniques to identify hidden features that have been learned by hidden layers of the DNN.
From this abstraction, we derive a runtime monitoring approach that detects rare inputs at operation time.
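A simplified stand-in for this pipeline, replacing the Bayesian-network abstraction with a PCA-plus-Gaussian-mixture density model over hidden-layer activations (scikit-learn assumed; class and method names are illustrative):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

class HiddenFeatureMonitor:
    """Runtime monitor in the spirit of the paper: reduce a hidden layer's
    activations to a few latent features, fit a density model on activations
    collected from the training set, and flag operation-time inputs whose
    activations fall in low-density regions as rare."""

    def __init__(self, n_features=8, n_components=5):
        self.pca = PCA(n_components=n_features)
        self.density = GaussianMixture(n_components=n_components)
        self.threshold = None

    def fit(self, train_activations, quantile=0.01):
        z = self.pca.fit_transform(train_activations)
        self.density.fit(z)
        scores = self.density.score_samples(z)           # per-sample log-likelihood
        self.threshold = np.quantile(scores, quantile)   # bottom 1% of training data

    def is_rare(self, activations):
        z = self.pca.transform(activations)
        return self.density.score_samples(z) < self.threshold
```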
arXiv Detail & Related papers (2021-03-05T14:28:42Z) - Online Limited Memory Neural-Linear Bandits with Likelihood Matching [53.18698496031658]
We study neural-linear bandits for solving problems where both exploration and representation learning play an important role.
We propose a likelihood matching algorithm that is resilient to catastrophic forgetting and is completely online.
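For context, a generic neural-linear Thompson sampling skeleton, with the paper's likelihood-matching and limited-memory machinery deliberately omitted; `feature_fn` is a hypothetical network-based feature map:

```python
import numpy as np

class NeuralLinearTS:
    """Neural-linear bandit: a network maps contexts to features, and a Bayesian
    linear model per arm on those features drives exploration via Thompson
    sampling. The likelihood-matching step for resilience to catastrophic
    forgetting is not reproduced here."""

    def __init__(self, feature_fn, dim, n_arms, noise_var=1.0):
        self.feature_fn = feature_fn                     # context -> feature vector of size `dim`
        self.A = [np.eye(dim) for _ in range(n_arms)]    # posterior precision per arm
        self.b = [np.zeros(dim) for _ in range(n_arms)]
        self.noise_var = noise_var

    def select(self, context, rng):
        phi = self.feature_fn(context)
        scores = []
        for A, b in zip(self.A, self.b):
            cov = np.linalg.inv(A)
            theta = rng.multivariate_normal(cov @ b, self.noise_var * cov)
            scores.append(phi @ theta)
        return int(np.argmax(scores))

    def update(self, context, arm, reward):
        phi = self.feature_fn(context)
        self.A[arm] += np.outer(phi, phi)
        self.b[arm] += reward * phi
```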
arXiv Detail & Related papers (2021-02-07T14:19:07Z) - Attribute-Guided Adversarial Training for Robustness to Natural
Perturbations [64.35805267250682]
We propose an adversarial training approach which learns to generate new samples so as to maximize the classifier's exposure to the attribute space.
Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations.
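A hedged sketch of one training step in this spirit, where `edit_attributes` is a hypothetical attribute-space editor (lighting, pose, colour, etc.) rather than the authors' generator:

```python
import torch

def attribute_guided_step(model, loss_fn, optimizer, x, y, edit_attributes,
                          n_candidates=4):
    """Generate several attribute-space variants of the batch, keep the variant the
    model finds hardest, and train on it alongside the clean batch."""
    with torch.no_grad():
        worst_x, worst_loss = x, loss_fn(model(x), y)
        for _ in range(n_candidates):
            x_var = edit_attributes(x)                  # attribute-space perturbation
            loss = loss_fn(model(x_var), y)
            if loss > worst_loss:
                worst_x, worst_loss = x_var, loss
    optimizer.zero_grad()
    total = loss_fn(model(x), y) + loss_fn(model(worst_x), y)
    total.backward()
    optimizer.step()
    return float(total)
```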
arXiv Detail & Related papers (2020-12-03T10:17:30Z) - Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z) - NADS: Neural Architecture Distribution Search for Uncertainty Awareness [79.18710225716791]
Machine learning (ML) systems often encounter Out-of-Distribution (OoD) errors when the test data come from a distribution different from that of the training data.
Existing OoD detection approaches are prone to errors and sometimes even assign higher likelihoods to OoD samples than to in-distribution data.
We propose Neural Architecture Distribution Search (NADS) to identify common building blocks among all uncertainty-aware architectures.
arXiv Detail & Related papers (2020-06-11T17:39:07Z) - Computing the Testing Error without a Testing Set [33.068870286618655]
We derive an algorithm to estimate the performance gap between training and testing that does not require any testing dataset.
This allows us to compute the DNN's testing error on unseen samples, even when we do not have access to them.
arXiv Detail & Related papers (2020-05-01T15:35:50Z) - GraN: An Efficient Gradient-Norm Based Detector for Adversarial and
Misclassified Examples [77.99182201815763]
Deep neural networks (DNNs) are vulnerable to adversarial examples and other data perturbations.
GraN is a time- and parameter-efficient method that is easily adaptable to any DNN.
GraN achieves state-of-the-art performance on numerous problem set-ups.
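A simplified version of the core idea, scoring each input by the norm of the loss gradient taken at the model's own prediction; the published detector works on layer-wise parameter gradients with a small classifier on top, so this is only a sketch:

```python
import torch
import torch.nn.functional as F

def gradient_norm_score(model, x):
    """Compute, for each example, the norm of the gradient of the cross-entropy
    loss at the model's own predicted label with respect to the input.
    Adversarial and misclassified examples tend to produce larger gradient
    norms than clean, correctly handled inputs."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    pred = logits.argmax(dim=1)                    # model's own prediction as target
    loss = F.cross_entropy(logits, pred)
    grad, = torch.autograd.grad(loss, x)
    return grad.flatten(start_dim=1).norm(dim=1)   # one score per example
```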
arXiv Detail & Related papers (2020-04-20T10:09:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.