Distribution-Aware Testing of Neural Networks Using Generative Models
- URL: http://arxiv.org/abs/2102.13602v1
- Date: Fri, 26 Feb 2021 17:18:21 GMT
- Title: Distribution-Aware Testing of Neural Networks Using Generative Models
- Authors: Swaroopa Dola, Matthew B. Dwyer, Mary Lou Soffa
- Abstract summary: The reliability of software that has a Deep Neural Network (DNN) as a component is urgently important.
We show that three recent testing techniques generate a significant number of invalid test inputs.
We propose a technique to incorporate the valid input space of the DNN model under test in the test generation process.
- Score: 5.618419134365903
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The reliability of software that has a Deep Neural Network (DNN) as a
component is urgently important today given the increasing number of critical
applications being deployed with DNNs. The need for reliability raises a need
for rigorous testing of the safety and trustworthiness of these systems. In the
last few years, there have been a number of research efforts focused on testing
DNNs. However, the test generation techniques proposed so far lack a check to
determine whether the test inputs they are generating are valid, and thus
invalid inputs are produced. To illustrate this situation, we explored three
recent DNN testing techniques. Using deep generative model-based input
validation, we show that all three techniques generate a significant number of
invalid test inputs. We further analyzed the test coverage achieved by the
test inputs generated by the DNN testing techniques and showed how invalid test
inputs can falsely inflate test coverage metrics.
To overcome the inclusion of invalid inputs in testing, we propose a
technique to incorporate the valid input space of the DNN model under test in
the test generation process. Our technique uses a deep generative model-based
algorithm to generate only valid inputs. Results of our empirical studies show
that our technique is effective in eliminating invalid tests and boosting the
number of valid test inputs generated.
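To make the idea concrete, the sketch below shows one way a deep generative model can act as a validity check on generated test inputs: a variational autoencoder (VAE) trained on the model-under-test's training data reconstructs in-distribution inputs well, so a high reconstruction error flags a candidate test as invalid. This is a minimal illustration only; the `SmallVAE` architecture, the `is_valid_input` helper, and the threshold are assumptions for the sketch, not the paper's exact algorithm.

```python
# Illustrative sketch only: a VAE-reconstruction-based validity filter for
# candidate DNN test inputs. Architecture, threshold, and helper names are
# assumptions, not the paper's exact algorithm.
import torch
import torch.nn as nn

class SmallVAE(nn.Module):
    """Minimal VAE for 28x28 grayscale inputs (e.g., MNIST-like data)."""
    def __init__(self, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())
        self.fc_mu = nn.Linear(256, latent_dim)
        self.fc_logvar = nn.Linear(256, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 784), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        recon = self.decoder(z).view_as(x)
        return recon, mu, logvar

def is_valid_input(vae, x, threshold=0.05):
    """Treat a candidate test input as valid (in-distribution) if a VAE
    trained on the DNN's training data reconstructs it well."""
    vae.eval()
    with torch.no_grad():
        recon, _, _ = vae(x)
        # Per-sample mean squared reconstruction error as an (assumed) validity score.
        err = torch.mean((recon - x) ** 2, dim=(1, 2, 3))
    return err < threshold  # boolean mask over the batch

# Usage: filter candidate tests produced by any DNN test generator.
vae = SmallVAE()  # in practice, load weights trained on the model-under-test's training set
candidates = torch.rand(8, 1, 28, 28)  # stand-in for generated test inputs
valid_tests = candidates[is_valid_input(vae, candidates)]
```

A filter of this kind checks validity after generation; the paper's proposed technique goes further and incorporates the generative model into the test generation process itself, so that only valid inputs are produced in the first place.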
Related papers
- Robust Black-box Testing of Deep Neural Networks using Co-Domain Coverage [18.355332126489756]
Rigorous testing of machine learning models is necessary for trustworthy deployments.
We present a novel black-box approach for generating test-suites for robust testing of deep neural networks (DNNs).
arXiv Detail & Related papers (2024-08-13T09:42:57Z)
- Test Generation Strategies for Building Failure Models and Explaining Spurious Failures [4.995172162560306]
Test inputs fail not only when the system under test is faulty but also when the inputs are invalid or unrealistic.
We propose to build failure models for inferring interpretable rules on test inputs that cause spurious failures.
We show that our proposed surrogate-assisted approach generates failure models with an average accuracy of 83%.
arXiv Detail & Related papers (2023-12-09T18:36:15Z)
- GIST: Generated Inputs Sets Transferability in Deep Learning [12.147546375400749]
This paper introduces GIST (Generated Inputs Sets Transferability), a novel approach for the efficient transfer of test sets.
arXiv Detail & Related papers (2023-11-01T19:35:18Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
- TeST: Test-time Self-Training under Distribution Shift [99.68465267994783]
Test-Time Self-Training (TeST) is a technique that takes as input a model trained on some source data and a novel data distribution at test time.
We find that models adapted using TeST significantly improve over baseline test-time adaptation algorithms.
arXiv Detail & Related papers (2022-09-23T07:47:33Z)
- Generating and Detecting True Ambiguity: A Forgotten Danger in DNN Supervision Testing [8.210473195536077]
We propose a novel way to generate ambiguous inputs to test Deep Neural Networks (DNNs).
In particular, we propose AmbiGuess to generate ambiguous samples for image classification problems.
We find that the supervisors best suited to detect true ambiguity perform worse on invalid, out-of-distribution, and adversarial inputs, and vice versa.
arXiv Detail & Related papers (2022-07-21T14:21:34Z)
- TTAPS: Test-Time Adaption by Aligning Prototypes using Self-Supervision [70.05605071885914]
We propose a novel modification of the self-supervised training algorithm SwAV that adds the ability to adapt to single test samples.
We show the success of our method on the common benchmark dataset CIFAR10-C.
arXiv Detail & Related papers (2022-05-18T05:43:06Z)
- ALT-MAS: A Data-Efficient Framework for Active Testing of Machine Learning Algorithms [58.684954492439424]
We propose a novel framework to efficiently test a machine learning model using only a small amount of labeled test data.
The idea is to estimate the metrics of interest for a model-under-test using a Bayesian neural network (BNN).
arXiv Detail & Related papers (2021-04-11T12:14:04Z)
- SINVAD: Search-based Image Space Navigation for DNN Image Classifier Test Input Generation [0.0]
Testing of Deep Neural Networks (DNNs) has become increasingly important as DNNs are widely adopted by safety-critical systems.
Current testing techniques for DNNs depend on small local perturbations to existing inputs.
We propose new ways to search not over the entire image space, but rather over a plausible input space that resembles the true training distribution.
arXiv Detail & Related papers (2020-05-19T09:06:21Z)
- GraN: An Efficient Gradient-Norm Based Detector for Adversarial and Misclassified Examples [77.99182201815763]
Deep neural networks (DNNs) are vulnerable to adversarial examples and other data perturbations.
GraN is a time- and parameter-efficient method that is easily adaptable to any DNN.
GraN achieves state-of-the-art performance on numerous problem set-ups.
arXiv Detail & Related papers (2020-04-20T10:09:27Z)