An Overview of Structural Coverage Metrics for Testing Neural Networks
- URL: http://arxiv.org/abs/2208.03407v1
- Date: Fri, 5 Aug 2022 23:37:04 GMT
- Title: An Overview of Structural Coverage Metrics for Testing Neural Networks
- Authors: Muhammad Usman, Youcheng Sun, Divya Gopinath, Rishi Dange, Luca
Manolache, Corina S. Pasareanu
- Abstract summary: Deep neural network (DNN) models need to be thoroughly tested to ensure they can reliably perform well in different scenarios.
We provide an overview of structural coverage metrics for testing DNN models, including neuron coverage (NC), k-multisection neuron coverage (kMNC), top-k neuron coverage (TKNC), neuron boundary coverage (NBC), strong neuron activation coverage (SNAC), and modified condition/decision coverage (MC/DC).
We also provide a tool, DNNCov, which can measure the testing coverage for all these metrics.
- Score: 15.75167816958815
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural network (DNN) models, including those used in safety-critical
domains, need to be thoroughly tested to ensure that they can reliably perform
well in different scenarios. In this article, we provide an overview of
structural coverage metrics for testing DNN models, including neuron coverage
(NC), k-multisection neuron coverage (kMNC), top-k neuron coverage (TKNC),
neuron boundary coverage (NBC), strong neuron activation coverage (SNAC) and
modified condition/decision coverage (MC/DC). We evaluate the metrics on
realistic DNN models used for perception tasks (including LeNet-1, LeNet-4,
LeNet-5, and ResNet20) as well as on networks used in autonomy (TaxiNet). We
also provide a tool, DNNCov, which can measure the testing coverage for all
these metrics. DNNCov outputs an informative coverage report to enable
researchers and practitioners to assess the adequacy of DNN testing, compare
different coverage measures, and more conveniently inspect the model's
internals during testing.
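As an illustration of how two of the surveyed metrics can be computed, the sketch below estimates neuron coverage (NC) and k-multisection neuron coverage (kMNC) from the post-activation values of a Keras model. This is a minimal sketch, not DNNCov's implementation: the activation threshold t, the layer-selection heuristic, and the assumption that per-neuron [low, high] activation ranges have already been profiled on the training set are simplifications for illustration.

```python
import numpy as np
import tensorflow as tf

def layer_activations(model, inputs):
    """Return a list of (num_inputs, num_neurons) activation matrices,
    one per layer of a Keras functional model (simplified layer selection)."""
    probe = tf.keras.Model(model.inputs,
                           [l.output for l in model.layers if hasattr(l, "activation")])
    acts = probe(inputs, training=False)
    if not isinstance(acts, (list, tuple)):
        acts = [acts]
    return [np.reshape(a.numpy(), (a.shape[0], -1)) for a in acts]

def neuron_coverage(model, inputs, t=0.0):
    """NC: fraction of neurons activated above threshold t by at least one input."""
    covered = total = 0
    for a in layer_activations(model, inputs):
        covered += int(np.sum(np.any(a > t, axis=0)))
        total += a.shape[1]
    return covered / total

def k_multisection_coverage(model, inputs, lows, highs, k=10):
    """kMNC: fraction of the k equal-width sections of each neuron's
    [low, high] training-time activation range hit by at least one input.
    `lows`/`highs` are per-layer arrays assumed to be profiled on training data."""
    hit = total = 0
    for a, lo, hi in zip(layer_activations(model, inputs), lows, highs):
        width = np.maximum(hi - lo, 1e-12)
        section = np.floor((a - lo) / width * k).astype(int)   # section index per activation
        in_range = (section >= 0) & (section < k)              # out-of-range values belong to NBC/SNAC
        for n in range(a.shape[1]):
            hit += len(np.unique(section[in_range[:, n], n]))
        total += k * a.shape[1]
    return hit / total
```

NBC and SNAC would reuse the same profiled ranges but count activations that fall outside [low, high]: below low or above high for NBC, and only above high for SNAC.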
Related papers
- NAS-BNN: Neural Architecture Search for Binary Neural Networks [55.058512316210056]
We propose a novel neural architecture search scheme for binary neural networks, named NAS-BNN.
Our discovered binary model family outperforms previous BNNs for a wide range of operations (OPs) from 20M to 200M.
In addition, we validate the transferability of these searched BNNs on the object detection task, and our binary detectors with the searched BNNs achieve a new state-of-the-art result, e.g., 31.6% mAP with 370M OPs, on the MS COCO dataset.
arXiv Detail & Related papers (2024-08-28T02:17:58Z)
- Unveiling the Power of Sparse Neural Networks for Feature Selection [60.50319755984697]
Sparse Neural Networks (SNNs) have emerged as powerful tools for efficient feature selection.
We show that feature selection with SNNs trained with dynamic sparse training (DST) algorithms can achieve, on average, more than 50% memory and 55% FLOPs reduction.
arXiv Detail & Related papers (2024-08-08T16:48:33Z)
- Harnessing Neuron Stability to Improve DNN Verification [42.65507402735545]
We present VeriStable, a novel extension of the recently proposed DPLL-based DNN verification approach.
We evaluate the effectiveness of VeriStable across a range of challenging benchmarks including fully-connected feedforward networks (FNNs), convolutional neural networks (CNNs), and residual networks (ResNets).
Preliminary results show that VeriStable is competitive and outperforms state-of-the-art verification tools, including $\alpha$-$\beta$-CROWN and MN-BaB, the first and second performers of VNN-COMP, respectively.
arXiv Detail & Related papers (2024-01-19T23:48:04Z)
- Explainable Cost-Sensitive Deep Neural Networks for Brain Tumor Detection from Brain MRI Images considering Data Imbalance [0.0]
An automated pipeline is proposed, which encompasses five models: CNN, ResNet50, InceptionV3, EfficientNetB0 and NASNetMobile.
The performance of the proposed architecture is evaluated on a balanced dataset and found to yield an accuracy of 99.33% for the fine-tuned InceptionV3 model.
To further optimize the training process, a cost-sensitive neural network approach is proposed to handle imbalanced datasets.
arXiv Detail & Related papers (2023-08-01T15:35:06Z)
- Model-Agnostic Reachability Analysis on Deep Neural Networks [25.54542656637704]
We develop a model-agnostic verification framework, called DeepAgn.
It can be applied to FNNs, Recurrent Neural Networks (RNNs), or a mixture of both.
It does not require access to the network's internal structures, such as layers and parameters.
arXiv Detail & Related papers (2023-04-03T09:01:59Z)
- Black-Box Testing of Deep Neural Networks through Test Case Diversity [1.4700751484033807]
We investigate black-box input diversity metrics as an alternative to white-box coverage criteria.
Our experiments show that relying on the diversity of image features embedded in test input sets is a more reliable indicator for guiding DNN testing than coverage criteria (a minimal sketch of one such diversity score follows this entry).
arXiv Detail & Related papers (2021-12-20T20:12:53Z)
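As a hypothetical illustration of such a black-box diversity score (not necessarily the exact metric evaluated in the paper above), the sketch below computes the mean pairwise distance between feature embeddings of a test set; the pretrained VGG16 backbone and plain Euclidean distance are assumptions made for this example.

```python
import numpy as np
import tensorflow as tf

def feature_diversity(images):
    """Mean pairwise Euclidean distance between embeddings of a test set.
    `images`: float array of shape (n, 224, 224, 3), assumed to be preprocessed
    with tf.keras.applications.vgg16.preprocess_input."""
    backbone = tf.keras.applications.VGG16(include_top=False, pooling="avg")
    feats = backbone.predict(images, verbose=0)             # (n, 512) embeddings
    diffs = feats[:, None, :] - feats[None, :, :]           # (n, n, 512) pairwise differences
    dists = np.linalg.norm(diffs, axis=-1)                  # (n, n) distance matrix
    n = len(feats)
    return float(dists.sum() / (n * (n - 1)))               # average over off-diagonal pairs
```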
- Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks [55.531896312724555]
Bayesian Neural Networks (BNNs) are robust and adept at handling adversarial attacks by incorporating randomness.
We create our BNN model, called BNN-DenseNet, by fusing Bayesian inference (i.e., variational Bayes) into the DenseNet architecture.
An adversarially-trained BNN outperforms its non-Bayesian, adversarially-trained counterpart in most experiments.
arXiv Detail & Related papers (2021-11-16T16:14:44Z)
- Explore the Knowledge contained in Network Weights to Obtain Sparse Neural Networks [2.649890751459017]
This paper proposes a novel learning approach to obtain sparse fully connected layers in neural networks (NNs) automatically.
We design a switcher neural network (SNN) to optimize the structure of the task neural network (TNN).
arXiv Detail & Related papers (2021-03-26T11:29:40Z)
- BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [65.2021953284622]
We study the robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
- Approximation and Non-parametric Estimation of ResNet-type Convolutional Neural Networks [52.972605601174955]
We show that a ResNet-type CNN can attain the minimax optimal error rates in important function classes.
We derive approximation and estimation error rates of the aforementioned type of CNNs for the Barron and Hölder classes.
arXiv Detail & Related papers (2019-03-24T19:42:39Z)