Simple Techniques Work Surprisingly Well for Neural Network Test Prioritization and Active Learning (Replicability Study)
- URL: http://arxiv.org/abs/2205.00664v1
- Date: Mon, 2 May 2022 05:47:34 GMT
- Title: Simple Techniques Work Surprisingly Well for Neural Network Test Prioritization and Active Learning (Replicability Study)
- Authors: Michael Weiss and Paolo Tonella
- Abstract summary: Test Input Prioritizers (TIP) for Deep Neural Networks (DNN) are an important technique to handle the typically very large test datasets efficiently.
Feng et al. propose DeepGini, a very fast and simple TIP, and show that it outperforms more elaborate techniques such as neuron- and surprise coverage.
- Score: 4.987581730476023
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Test Input Prioritizers (TIP) for Deep Neural Networks (DNN) are an important
technique for handling the typically very large test datasets efficiently, saving
computation and labeling costs. This is particularly true for large-scale, deployed
systems, where inputs observed in production are recorded to serve as potential test
or training data for the next versions of the system. Feng et al. propose DeepGini,
a very fast and simple TIP, and show that it outperforms more elaborate techniques
such as neuron- and surprise coverage. In a large-scale study (4 case studies,
8 test datasets, 32,200 trained models) we verify their findings. However, we also
find that other comparable or even simpler baselines from the field of uncertainty
quantification, such as the predicted softmax likelihood or the entropy of the
predicted softmax likelihoods, perform as well as DeepGini.
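As a minimal, non-authoritative sketch of the uncertainty-based scores discussed in the abstract, the snippet below computes DeepGini (1 minus the sum of squared softmax probabilities), the maximum softmax likelihood, and the softmax entropy, and ranks test inputs so the most uncertain ones come first. The function names, the toy `probs` array, and the NumPy formulation are illustrative assumptions, not code from the paper.

```python
import numpy as np

def deepgini(probs):
    # DeepGini impurity: 1 - sum_c p_c^2 (higher = more uncertain).
    return 1.0 - np.sum(probs ** 2, axis=1)

def max_softmax(probs):
    # Predicted softmax likelihood baseline: a low maximum probability means
    # the model is unsure, so negate it to get a "higher = prioritize" score.
    return -np.max(probs, axis=1)

def softmax_entropy(probs):
    # Entropy of the predicted softmax distribution (higher = more uncertain).
    eps = 1e-12
    return -np.sum(probs * np.log(probs + eps), axis=1)

def prioritize(probs, score_fn=deepgini):
    # Return test-input indices sorted from most to least uncertain,
    # e.g. to pick the first k inputs for labeling or regression testing.
    return np.argsort(-score_fn(probs))

# Hypothetical usage: `probs` stands in for the softmax outputs of the DNN
# under test on an unlabeled test set, shape (n_inputs, n_classes).
probs = np.array([[0.98, 0.01, 0.01],
                  [0.40, 0.35, 0.25],
                  [0.70, 0.20, 0.10]])
print(prioritize(probs))                   # most uncertain input first
print(prioritize(probs, softmax_entropy))  # entropy gives the same ranking here
```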
Related papers
- NIDS Neural Networks Using Sliding Time Window Data Processing with Trainable Activations and its Generalization Capability [0.0]
This paper presents neural networks for network intrusion detection systems (NIDS) that operate on flow data preprocessed with a time window.
It requires only eleven features, which do not rely on deep packet inspection, can be found in most NIDS datasets, and are easily obtained from conventional flow collectors.
The reported training accuracy exceeds 99% for the proposed method with as few as twenty neural network input features.
arXiv Detail & Related papers (2024-10-24T11:36:19Z)
- Unrolled denoising networks provably learn optimal Bayesian inference [54.79172096306631]
We prove the first rigorous learning guarantees for neural networks based on unrolling approximate message passing (AMP).
For compressed sensing, we prove that when trained on data drawn from a product prior, the layers of the network converge to the same denoisers used in Bayes AMP.
arXiv Detail & Related papers (2024-09-19T17:56:16Z)
- Rethinking Deep Learning: Propagating Information in Neural Networks without Backpropagation and Statistical Optimization [0.0]
This study discusses the information propagation capabilities and potential practical applications of NNs as structures that mimic neural systems.
In this study, the NN architecture comprises fully connected layers using step functions as activation functions, with 0-15 hidden layers and no weight updates.
The accuracy is calculated by comparing the average output vectors of the training data for each label with the output vectors of the test data, based on vector similarity.
arXiv Detail & Related papers (2024-08-18T09:22:24Z)
- Bayesian Neural Networks with Domain Knowledge Priors [52.80929437592308]
We propose a framework for integrating general forms of domain knowledge into a BNN prior.
We show that BNNs using our proposed domain knowledge priors outperform those with standard priors.
arXiv Detail & Related papers (2024-02-20T22:34:53Z)
- Inferring Data Preconditions from Deep Learning Models for Trustworthy Prediction in Deployment [25.527665632625627]
It is important to reason about the trustworthiness of the model's predictions with unseen data during deployment.
Existing methods for specifying and verifying traditional software are insufficient for this task.
We propose a novel technique that uses rules derived from neural network computations to infer data preconditions.
arXiv Detail & Related papers (2024-01-26T03:47:18Z)
- Data Augmentations in Deep Weight Spaces [89.45272760013928]
We introduce a novel augmentation scheme based on the Mixup method.
We evaluate the performance of these techniques on existing benchmarks as well as new benchmarks we generate.
arXiv Detail & Related papers (2023-11-15T10:43:13Z)
- Deep networks for system identification: a Survey [56.34005280792013]
System identification learns mathematical descriptions of dynamic systems from input-output data.
The main aim of the identified model is to predict new data from previous observations.
We discuss architectures commonly adopted in the literature, like feedforward, convolutional, and recurrent networks.
arXiv Detail & Related papers (2023-01-30T12:38:31Z)
- DeepTPI: Test Point Insertion with Deep Reinforcement Learning [6.357061090668433]
Test point insertion (TPI) is a widely used technique for testability enhancement.
We propose a novel TPI approach based on deep reinforcement learning (DRL), named DeepTPI.
We show that DeepTPI significantly improves test coverage compared to the commercial DFT tool.
arXiv Detail & Related papers (2022-06-07T14:13:42Z)
- ALT-MAS: A Data-Efficient Framework for Active Testing of Machine Learning Algorithms [58.684954492439424]
We propose a novel framework to efficiently test a machine learning model using only a small amount of labeled test data.
The idea is to estimate the metrics of interest for a model-under-test using a Bayesian neural network (BNN).
arXiv Detail & Related papers (2021-04-11T12:14:04Z)
- EagerNet: Early Predictions of Neural Networks for Computationally Efficient Intrusion Detection [2.223733768286313]
We propose a new architecture to detect network attacks with minimal resources.
The architecture is able to deal with either binary or multiclass classification problems and trades prediction speed for the accuracy of the network.
arXiv Detail & Related papers (2020-07-27T11:31:37Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective to design of future DeepSNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.