A Survey on Assessing the Generalization Envelope of Deep Neural
Networks: Predictive Uncertainty, Out-of-distribution and Adversarial Samples
- URL: http://arxiv.org/abs/2008.09381v4
- Date: Mon, 6 Sep 2021 17:24:12 GMT
- Title: A Survey on Assessing the Generalization Envelope of Deep Neural
Networks: Predictive Uncertainty, Out-of-distribution and Adversarial Samples
- Authors: Julia Lust and Alexandru Paul Condurache
- Abstract summary: Deep Neural Networks (DNNs) achieve state-of-the-art performance on numerous applications.
It is difficult to tell beforehand whether a DNN receiving an input will deliver the correct output, since its decision criteria are usually nontransparent.
This survey connects the three fields within the larger framework of investigating the generalization performance of machine learning methods and in particular DNNs.
- Score: 77.99182201815763
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Neural Networks (DNNs) achieve state-of-the-art performance on numerous
applications. However, it is difficult to tell beforehand if a DNN receiving an
input will deliver the correct output, since its decision criteria are usually
nontransparent. A DNN delivers the correct output if the input is within the
area enclosed by its generalization envelope. In this case, the information
contained in the input sample is processed reasonably by the network. It is of
great practical importance to assess at inference time whether a DNN generalizes
correctly. Currently, the approaches to achieve this goal are investigated in
different problem set-ups rather independently from one another, leading to
three main research and literature fields: predictive uncertainty,
out-of-distribution detection and adversarial example detection. This survey
connects the three fields within the larger framework of investigating the
generalization performance of machine learning methods and in particular DNNs.
We underline the common ground, point out the most promising approaches, and give
a structured overview of the methods that provide, at inference time, the means to
establish whether the current input lies within the generalization envelope of a DNN.
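To make such an inference-time check concrete, the minimal sketch below uses the maximum softmax probability (MSP) baseline of Hendrycks & Gimpel (2017), one of the simplest scores in this literature; the model, threshold value, and calibration procedure are illustrative assumptions, not a method prescribed by this survey.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def msp_score(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Maximum softmax probability per input: a simple confidence score.

    Low values suggest the input may fall outside the model's
    generalization envelope (OOD, adversarial, or ambiguous).
    """
    logits = model(x)                                  # (batch, num_classes)
    return F.softmax(logits, dim=-1).max(dim=-1).values

def within_envelope(model: torch.nn.Module, x: torch.Tensor,
                    threshold: float = 0.9) -> torch.Tensor:
    # `threshold` is a hypothetical value; in practice it is tuned on
    # held-out validation data for a target false-alarm rate.
    return msp_score(model, x) >= threshold
```

More elaborate scores from the three fields, such as Monte Carlo dropout variance or feature-space distances, plug into the same accept/reject interface.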
Related papers
- Bayesian Neural Networks with Domain Knowledge Priors [52.80929437592308]
We propose a framework for integrating general forms of domain knowledge into a BNN prior.
We show that BNNs using our proposed domain knowledge priors outperform those with standard priors.
arXiv Detail & Related papers (2024-02-20T22:34:53Z)
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Conventional wisdom suggests that neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
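As an illustration of how the reversion-to-a-constant observation might support risk-sensitive decisions, the sketch below abstains when the predictive distribution is close to a fixed "constant" distribution; the KL criterion and the `min_kl` threshold are assumptions for illustration, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def risk_sensitive_predict(model: torch.nn.Module, x: torch.Tensor,
                           constant_probs: torch.Tensor,
                           min_kl: float = 0.1) -> torch.Tensor:
    """Abstain (label -1) when KL(constant || prediction) is small, i.e.
    when the network has reverted towards the constant distribution it
    tends to output on extreme OOD data and thus carries little
    input-specific signal."""
    logits = model(x)                                  # (batch, classes)
    log_p = F.log_softmax(logits, dim=-1)
    kl = F.kl_div(log_p, constant_probs.expand_as(log_p),
                  reduction="none").sum(dim=-1)        # per-sample KL
    preds = logits.argmax(dim=-1)
    preds[kl < min_kl] = -1                            # -1 = abstain
    return preds
```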
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
- Verifying Generalization in Deep Learning [3.4948705785954917]
Deep neural networks (DNNs) are the workhorses of deep learning.
DNNs are notoriously prone to poor generalization, i.e., they may prove inadequate on inputs not encountered during training.
We propose a novel, verification-driven methodology for identifying DNN-based decision rules that generalize well to new input domains.
arXiv Detail & Related papers (2023-02-11T17:08:15Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
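For intuition only, here is a brute-force sketch of the counting problem over a small discretized input space; the paper's contribution is an exact counting procedure that avoids such enumeration, and the grid, network, and property below are placeholder assumptions.

```python
import itertools
import numpy as np

def count_unsafe_inputs(net, is_safe, grid_axes) -> int:
    """Naive #DNN-Verification: enumerate every grid point of a finite
    input domain and count safety-property violations.  Exponential in
    the input dimension; shown only to make the problem concrete."""
    violations = 0
    for point in itertools.product(*grid_axes):
        x = np.asarray(point, dtype=np.float32)
        if not is_safe(x, net(x)):
            violations += 1
    return violations

# Hypothetical usage: a 2-D input discretized into 100 steps per axis.
# grid = [np.linspace(0.0, 1.0, 100)] * 2
# n_unsafe = count_unsafe_inputs(net, is_safe, grid)
```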
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
- Towards Better Out-of-Distribution Generalization of Neural Algorithmic Reasoning Tasks [51.8723187709964]
We study the OOD generalization of neural algorithmic reasoning tasks.
The goal is to learn an algorithm from input-output pairs using deep neural networks.
arXiv Detail & Related papers (2022-11-01T18:33:20Z)
- Bounding generalization error with input compression: An empirical study with infinite-width networks [16.17600110257266]
Estimating the Generalization Error (GE) of Deep Neural Networks (DNNs) is an important task that often relies on the availability of held-out data.
In search of a quantity relevant to GE, we investigate the Mutual Information (MI) between the input and final layer representations.
An existing input compression-based GE bound is used to link MI and GE.
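One commonly quoted form of such an input compression bound is GE <= sqrt((2^I(X;T) + log(1/delta)) / (2N)), where 2^I(X;T) acts as an effective hypothesis-class size; the helper below simply evaluates that expression, with the caveat that exact constants differ across formulations and estimating the MI itself is the hard part.

```python
import math

def input_compression_bound(mi_bits: float, n_samples: int,
                            delta: float = 0.05) -> float:
    """Evaluate GE <= sqrt((2**I(X;T) + log(1/delta)) / (2 * N)).

    `mi_bits` is an estimate of the Mutual Information I(X;T), in bits,
    between the input X and the final-layer representation T.  Treat
    this as an illustrative form, not the paper's exact bound.
    """
    return math.sqrt((2.0 ** mi_bits + math.log(1.0 / delta))
                     / (2.0 * n_samples))

# e.g. input_compression_bound(mi_bits=10.0, n_samples=50_000) ~= 0.10
```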
arXiv Detail & Related papers (2022-07-19T17:05:02Z)
- A Uniform Framework for Anomaly Detection in Deep Neural Networks [0.5099811144731619]
We consider three classes of anomalous inputs:
(1) natural inputs drawn from a distribution different from the one the DNN was trained on, known as Out-of-Distribution (OOD) samples;
(2) inputs crafted by attackers from in-distribution (ID) data, often known as adversarial (AD) samples; and (3) noise (NS) samples generated from meaningless data.
We propose a framework that aims to detect all these anomalies for a pre-trained DNN.
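A minimal sketch of a single detector applied uniformly to all three anomaly classes is given below, using a Mahalanobis distance in feature space, a standard choice in this literature and not necessarily the paper's; the feature extractor and any decision threshold are assumptions.

```python
import numpy as np

class FeatureSpaceDetector:
    """One detector for OOD, adversarial, and noise inputs: fit a
    Gaussian to in-distribution (ID) features of a pre-trained DNN,
    then score new inputs by Mahalanobis distance (larger = more
    anomalous)."""

    def fit(self, id_features: np.ndarray) -> "FeatureSpaceDetector":
        self.mean = id_features.mean(axis=0)
        cov = np.cov(id_features, rowvar=False)
        self.precision = np.linalg.pinv(cov)   # robust to singular cov
        return self

    def score(self, features: np.ndarray) -> np.ndarray:
        diff = features - self.mean            # (batch, dim)
        return np.einsum("bi,ij,bj->b", diff, self.precision, diff)
```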
arXiv Detail & Related papers (2021-10-06T22:42:30Z)
- Generalizing Neural Networks by Reflecting Deviating Data in Production [15.498447555957773]
We present a runtime approach that mitigates DNN mis-predictions caused by unexpected runtime inputs to the DNN.
We use a distribution analyzer based on the distance metric learned by a Siamese network to identify "unseen" semantically-preserving inputs.
Our approach transforms those unexpected inputs into inputs from the training set that are identified as having similar semantics.
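The sketch below captures that runtime idea under stated assumptions: a Siamese-trained encoder, a precomputed bank of training embeddings, and a validation-tuned distance threshold `tau` (all hypothetical names) are used to flag "unseen" inputs and otherwise substitute the most similar training input.

```python
import torch

@torch.no_grad()
def reflect_or_defer(encoder, x, train_inputs, train_embs, tau):
    """Embed the incoming sample; if its nearest training embedding is
    farther than `tau`, treat it as unseen and defer, otherwise return
    the semantically closest training input in its place."""
    z = encoder(x.unsqueeze(0))                    # (1, dim)
    dists = torch.cdist(z, train_embs).squeeze(0)  # (n_train,)
    j = int(dists.argmin())
    if float(dists[j]) > tau:
        return None   # unseen: defer to a fallback policy or a human
    return train_inputs[j]
```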
arXiv Detail & Related papers (2021-10-06T13:05:45Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising direction, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)