Window-Based Distribution Shift Detection for Deep Neural Networks
- URL: http://arxiv.org/abs/2210.10897v3
- Date: Thu, 8 Jun 2023 14:47:19 GMT
- Title: Window-Based Distribution Shift Detection for Deep Neural Networks
- Authors: Guy Bar-Shalom, Yonatan Geifman, Ran El-Yaniv
- Abstract summary: We study the case of monitoring the healthy operation of a deep neural network (DNN) receiving a stream of data.
Using selective prediction principles, we propose a distribution deviation detection method for DNNs.
Our novel detection method performs on par with or better than the state of the art, while consuming substantially less computation time.
- Score: 21.73028341299301
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To deploy and operate deep neural models in production, the quality of their
predictions, which might be contaminated benignly or manipulated maliciously by
input distributional deviations, must be monitored and assessed. Specifically,
we study the case of monitoring the healthy operation of a deep neural network
(DNN) receiving a stream of data, with the aim of detecting input
distributional deviations under which the quality of the network's predictions
may be degraded. Using selective prediction principles, we propose a
distribution deviation detection method for DNNs. The proposed method is
derived from a tight coverage generalization bound computed over a sample of
instances drawn from the true underlying distribution. Based on this bound, our
detector continuously monitors the operation of the network out-of-sample over
a test window and fires off an alarm whenever a deviation is detected. Our
novel detection method performs on par with or better than the state of the
art, while consuming substantially less computation time (a five-order-of-magnitude
reduction) and space. Unlike previous methods, which require at least linear
dependence on the size of the source distribution for each detection, rendering
them inapplicable to "Google-Scale" datasets, our approach eliminates this
dependence, making it suitable for real-world applications.
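To make the abstract concrete, here is a minimal sketch of a window-based coverage monitor in the spirit described above; it is not the authors' exact procedure. A confidence threshold is calibrated on a source sample to hit a target coverage, and an alarm fires whenever the empirical coverage over a test window drops below that target by more than a margin. The maximum-softmax-probability confidence score, the slack `epsilon` (which the paper instead derives from a coverage generalization bound), and all function names are illustrative assumptions.

```python
import numpy as np

def calibrate_threshold(source_conf: np.ndarray, target_coverage: float) -> float:
    """Pick a confidence threshold on a held-out source sample so that
    roughly `target_coverage` of source inputs are accepted (covered)."""
    # Accepting examples with confidence >= the (1 - target_coverage)
    # quantile yields approximately target_coverage coverage on the source.
    return float(np.quantile(source_conf, 1.0 - target_coverage))

def window_alarm(window_conf: np.ndarray, threshold: float,
                 target_coverage: float, epsilon: float) -> bool:
    """Fire an alarm if the empirical coverage over the test window falls
    below the calibrated coverage by more than `epsilon` (here a simple
    user-chosen slack, an assumption; the paper uses a generalization bound)."""
    empirical_coverage = float(np.mean(window_conf >= threshold))
    return empirical_coverage < target_coverage - epsilon

# Illustrative usage with synthetic confidence scores.
rng = np.random.default_rng(0)
source_conf = rng.beta(8, 2, size=10_000)   # stand-in for in-distribution confidences
shifted_conf = rng.beta(4, 4, size=500)     # stand-in for a shifted test window
theta = calibrate_threshold(source_conf, target_coverage=0.9)
print(window_alarm(shifted_conf, theta, target_coverage=0.9, epsilon=0.05))
```

Because each test window only requires comparing confidences against a precomputed threshold, the per-detection cost in this sketch is independent of the source sample size, which is the property the abstract emphasizes.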
Related papers
- Projection Regret: Reducing Background Bias for Novelty Detection via
Diffusion Models [72.07462371883501]
We propose Projection Regret (PR), an efficient novelty detection method that mitigates the bias of non-semantic information.
PR computes the perceptual distance between the test image and its diffusion-based projection to detect abnormality.
Extensive experiments demonstrate that PR outperforms the prior art of generative-model-based novelty detection methods by a significant margin.
arXiv Detail & Related papers (2023-12-05T09:44:47Z) - Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Conventional wisdom suggests that neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z) - iDECODe: In-distribution Equivariance for Conformal Out-of-distribution
Detection [24.518698391381204]
Machine learning methods such as deep neural networks (DNNs) often generate incorrect predictions with high confidence.
We propose the new method iDECODe, leveraging in-distribution equivariance for conformal OOD detection.
We demonstrate the efficacy of iDECODe by experiments on image and audio datasets, obtaining state-of-the-art results.
arXiv Detail & Related papers (2022-01-07T05:21:40Z) - Out-of-Distribution Example Detection in Deep Neural Networks using
Distance to Modelled Embedding [0.0]
We present Distance to Modelled Embedding (DIME), which we use to detect out-of-distribution examples at prediction time.
By approximating the training set's embedding in feature space as a linear hyperplane, we derive a simple, unsupervised, highly performant and computationally efficient method (see the sketch after this list).
arXiv Detail & Related papers (2021-08-24T12:28:04Z) - Out-of-Distribution Detection using Outlier Detection Methods [0.0]
Out-of-distribution (OOD) detection deals with anomalous input to neural networks.
We use outlier detection algorithms to detect anomalous input as reliably as specialized methods from the field of OOD detection.
No adaptation of the neural network is required; detection is based on the model's softmax score (a sketch of this approach also appears after the list).
arXiv Detail & Related papers (2021-08-18T16:05:53Z) - DAAIN: Detection of Anomalous and Adversarial Input using Normalizing
Flows [52.31831255787147]
We introduce a novel technique, DAAIN, to detect out-of-distribution (OOD) inputs and adversarial attacks (AA).
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution.
Our model can be trained on a single GPU making it compute efficient and deployable without requiring specialized accelerators.
arXiv Detail & Related papers (2021-05-30T22:07:13Z) - Adversarial Examples Detection with Bayesian Neural Network [57.185482121807716]
We propose a new framework to detect adversarial examples motivated by the observations that random components can improve the smoothness of predictors.
We propose a novel Bayesian adversarial example detector, short for BATer, to improve the performance of adversarial example detection.
arXiv Detail & Related papers (2021-05-18T15:51:24Z) - Statistical Testing for Efficient Out of Distribution Detection in Deep
Neural Networks [26.0303701309125]
This paper frames the Out Of Distribution (OOD) detection problem in Deep Neural Networks as a statistical hypothesis testing problem.
We build on this framework to suggest a novel OOD procedure based on low-order statistics.
Our method achieves comparable or better than state-of-the-art results on well-accepted OOD benchmarks without retraining the network parameters.
arXiv Detail & Related papers (2021-02-25T16:14:47Z) - Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
They are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
arXiv Detail & Related papers (2021-02-22T07:02:37Z) - Cross-Validation and Uncertainty Determination for Randomized Neural
Networks with Applications to Mobile Sensors [0.0]
Extreme learning machines provide an attractive and efficient method for supervised learning under limited computing resources and for green machine learning.
Results on supervised learning with such networks and regression methods are discussed in terms of consistency and bounds on the generalization and prediction error.
arXiv Detail & Related papers (2021-01-06T12:28:06Z) - Unlabelled Data Improves Bayesian Uncertainty Calibration under
Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
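As referenced in the DIME entry above, the following is a minimal sketch of a distance-to-modelled-embedding style score: in-distribution embeddings are approximated by a low-dimensional linear subspace (here via PCA), and the residual distance of a test embedding to that subspace is used as the out-of-distribution score. The use of PCA, the number of components, and the class and function names are assumptions made for illustration, not details taken from the DIME paper.

```python
import numpy as np
from sklearn.decomposition import PCA

class EmbeddingDistanceDetector:
    """OOD score given by the distance from a test embedding to a linear
    subspace fitted on in-distribution (training) embeddings."""

    def __init__(self, n_components: int = 32):
        self.pca = PCA(n_components=n_components)

    def fit(self, train_embeddings: np.ndarray) -> "EmbeddingDistanceDetector":
        self.pca.fit(train_embeddings)
        return self

    def score(self, embeddings: np.ndarray) -> np.ndarray:
        # Project onto the fitted subspace and back; the residual norm
        # measures how far each embedding lies from the modelled subspace.
        reconstructed = self.pca.inverse_transform(self.pca.transform(embeddings))
        return np.linalg.norm(embeddings - reconstructed, axis=1)

# Illustrative usage with random vectors standing in for DNN feature embeddings.
rng = np.random.default_rng(1)
train_embeddings = rng.normal(size=(2_000, 128))
test_embeddings = rng.normal(loc=3.0, size=(10, 128))
detector = EmbeddingDistanceDetector(n_components=32).fit(train_embeddings)
print(detector.score(test_embeddings))   # larger distances suggest OOD inputs
```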
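Similarly, for the entry on OOD detection with generic outlier detection methods, the sketch below fits an off-the-shelf outlier detector on in-distribution softmax scores and applies it unchanged at test time; the choice of IsolationForest and of the maximum softmax probability as the input feature are illustrative assumptions rather than choices prescribed by that paper.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def fit_softmax_outlier_detector(id_softmax: np.ndarray) -> IsolationForest:
    """Fit a generic outlier detector on in-distribution softmax outputs;
    no adaptation of the underlying network is required."""
    scores = id_softmax.max(axis=1, keepdims=True)  # max softmax probability
    return IsolationForest(random_state=0).fit(scores)

def is_ood(detector: IsolationForest, softmax: np.ndarray) -> np.ndarray:
    """Return True for inputs the detector flags as outliers (label -1)."""
    scores = softmax.max(axis=1, keepdims=True)
    return detector.predict(scores) == -1

# Illustrative usage with synthetic softmax outputs of a 10-class model.
rng = np.random.default_rng(2)
id_softmax = rng.dirichlet(np.full(10, 0.1), size=5_000)  # peaked, confident
new_softmax = rng.dirichlet(np.full(10, 5.0), size=20)    # diffuse, uncertain
detector = fit_softmax_outlier_detector(id_softmax)
print(is_ood(detector, new_softmax))
```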
This list is automatically generated from the titles and abstracts of the papers on this site.