Partial Wasserstein and Maximum Mean Discrepancy distances for bridging the gap between outlier detection and drift detection
- URL: http://arxiv.org/abs/2106.12893v1
- Date: Wed, 9 Jun 2021 18:49:55 GMT
- Title: Partial Wasserstein and Maximum Mean Discrepancy distances for bridging the gap between outlier detection and drift detection
- Authors: Thomas Viehmann
- Abstract summary: An important aspect of monitoring is to check whether the inputs have strayed from the distribution they were validated for.
We bridge the gap between outlier detection and drift detection by comparing a given number of inputs to an automatically chosen part of the reference distribution.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the rise of machine learning and deep learning based applications in practice, monitoring, i.e. verifying that these systems operate within specification, has become an important practical problem. An important aspect of this monitoring is to check whether the inputs (or intermediates) have strayed from the distribution they were validated for, which can void the performance assurances obtained during testing.
There are two common approaches for this. The first, and perhaps more classical, is outlier detection or novelty detection: for a single input, we ask whether it is an outlier, i.e. exceedingly unlikely to have originated from a reference distribution. The second, perhaps more recent, approach is to consider a larger number of inputs and compare their distribution to a reference distribution (e.g. one sampled during testing). This is done under the label of drift detection.
In this work, we bridge the gap between outlier detection and drift detection by comparing a given number of inputs to an automatically chosen part of the reference distribution.
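The abstract names the two distances but does not spell out a construction here. Below is a minimal sketch of both statistics applied to a batch-versus-reference comparison: an unbiased squared-MMD estimate, and a partial-Wasserstein cost that matches the whole batch against a chosen fraction of the reference. This is not the paper's actual procedure; it assumes numpy and the POT library (`ot.partial.partial_wasserstein`, whose signature may vary across versions), and the bandwidth `sigma` and mass fraction `mass` are illustrative placeholders.

```python
import numpy as np
import ot  # POT (Python Optimal Transport); pip install pot

def mmd2_rbf(x, y, sigma=1.0):
    """Unbiased estimate of squared MMD between samples x and y (RBF kernel)."""
    def gram(a, b):
        d2 = (np.sum(a**2, axis=1)[:, None]
              + np.sum(b**2, axis=1)[None, :]
              - 2.0 * a @ b.T)
        return np.exp(-d2 / (2.0 * sigma**2))
    kxx, kyy, kxy = gram(x, x), gram(y, y), gram(x, y)
    n, m = len(x), len(y)
    np.fill_diagonal(kxx, 0.0)  # drop diagonal terms for the unbiased estimator
    np.fill_diagonal(kyy, 0.0)
    return (kxx.sum() / (n * (n - 1))
            + kyy.sum() / (m * (m - 1))
            - 2.0 * kxy.mean())

def partial_w_cost(x_ref, x_batch, mass=0.1):
    """Transport all of the batch's mass onto the best-matching `mass`
    fraction of the reference sample and return the transport cost."""
    a = np.full(len(x_ref), 1.0 / len(x_ref))       # reference: total mass 1
    b = np.full(len(x_batch), mass / len(x_batch))  # batch: total mass = `mass`
    M = ot.dist(x_ref, x_batch)                     # squared Euclidean costs
    plan = ot.partial.partial_wasserstein(a, b, M, m=mass)
    return float(np.sum(plan * M))

# Toy check: an in-distribution batch vs. a shifted (drifted) batch.
rng = np.random.default_rng(0)
ref = rng.normal(size=(200, 2))
ok_batch = rng.normal(size=(20, 2))
drifted = rng.normal(size=(20, 2)) + np.array([3.0, 0.0])
print(mmd2_rbf(ref, ok_batch), mmd2_rbf(ref, drifted))            # small vs. large
print(partial_w_cost(ref, ok_batch), partial_w_cost(ref, drifted))
```

Thresholding either statistic (e.g. against values observed on held-out reference batches) then yields a drift alarm. The paper's contribution concerns how the retained part of the reference is chosen automatically, which this sketch instead fixes by hand via `mass`.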
Related papers
- Drift Detection: Introducing Gaussian Split Detector [1.9430846345184412]
We introduce Gaussian Split Detector (GSD), a novel drift detector that works in batch mode.
GSD is designed to work when the data follow a normal distribution and makes use of Gaussian mixture models to monitor changes in the decision boundary.
We show that our detector outperforms the state of the art in detecting real drift and in ignoring virtual drift, which is key to avoiding false alarms.
arXiv Detail & Related papers (2024-05-14T14:15:31Z)
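The summary above only names GSD's ingredients. As a loose illustration of the general pattern (a Gaussian-mixture density fit on reference data used to screen incoming batches), not GSD's actual algorithm, here is a sketch using scikit-learn's GaussianMixture; `n_components`, `tol`, and the decision rule are invented placeholders.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_reference_gmm(x_ref, n_components=3, seed=0):
    """Fit a Gaussian mixture density to a reference batch."""
    return GaussianMixture(n_components=n_components, random_state=seed).fit(x_ref)

def batch_drifted(gmm, x_ref, x_batch, tol=3.0):
    """Placeholder decision rule: flag drift when the batch's mean
    log-likelihood under the reference mixture falls more than `tol`
    standard errors below the reference mean log-likelihood."""
    ref_ll = gmm.score_samples(x_ref)    # per-sample log-densities
    batch_ll = gmm.score_samples(x_batch)
    stderr = ref_ll.std() / np.sqrt(len(x_batch))
    return batch_ll.mean() < ref_ll.mean() - tol * stderr

rng = np.random.default_rng(1)
ref = rng.normal(size=(500, 2))
gmm = fit_reference_gmm(ref)
print(batch_drifted(gmm, ref, rng.normal(size=(50, 2))))        # expect False
print(batch_drifted(gmm, ref, rng.normal(size=(50, 2)) + 4.0))  # expect True
```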
- CADM: Confusion Model-based Detection Method for Real-drift in Chunk Data Stream [3.0885191226198785]
Concept drift detection has attracted considerable attention due to its importance in many real-world applications such as health monitoring and fault diagnosis.
We propose a new concept-confusion-based approach to detect real drift in chunk data streams with limited annotations.
arXiv Detail & Related papers (2023-03-25T08:59:27Z)
- Breaking Down Out-of-Distribution Detection: Many Methods Based on OOD Training Data Estimate a Combination of the Same Core Quantities [104.02531442035483]
The goal of this paper is to recognize common objectives as well as to identify the implicit scoring functions of different OOD detection methods.
We show that binary discrimination between in- and (different) out-distributions is equivalent to several distinct formulations of the OOD detection problem.
We also show that the confidence loss which is used by Outlier Exposure has an implicit scoring function which differs in a non-trivial fashion from the theoretically optimal scoring function.
arXiv Detail & Related papers (2022-06-20T16:32:49Z)
- ReDFeat: Recoupling Detection and Description for Multimodal Feature Learning [51.07496081296863]
We recouple independent constraints of detection and description of multimodal feature learning with a mutual weighting strategy.
We propose a detector that possesses a large receptive field and is equipped with learnable non-maximum suppression layers.
We build a benchmark containing cross-modal visible, infrared, near-infrared, and synthetic aperture radar image pairs for evaluating features on feature matching and image registration tasks.
arXiv Detail & Related papers (2022-05-16T04:24:22Z)
- Few-shot Forgery Detection via Guided Adversarial Interpolation [56.59499187594308]
Existing forgery detection methods suffer from significant performance drops when applied to unseen novel forgery approaches.
We propose Guided Adversarial Interpolation (GAI) to overcome the few-shot forgery detection problem.
We validate that our method is robust to the choice of majority and minority forgery approaches.
arXiv Detail & Related papers (2022-04-12T16:05:10Z)
- Context-Aware Drift Detection [0.0]
Two-sample tests of homogeneity form the foundation upon which existing approaches to drift detection build.
We develop a more general drift detection framework built upon a foundation of two-sample tests for conditional distributional treatment effects.
arXiv Detail & Related papers (2022-03-16T14:23:02Z)
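Since the entry above leans on two-sample tests of homogeneity as the foundation of drift detection, here is a minimal generic permutation test, not the conditional-treatment-effect construction that paper proposes. The difference-of-means statistic is a stand-in; any batch statistic (e.g. MMD from the sketch further above) can be plugged in.

```python
import numpy as np

def permutation_pvalue(x, y, stat, n_perm=1000, seed=0):
    """Permutation two-sample test of H0: x and y share a distribution.
    `stat(a, b)` returns a scalar; larger values mean 'more different'."""
    rng = np.random.default_rng(seed)
    observed = stat(x, y)
    pooled = np.concatenate([x, y])
    hits = 0
    for _ in range(n_perm):
        perm = rng.permutation(len(pooled))
        hits += stat(pooled[perm[:len(x)]], pooled[perm[len(x):]]) >= observed
    return (hits + 1) / (n_perm + 1)  # +1 keeps the p-value valid

mean_gap = lambda a, b: abs(a.mean() - b.mean())  # placeholder statistic
rng = np.random.default_rng(2)
x, y = rng.normal(0.0, 1.0, 300), rng.normal(0.5, 1.0, 100)
print(permutation_pvalue(x, y, mean_gap))  # small p-value => reject homogeneity
```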
- Tracking the risk of a deployed model and detecting harmful distribution shifts [105.27463615756733]
In practice, it may make sense to ignore benign shifts, under which the performance of a deployed model does not degrade substantially.
We argue that a sensible method for firing off a warning has to both (a) detect harmful shifts while ignoring benign ones, and (b) allow continuous monitoring of model performance without increasing the false alarm rate.
arXiv Detail & Related papers (2021-10-12T17:21:41Z)
- Task-Sensitive Concept Drift Detector with Metric Learning [7.706795195017394]
We propose a novel task-sensitive drift detection framework, which is able to detect drifts without access to true labels during inference.
It is able to detect real drift, where the drift affects the classification performance, while it properly ignores virtual drift.
We evaluate the performance of the proposed framework with a novel metric, which accumulates the standard metrics of detection accuracy, false positive rate and detection delay into one value.
arXiv Detail & Related papers (2021-08-16T09:10:52Z)
- DAAIN: Detection of Anomalous and Adversarial Input using Normalizing Flows [52.31831255787147]
We introduce a novel technique, DAAIN, to detect out-of-distribution (OOD) inputs and adversarial attacks (AA).
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution.
Our model can be trained on a single GPU, making it compute-efficient and deployable without requiring specialized accelerators.
arXiv Detail & Related papers (2021-05-30T22:07:13Z)
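The DAAIN summary describes the recipe only at a high level. As a toy stand-in, the sketch below replaces the normalizing flow with a Gaussian kernel density estimate over hidden activations and scores inputs whose activations are unlikely under the reference density; the random-projection "network" and all names here are invented for illustration.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)

# Stand-in "network": a fixed random projection + ReLU plays the role of
# the hidden layer whose activation distribution is monitored.
W = rng.normal(size=(2, 8))
act = lambda x: np.maximum(x @ W, 0.0)

# Fit a density estimator on reference activations. DAAIN uses a
# normalizing flow; a kernel density estimate keeps the sketch light.
ref = rng.normal(size=(500, 2))
kde = gaussian_kde(act(ref).T)  # gaussian_kde expects shape (dim, n)

def ood_score(x):
    """Higher = activations less likely under the reference density."""
    return -np.log(kde(act(x).T) + 1e-12)

print(ood_score(rng.normal(size=(3, 2))))        # in-distribution: lower scores
print(ood_score(rng.normal(size=(3, 2)) + 5.0))  # shifted inputs: higher scores
```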
- Do We Really Need to Learn Representations from In-domain Data for Outlier Detection? [6.445605125467574]
Methods based on the two-stage framework achieve state-of-the-art performance on this task.
We explore the possibility of avoiding the high cost of training a distinct representation for each outlier detection task.
In experiments, we demonstrate competitive or better performance on a variety of outlier detection benchmarks compared with previous two-stage methods.
arXiv Detail & Related papers (2021-05-19T17:30:28Z)
- Unsupervised Anomaly Detection with Adversarial Mirrored AutoEncoders [51.691585766702744]
We propose a variant of Adversarial Autoencoder which uses a mirrored Wasserstein loss in the discriminator to enforce better semantic-level reconstruction.
We put forward an alternative measure of anomaly score to replace the reconstruction-based metric.
Our method outperforms the current state-of-the-art methods for anomaly detection on several OOD detection benchmarks.
arXiv Detail & Related papers (2020-03-24T08:26:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.