CODiT: Conformal Out-of-Distribution Detection in Time-Series Data
- URL: http://arxiv.org/abs/2207.11769v1
- Date: Sun, 24 Jul 2022 16:41:14 GMT
- Title: CODiT: Conformal Out-of-Distribution Detection in Time-Series Data
- Authors: Ramneet Kaur, Kaustubh Sridhar, Sangdon Park, Susmit Jha, Anirban Roy,
Oleg Sokolsky, Insup Lee
- Abstract summary: In many applications, the inputs to a machine learning model form a temporal sequence.
We propose using deviation from the in-distribution temporal equivariance as the non-conformity measure in conformal anomaly detection framework.
We illustrate the efficacy of CODiT by achieving state-of-the-art results on computer vision datasets in autonomous driving.
- Score: 11.565104282674973
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning models are prone to making incorrect predictions on inputs
that are far from the training distribution. This hinders their deployment in
safety-critical applications such as autonomous vehicles and healthcare. The
detection of a shift from the training distribution of individual datapoints
has gained attention. A number of techniques have been proposed for such
out-of-distribution (OOD) detection. But in many applications, the inputs to a
machine learning model form a temporal sequence. Existing techniques for OOD
detection in time-series data either do not exploit temporal relationships in
the sequence or do not provide any guarantees on detection. We propose using
deviation from the in-distribution temporal equivariance as the non-conformity
measure in the conformal anomaly detection framework for OOD detection in
time-series data. Computing independent predictions from multiple conformal
detectors based on the proposed measure and combining these predictions by
Fisher's method leads to the proposed detector, CODiT, with guarantees on false
detection in time-series data. We illustrate the efficacy of CODiT by achieving
state-of-the-art results on computer vision datasets in autonomous driving. We
also show that CODiT can be used for OOD detection in non-vision datasets by
performing experiments on the physiological GAIT sensory dataset. Code, data,
and trained models are available at
https://github.com/kaustubhsridhar/time-series-OOD.
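The two ingredients named in the abstract — conformal p-values from a calibration set of non-conformity scores, and Fisher's method for combining independent p-values — can be sketched in a few lines. This is a minimal illustration of those standard statistical tools, not code from the CODiT repository; the calibration scores and threshold below are made up for the example.

```python
import math

def conformal_p_value(score, calibration_scores):
    """Conformal p-value for a test non-conformity score: the (smoothed)
    fraction of calibration scores at least as large. A large non-conformity
    score relative to calibration yields a small p-value."""
    n = len(calibration_scores)
    return (1 + sum(s >= score for s in calibration_scores)) / (n + 1)

def fisher_combine(p_values):
    """Fisher's method: T = -2 * sum(log p_i) follows a chi-squared
    distribution with 2k degrees of freedom under the null hypothesis.
    For even degrees of freedom 2k, the chi-squared survival function has
    the closed form exp(-T/2) * sum_{i<k} (T/2)^i / i!."""
    k = len(p_values)
    half = -sum(math.log(p) for p in p_values)  # T/2
    return math.exp(-half) * sum(half ** i / math.factorial(i) for i in range(k))

# Illustrative use: each detector scores a test window against its own
# in-distribution calibration scores; the combined p-value is thresholded.
calibration = [0.12, 0.30, 0.25, 0.08, 0.19]   # hypothetical calibration scores
detector_scores = [0.45, 0.52, 0.38]           # hypothetical test-window scores
p_values = [conformal_p_value(s, calibration) for s in detector_scores]
combined = fisher_combine(p_values)
is_ood = combined < 0.05  # flag the window as OOD at level alpha = 0.05
```

Fisher's method rewards agreement: several moderately small p-values from independent detectors yield a much smaller combined p-value than any single detector alone, which is what lets the paper bound the false detection rate while pooling evidence across detectors.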
Related papers
- Conservative Prediction via Data-Driven Confidence Minimization [70.93946578046003]
In safety-critical applications of machine learning, it is often desirable for a model to be conservative.
We propose the Data-Driven Confidence Minimization framework, which minimizes confidence on an uncertainty dataset.
arXiv Detail & Related papers (2023-06-08T07:05:36Z) - Using Semantic Information for Defining and Detecting OOD Inputs [3.9577682622066264]
Out-of-distribution (OOD) detection has received some attention recently.
We demonstrate that the current detectors inherit the biases in the training dataset.
This can render current OOD detectors blind to inputs that lie outside the training distribution but carry the same semantic information.
We perform OOD detection on semantic information extracted from the training data of MNIST and COCO datasets.
arXiv Detail & Related papers (2023-02-21T21:31:20Z) - PULL: Reactive Log Anomaly Detection Based On Iterative PU Learning [58.85063149619348]
We propose PULL, an iterative log analysis method for reactive anomaly detection based on estimated failure time windows.
Our evaluation shows that PULL consistently outperforms ten benchmark baselines across three different datasets.
arXiv Detail & Related papers (2023-01-25T16:34:43Z) - DEGAN: Time Series Anomaly Detection using Generative Adversarial
Network Discriminators and Density Estimation [0.0]
We have proposed an unsupervised Generative Adversarial Network (GAN)-based anomaly detection framework, DEGAN.
It relies solely on normal time series data as input to train a well-configured discriminator (D) into a standalone anomaly predictor.
arXiv Detail & Related papers (2022-10-05T04:32:12Z) - Out-of-Distribution Detection with Hilbert-Schmidt Independence
Optimization [114.43504951058796]
Outlier detection tasks have been playing a critical role in AI safety.
Deep neural network classifiers usually tend to incorrectly classify out-of-distribution (OOD) inputs into in-distribution classes with high confidence.
We propose an alternative probabilistic paradigm that is both practically useful and theoretically viable for the OOD detection tasks.
arXiv Detail & Related papers (2022-09-26T15:59:55Z) - Augmenting Softmax Information for Selective Classification with
Out-of-Distribution Data [7.221206118679026]
We show that existing post-hoc methods perform quite differently compared to when evaluated only on OOD detection.
We propose a novel method for SCOD, Softmax Information Retaining Combination (SIRC), that augments softmax-based confidence scores with feature-agnostic information.
Experiments on a wide variety of ImageNet-scale datasets and convolutional neural network architectures show that SIRC is able to consistently match or outperform the baseline for SCOD.
arXiv Detail & Related papers (2022-07-15T14:39:57Z) - TFDPM: Attack detection for cyber-physical systems with diffusion
probabilistic models [10.389972581904999]
We propose TFDPM, a general framework for attack detection tasks in CPSs.
It simultaneously extracts temporal pattern and feature pattern given the historical data.
The noise scheduling network triples the detection speed.
arXiv Detail & Related papers (2021-12-20T13:13:29Z) - Tracking the risk of a deployed model and detecting harmful distribution
shifts [105.27463615756733]
In practice, it may make sense to ignore benign shifts, under which the performance of a deployed model does not degrade substantially.
We argue that a sensible method for firing off a warning has to both (a) detect harmful shifts while ignoring benign ones, and (b) allow continuous monitoring of model performance without increasing the false alarm rate.
arXiv Detail & Related papers (2021-10-12T17:21:41Z) - DAE : Discriminatory Auto-Encoder for multivariate time-series anomaly
detection in air transportation [68.8204255655161]
We propose a novel anomaly detection model called Discriminatory Auto-Encoder (DAE)
It uses the baseline of a regular LSTM-based auto-encoder but with several decoders, each getting data of a specific flight phase.
Results show that the DAE achieves better results in both accuracy and speed of detection.
arXiv Detail & Related papers (2021-09-08T14:07:55Z) - Learn what you can't learn: Regularized Ensembles for Transductive
Out-of-distribution Detection [76.39067237772286]
We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios.
This paper studies how such "hard" OOD scenarios can benefit from adjusting the detection method after observing a batch of the test data.
We propose a novel method that uses an artificial labeling scheme for the test data and regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch.
arXiv Detail & Related papers (2020-12-10T16:55:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.