DIVERSIFY: A General Framework for Time Series Out-of-distribution
Detection and Generalization
- URL: http://arxiv.org/abs/2308.02282v1
- Date: Fri, 4 Aug 2023 12:27:11 GMT
- Title: DIVERSIFY: A General Framework for Time Series Out-of-distribution
Detection and Generalization
- Authors: Wang Lu, Jindong Wang, Xinwei Sun, Yiqiang Chen, Xiangyang Ji, Qiang
Yang, Xing Xie
- Abstract summary: Time series is one of the most challenging modalities in machine learning research.
OOD detection and generalization on time series tend to suffer from non-stationarity, i.e., the distribution changes over time.
We propose DIVERSIFY, a framework for OOD detection and generalization on dynamic distributions of time series.
- Score: 58.704753031608625
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Time series remains one of the most challenging modalities in machine
learning research. Out-of-distribution (OOD) detection and generalization on
time series tend to suffer due to the non-stationary nature of time series,
i.e., the distribution changes over time. These dynamic distributions make it
hard for existing algorithms to identify invariant distributions, since such
algorithms mainly target scenarios where domain information is given as prior
knowledge. In this paper, we attempt to exploit subdomains within a whole
dataset to counteract the issues induced by non-stationarity and learn
generalized representations. We propose DIVERSIFY, a general framework for OOD
detection and generalization on the dynamic distributions of time series.
DIVERSIFY follows an iterative process: it first obtains the "worst-case"
latent distribution scenario via adversarial training, then reduces the gap
between these latent distributions. For detection, we implement DIVERSIFY by
combining existing OOD detection methods with either the extracted features or
the model outputs; for classification, we use the model outputs directly. In
addition, we provide theoretical insights that support the framework. Extensive
experiments are conducted on seven datasets with different OOD settings across
gesture recognition, speech commands recognition, wearable stress and affect
detection, and sensor-based human activity recognition. Qualitative and
quantitative results demonstrate that DIVERSIFY learns more generalized
features and significantly outperforms competitive baselines.
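To make the iterative process above concrete, here is a minimal sketch under stated assumptions: the k-means stand-in for the "worst-case" characterization, the gradient-reversal alignment, the layer sizes, and the 0.1 trade-off weight are illustrative guesses, not the authors' released implementation.

```python
# A minimal sketch of the two-step loop described in the abstract:
# (1) characterize latent sub-domains, (2) adversarially reduce the gap
# between them. Everything below is an illustrative assumption.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips gradients in the backward pass,
    so the encoder learns to fool the domain discriminator."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -grad

encoder = nn.Sequential(nn.Flatten(), nn.Linear(9 * 128, 64), nn.ReLU())
classifier = nn.Linear(64, 6)        # e.g., 6 gesture/activity classes
discriminator = nn.Linear(64, 3)     # 3 latent pseudo-domains (a guess)
opt = torch.optim.Adam(
    [*encoder.parameters(), *classifier.parameters(),
     *discriminator.parameters()], lr=1e-3)

def diversify_step(x, y, n_latent=3):
    # Step 1: characterize the latent distributions. The paper obtains the
    # "worst-case" split via adversarial training; k-means stands in here.
    with torch.no_grad():
        z = encoder(x)
    pseudo = torch.as_tensor(
        KMeans(n_clusters=n_latent, n_init=10).fit_predict(z.numpy())).long()

    # Step 2: reduce the gap between the latent distributions. The
    # discriminator tries to identify each sample's pseudo-domain; the
    # reversed gradient pushes the encoder toward invariant features.
    z = encoder(x)
    cls_loss = nn.functional.cross_entropy(classifier(z), y)
    dom_loss = nn.functional.cross_entropy(
        discriminator(GradReverse.apply(z)), pseudo)
    loss = cls_loss + 0.1 * dom_loss
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def msp_score(logits):
    """Example output-based OOD score: maximum softmax probability
    (higher = more likely in-distribution)."""
    return logits.softmax(-1).max(-1).values

# Dummy batch: 32 windows of a 9-channel time series, 128 timesteps each.
x, y = torch.randn(32, 9, 128), torch.randint(0, 6, (32,))
for _ in range(3):
    diversify_step(x, y)
```

Any existing detector can then be plugged in at test time: feature-based scores (e.g., Mahalanobis distance) on the encoder output, or output-based scores such as `msp_score` on the classifier logits, matching the feature-or-output choice described in the abstract.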
Related papers
- DPU: Dynamic Prototype Updating for Multimodal Out-of-Distribution Detection [10.834698906236405]
Out-of-distribution (OOD) detection is essential for ensuring the robustness of machine learning models.
Recent advances in multimodal models have demonstrated the potential of leveraging multiple modalities to enhance detection performance.
We propose Dynamic Prototype Updating (DPU), a novel plug-and-play framework for multimodal OOD detection.
arXiv Detail & Related papers (2024-11-12T22:43:16Z)
- EAT: Towards Long-Tailed Out-of-Distribution Detection [55.380390767978554]
This paper addresses the challenging task of long-tailed OOD detection.
The main difficulty lies in distinguishing OOD data from samples belonging to the tail classes.
We propose two simple ideas: (1) Expanding the in-distribution class space by introducing multiple abstention classes, and (2) Augmenting the context-limited tail classes by overlaying images onto the context-rich OOD data.
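A rough sketch of how these two ideas could look in code; the paste-style overlay, its offsets, and the abstention-head layout are assumptions for illustration rather than the paper's exact recipe.

```python
# A hedged sketch of the two EAT ideas summarized above; all names and
# parameters are illustrative assumptions.
import torch

def overlay_on_ood(tail_img, ood_img, top=8, left=8):
    """Paste a context-limited tail-class crop onto context-rich OOD data
    (assumes the crop plus offsets fits inside the OOD image)."""
    out = ood_img.clone()
    _, h, w = tail_img.shape
    out[:, top:top + h, left:left + w] = tail_img
    return out

def expand_logits(class_logits, abstention_logits):
    """Expand the in-distribution label space with extra abstention classes;
    OOD training samples are assigned to these abstention classes."""
    return torch.cat([class_logits, abstention_logits], dim=-1)
```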
arXiv Detail & Related papers (2023-12-14T13:47:13Z)
- Expecting The Unexpected: Towards Broad Out-Of-Distribution Detection [9.656342063882555]
We study five types of distribution shifts and evaluate the performance of recent OOD detection methods on each of them.
Our findings reveal that while these methods excel in detecting unknown classes, their performance is inconsistent when encountering other types of distribution shifts.
We present an ensemble approach that offers a more consistent and comprehensive solution for broad OOD detection.
arXiv Detail & Related papers (2023-08-22T14:52:44Z)
- Anomaly Detection under Distribution Shift [24.094884041252044]
Anomaly detection (AD) is a crucial machine learning task that aims to learn patterns from a set of normal training samples to identify abnormal samples in test data.
Most existing AD studies assume that the training and test data are drawn from the same distribution, yet in practice the test data can exhibit large distribution shifts.
We introduce a novel robust AD approach to diverse distribution shifts by minimizing the distribution gap between in-distribution and OOD normal samples in both the training and inference stages.
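The gap-minimization idea can be made concrete with any distribution distance; the linear-kernel MMD below is a stand-in assumption, not necessarily the distance used in the paper.

```python
# A hedged sketch of gap minimization between in-distribution and shifted
# normal-sample features; the actual distance in the paper may differ.
import torch

def distribution_gap(feats_id, feats_shifted):
    """Linear-kernel MMD between the two feature batches."""
    return (feats_id.mean(0) - feats_shifted.mean(0)).pow(2).sum()

# Used as a regularizer on top of the usual anomaly-detection objective:
#   loss = ad_loss + lam * distribution_gap(f_id, f_shifted)
```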
arXiv Detail & Related papers (2023-03-24T07:39:08Z)
- Generalized Representations Learning for Time Series Classification [28.230863650758447]
We argue that the temporal complexity is attributable to unknown latent distributions within time series classification.
We present experiments on gesture recognition, speech commands recognition, wearable stress and affect detection, and sensor-based human activity recognition.
arXiv Detail & Related papers (2022-09-15T03:36:31Z)
- Causality-Based Multivariate Time Series Anomaly Detection [63.799474860969156]
We formulate the anomaly detection problem from a causal perspective and view anomalies as instances that do not follow the regular causal mechanism to generate the multivariate data.
We then propose a causality-based anomaly detection approach, which first learns the causal structure from data and then infers whether an instance is an anomaly relative to the local causal mechanism.
We evaluate our approach with both simulated and public datasets as well as a case study on real-world AIOps applications.
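One hedged way to make "deviation from the local causal mechanism" concrete: fit a regression per variable from its causal parents and score a point by its largest normalized residual. The linear mechanisms and the known parent sets below are illustrative assumptions; the paper learns the causal structure from data.

```python
# An illustrative sketch, not the paper's method: linear mechanisms with
# given parent sets, scored by the largest normalized residual.
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_mechanisms(X, parents):
    """Fit one regression per variable from its causal parents; `parents`
    maps variable index -> parent indices (assumed provided by a
    structure-learning step not shown here)."""
    models, sigma = {}, {}
    for j, pa in parents.items():
        m = LinearRegression().fit(X[:, pa], X[:, j])
        resid = X[:, j] - m.predict(X[:, pa])
        models[j], sigma[j] = m, resid.std() + 1e-8
    return models, sigma

def anomaly_score(x, models, sigma, parents):
    """Largest normalized residual across the local causal mechanisms."""
    return max(abs(x[j] - m.predict(x[parents[j]].reshape(1, -1))[0]) / sigma[j]
               for j, m in models.items())

# Example with an assumed chain 0 -> 1 -> 2:
X = np.random.randn(500, 3)
X[:, 1] += 2 * X[:, 0]
X[:, 2] += -X[:, 1]
pa = {1: [0], 2: [1]}
models, sigma = fit_mechanisms(X, pa)
print(anomaly_score(np.array([0.0, 5.0, 0.0]), models, sigma, pa))
```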
arXiv Detail & Related papers (2022-06-30T06:00:13Z)
- Gated Domain Units for Multi-source Domain Generalization [14.643490853965385]
Distribution shift (DS) occurs when a dataset at test time differs from the dataset at training time.
We introduce a modular neural network layer consisting of Gated Domain Units (GDUs) that learn a representation for each latent elementary distribution.
During inference, a weighted ensemble of learning machines can be created by comparing new observations with the representations of each elementary distribution.
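A sketch of the gated-ensemble idea under stated assumptions: the dot-product similarity, the softmax gating, and the linear per-domain learners are illustrative choices, not the paper's exact layer.

```python
# A hedged sketch of Gated Domain Units: one learner per latent elementary
# distribution, gated by similarity to a learned domain representation.
import torch
import torch.nn as nn

class GatedDomainEnsemble(nn.Module):
    def __init__(self, feat_dim=32, n_classes=6, n_domains=3):
        super().__init__()
        # One learned representation per latent elementary distribution.
        self.domain_reps = nn.Parameter(torch.randn(n_domains, feat_dim))
        self.learners = nn.ModuleList(
            nn.Linear(feat_dim, n_classes) for _ in range(n_domains))

    def forward(self, z):  # z: (batch, feat_dim)
        # Compare the observation with each domain representation,
        # then form a weighted ensemble of the per-domain learners.
        gates = (z @ self.domain_reps.t()).softmax(dim=-1)  # (batch, n_domains)
        outs = torch.stack([m(z) for m in self.learners], dim=1)
        return (gates.unsqueeze(-1) * outs).sum(dim=1)
```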
arXiv Detail & Related papers (2022-06-24T18:12:38Z)
- Breaking Down Out-of-Distribution Detection: Many Methods Based on OOD Training Data Estimate a Combination of the Same Core Quantities [104.02531442035483]
The goal of this paper is to recognize common objectives as well as to identify the implicit scoring functions of different OOD detection methods.
We show that binary discrimination between in- and (different) out-distributions is equivalent to several distinct formulations of the OOD detection problem.
We also show that the confidence loss which is used by Outlier Exposure has an implicit scoring function which differs in a non-trivial fashion from the theoretically optimal scoring function.
arXiv Detail & Related papers (2022-06-20T16:32:49Z)
- General Greedy De-bias Learning [163.65789778416172]
We propose a General Greedy De-bias learning framework (GGD), which greedily trains the biased models and the base model, analogous to gradient descent in functional space.
GGD can learn a more robust base model in both settings: task-specific biased models with prior knowledge and a self-ensemble biased model without prior knowledge.
arXiv Detail & Related papers (2021-12-20T14:47:32Z)
- Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation [76.2621758731288]
We tackle the detection of out-of-distribution (OOD) objects in semantic segmentation.
Our main contribution is a new OOD detection architecture called ObsNet, paired with a dedicated training scheme based on Local Adversarial Attacks (LAA).
We show that it achieves top performance in both speed and accuracy compared with ten recent methods from the literature on three different datasets.
arXiv Detail & Related papers (2021-08-03T17:09:56Z)