WOODS: Benchmarks for Out-of-Distribution Generalization in Time Series
- URL: http://arxiv.org/abs/2203.09978v2
- Date: Thu, 6 Apr 2023 14:21:18 GMT
- Title: WOODS: Benchmarks for Out-of-Distribution Generalization in Time Series
- Authors: Jean-Christophe Gagnon-Audet, Kartik Ahuja, Mohammad-Javad
Darvishi-Bayazi, Pooneh Mousavi, Guillaume Dumas, Irina Rish
- Abstract summary: We present WOODS: eight challenging open-source time series benchmarks covering a diverse range of data modalities.
We revise the existing OOD generalization algorithms for time series tasks and evaluate them using our systematic framework.
Our experiments show substantial room for improvement for both empirical risk minimization and OOD generalization algorithms on our datasets.
- Score: 9.181035389003759
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning models often fail to generalize well under distributional
shifts. Understanding and overcoming these failures have led to a research
field of Out-of-Distribution (OOD) generalization. Despite being extensively
studied for static computer vision tasks, OOD generalization has been
underexplored for time series tasks. To shed light on this gap, we present
WOODS: eight challenging open-source time series benchmarks covering a diverse
range of data modalities, such as videos, brain recordings, and sensor signals.
We revise the existing OOD generalization algorithms for time series tasks and
evaluate them using our systematic framework. Our experiments show substantial
room for improvement for both empirical risk minimization and OOD generalization
algorithms on our datasets, thus underscoring the new challenges posed by time
series tasks. Code and documentation are available at
https://woods-benchmarks.github.io .
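The evaluation protocol the abstract describes, training with empirical risk minimization on some environments and measuring the performance drop on a held-out, shifted environment, can be sketched with a toy 1-D classifier. Everything here (the synthetic environments, the threshold classifier) is a hypothetical stand-in, not the WOODS API:

```python
import random

def make_env(shift, n=200, seed=0):
    """Toy binary-classification environment: 1-D feature whose
    class means drift by an environment-specific shift."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        y = rng.randint(0, 1)
        x = rng.gauss(y + shift, 0.5)  # class y centred at y + shift
        data.append((x, y))
    return data

def fit_erm(train_envs):
    """ERM: pool all training environments and place a threshold
    halfway between the two pooled class means."""
    pooled = [xy for env in train_envs for xy in env]
    def mean(c):
        xs = [x for x, y in pooled if y == c]
        return sum(xs) / max(1, len(xs))
    return (mean(0) + mean(1)) / 2.0

def accuracy(threshold, env):
    return sum((x > threshold) == bool(y) for x, y in env) / len(env)

# Leave-one-environment-out: train on mild shifts, test on a larger one.
envs = [make_env(shift=s, seed=i) for i, s in enumerate([0.0, 0.2, 1.5])]
thr = fit_erm(envs[:2])           # pooled training environments
in_dist = accuracy(thr, envs[0])  # in-distribution accuracy
ood = accuracy(thr, envs[2])      # held-out, shifted environment
```

Under this kind of shift the pooled threshold transfers poorly, so `ood` falls well below `in_dist`, which is the gap the benchmark quantifies.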
Related papers
- Out-of-Distribution Generalization in Time Series: A Survey [19.968769520282123]
Time series frequently manifest distribution shifts, diverse latent features, and non-stationary learning dynamics.
These characteristics pose significant challenges for out-of-distribution (OOD) generalization.
We present the first comprehensive review of OOD generalization methodologies for time series.
arXiv Detail & Related papers (2025-03-18T03:35:29Z)
- TS-OOD: Evaluating Time-Series Out-of-Distribution Detection and Prospective Directions for Progress [6.140648893673249]
Out-of-distribution (OOD) data is a fundamental challenge in the deployment of machine learning models.
This paper seeks to address this research gap by conducting a comprehensive analysis of modality-agnostic OOD detection algorithms.
Our results demonstrate that: 1) the majority of state-of-the-art OOD methods exhibit limited performance on time-series data, and 2) OOD methods based on deep feature modeling may offer greater advantages for time-series OOD detection.
arXiv Detail & Related papers (2025-02-21T19:40:22Z)
- Bridging OOD Detection and Generalization: A Graph-Theoretic View [21.84304334604601]
We introduce a graph-theoretic framework to tackle both OOD generalization and detection problems.
By leveraging the graph formulation, data representations are obtained through the factorization of the graph's adjacency matrix.
Empirical results showcase competitive performance in comparison to existing methods.
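The idea of deriving data representations from a factorization of the graph's adjacency matrix can be illustrated with a minimal spectral sketch. This is a generic eigendecomposition over a toy similarity graph, not the paper's actual formulation:

```python
import numpy as np

# Toy similarity graph over 4 samples: two tight pairs (clusters).
A = np.array([
    [0, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

# Factorize the symmetric adjacency matrix: A = U diag(w) U^T.
w, U = np.linalg.eigh(A)

# Keep the top-k eigenvectors as k-dimensional node representations.
k = 2
reps = U[:, np.argsort(w)[-k:]]  # shape (4, k)
```

Samples connected in the graph end up with near-identical rows in `reps`, which is the sense in which the factorization yields representations usable for both detection and generalization.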
arXiv Detail & Related papers (2024-09-26T18:35:51Z)
- TSI-Bench: Benchmarking Time Series Imputation [52.27004336123575]
TSI-Bench is a comprehensive benchmark suite for time series imputation utilizing deep learning techniques.
The TSI-Bench pipeline standardizes experimental settings to enable fair evaluation of imputation algorithms.
TSI-Bench innovatively provides a systematic paradigm to tailor time series forecasting algorithms for imputation purposes.
arXiv Detail & Related papers (2024-06-18T16:07:33Z)
- EAT: Towards Long-Tailed Out-of-Distribution Detection [55.380390767978554]
This paper addresses the challenging task of long-tailed OOD detection.
The main difficulty lies in distinguishing OOD data from samples belonging to the tail classes.
We propose two simple ideas: (1) Expanding the in-distribution class space by introducing multiple abstention classes, and (2) Augmenting the context-limited tail classes by overlaying images onto the context-rich OOD data.
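The first idea, expanding the in-distribution label space with abstention classes, amounts to giving the classifier extra output slots that absorb OOD inputs. The sketch below is a generic illustration of that head design (all names and shapes are hypothetical, not the paper's code):

```python
import numpy as np

n_classes, n_abstain, dim = 3, 2, 8
rng = np.random.default_rng(0)

# Linear head over an expanded label space: real classes + abstention slots.
W = rng.normal(size=(dim, n_classes + n_abstain))

def predict(x):
    """Argmax over the expanded logits; an abstention slot winning
    means the input is flagged as OOD rather than forced into a class."""
    logits = x @ W
    idx = int(np.argmax(logits))
    if idx >= n_classes:
        return "abstain"  # routed to an abstention class -> treat as OOD
    return idx            # ordinary in-distribution class label

pred = predict(rng.normal(size=dim))
```

During training the abstention slots would be supervised with auxiliary outlier data; at test time any prediction landing in them is rejected.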
arXiv Detail & Related papers (2023-12-14T13:47:13Z)
- Wild-Tab: A Benchmark For Out-Of-Distribution Generalization In Tabular Regression [4.532517021515834]
Out-of-Distribution (OOD) generalization is an ongoing challenge in deep learning.
We present Wild-Tab, a benchmark tailored for OOD generalization in tabular regression tasks.
The benchmark incorporates 3 industrial datasets sourced from fields like weather prediction and power consumption estimation.
We observe that many of these methods struggle to maintain high performance on unseen data, with OOD performance dropping markedly relative to in-distribution performance.
arXiv Detail & Related papers (2023-12-04T10:27:38Z)
- DIVERSIFY: A General Framework for Time Series Out-of-distribution Detection and Generalization [58.704753031608625]
Time series are among the most challenging modalities in machine learning research.
OOD detection and generalization on time series tend to suffer because of their non-stationary nature.
We propose DIVERSIFY, a framework for OOD detection and generalization on dynamic distributions of time series.
arXiv Detail & Related papers (2023-08-04T12:27:11Z)
- Graph Structure and Feature Extrapolation for Out-of-Distribution Generalization [54.64375566326931]
Out-of-distribution (OOD) generalization deals with the prevalent learning scenario where test distribution shifts from training distribution.
We propose to achieve graph OOD generalization with the novel design of non-Euclidean-space linear extrapolation.
Our design tailors OOD samples for specific shifts without corrupting underlying causal mechanisms.
arXiv Detail & Related papers (2023-06-13T18:46:28Z)
- Pseudo-OOD training for robust language models [78.15712542481859]
OOD detection is a key component of a reliable machine-learning model for any industry-scale application.
We propose POORE (POsthoc pseudo-Ood REgularization), which generates pseudo-OOD samples using in-distribution (IND) data.
We extensively evaluate our framework on three real-world dialogue systems, achieving new state-of-the-art in OOD detection.
arXiv Detail & Related papers (2022-10-17T14:32:02Z)
- OOD-Probe: A Neural Interpretation of Out-of-Domain Generalization [18.129450295108423]
We propose a flexible framework that evaluates OOD systems with finer granularity using a probing module.
We find that representations always encode some information about the domain.
High probing results correlate with domain generalization performance, suggesting further directions for developing OOD generalization systems.
arXiv Detail & Related papers (2022-08-25T21:58:01Z)
- Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation [76.2621758731288]
We tackle the detection of out-of-distribution (OOD) objects in semantic segmentation.
Our main contribution is a new OOD detection architecture called ObsNet, associated with a dedicated training scheme based on Local Adversarial Attacks (LAA).
We show it obtains top performances both in speed and accuracy when compared to ten recent methods of the literature on three different datasets.
arXiv Detail & Related papers (2021-08-03T17:09:56Z)
- Conditional GAN for timeseries generation [0.0]
Time Series GAN (TSGAN) is proposed to model realistic time series data.
We evaluate TSGAN on 70 data sets from a benchmark time series database.
arXiv Detail & Related papers (2020-06-30T02:19:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.