ADATIME: A Benchmarking Suite for Domain Adaptation on Time Series Data
- URL: http://arxiv.org/abs/2203.08321v2
- Date: Fri, 5 May 2023 14:06:57 GMT
- Title: ADATIME: A Benchmarking Suite for Domain Adaptation on Time Series Data
- Authors: Mohamed Ragab, Emadeldeen Eldele, Wee Ling Tan, Chuan-Sheng Foo,
Zhenghua Chen, Min Wu, Chee-Keong Kwoh, Xiaoli Li
- Abstract summary: Unsupervised domain adaptation methods aim to generalize well on unlabeled test data that may have a different distribution from the training data.
Existing works on time series domain adaptation suffer from inconsistencies in evaluation schemes, datasets, and backbone neural network architectures.
We develop a benchmarking evaluation suite (AdaTime) to systematically and fairly evaluate different domain adaptation methods on time series data.
- Score: 20.34427953468868
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unsupervised domain adaptation methods aim to generalize well on unlabeled
test data that may have a different (shifted) distribution from the training
data. Such methods are typically developed on image data, and their application
to time series data is less explored. Existing works on time series domain
adaptation suffer from inconsistencies in evaluation schemes, datasets, and
backbone neural network architectures. Moreover, labeled target data are often
used for model selection, which violates the fundamental assumption of
unsupervised domain adaptation. To address these issues, we develop a
benchmarking evaluation suite (AdaTime) to systematically and fairly evaluate
different domain adaptation methods on time series data. Specifically, we
standardize the backbone neural network architectures and benchmarking
datasets, while also exploring more realistic model selection approaches that
can work with no labeled data or just a few labeled samples. Our evaluation
includes adapting state-of-the-art visual domain adaptation methods to time
series data as well as the recent methods specifically developed for time
series data. We conduct extensive experiments to evaluate 11 state-of-the-art
methods on five representative datasets spanning 50 cross-domain scenarios. Our
results suggest that with careful selection of hyper-parameters, visual domain
adaptation methods are competitive with methods proposed for time series domain
adaptation. In addition, we find that hyper-parameters could be selected based
on realistic model selection approaches. Our work unveils practical insights
for applying domain adaptation methods on time series data and builds a solid
foundation for future works in the field. The code is available at
https://github.com/emadeldeen24/AdaTime.
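To make the realistic model-selection setting concrete, below is a minimal, self-contained sketch of picking domain-adaptation hyper-parameters using only a few labeled target samples. It is not the AdaTime implementation: train_da_model and accuracy are hypothetical placeholders and the data are synthetic, so the snippet only illustrates the selection protocol, not any specific adaptation method.

```python
# Minimal sketch (not the official AdaTime API) of "few labeled target samples"
# model selection: hyper-parameters are chosen on a small labeled target subset,
# and the full labeled target set is used only for final reporting.
import numpy as np

rng = np.random.default_rng(0)

def train_da_model(src_x, src_y, tgt_x, lr, trade_off):
    """Hypothetical stand-in for training one domain-adaptation method on labeled
    source data and unlabeled target data. A real trainer would use lr and
    trade_off; this placeholder ignores them and fits a nearest-centroid
    classifier on the source features so the sketch runs without a DL stack."""
    centroids = {c: src_x[src_y == c].mean(axis=0) for c in np.unique(src_y)}
    classes = np.array(sorted(centroids))

    def predict(x):
        dists = np.stack([np.linalg.norm(x - centroids[c], axis=1) for c in classes])
        return classes[dists.argmin(axis=0)]

    return predict

def accuracy(model, x, y):
    return float((model(x) == y).mean())

# Synthetic source/target features (stand-ins for backbone-encoded time series).
src_x, src_y = rng.normal(size=(200, 16)), rng.integers(0, 3, 200)
tgt_x, tgt_y = rng.normal(loc=0.5, size=(200, 16)), rng.integers(0, 3, 200)

# Only 5 labeled target samples per class are visible to model selection.
few_idx = np.concatenate([np.where(tgt_y == c)[0][:5] for c in np.unique(tgt_y)])

candidates = [(lr, t) for lr in (1e-3, 1e-2) for t in (0.1, 1.0)]
scores = [
    accuracy(train_da_model(src_x, src_y, tgt_x, lr, t), tgt_x[few_idx], tgt_y[few_idx])
    for lr, t in candidates
]
best_lr, best_t = candidates[int(np.argmax(scores))]
final_model = train_da_model(src_x, src_y, tgt_x, best_lr, best_t)
print("selected hyper-parameters:", best_lr, best_t)
print("target accuracy:", accuracy(final_model, tgt_x, tgt_y))
```

The same loop works for the fully unsupervised setting by replacing the few-shot target accuracy with a label-free criterion (for example, source-domain validation risk).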
Related papers
- SKADA-Bench: Benchmarking Unsupervised Domain Adaptation Methods with Realistic Validation [55.87169702896249]
Unsupervised Domain Adaptation (DA) consists of adapting a model trained on a labeled source domain to perform well on an unlabeled target domain with some data distribution shift.
We propose a framework to evaluate DA methods and present a fair evaluation of existing shallow algorithms, including reweighting, mapping, and subspace alignment.
Our benchmark highlights the importance of realistic validation and provides practical guidance for real-life applications.
arXiv Detail & Related papers (2024-07-16T12:52:29Z) - PeFAD: A Parameter-Efficient Federated Framework for Time Series Anomaly Detection [51.20479454379662]
Motivated by increasing privacy concerns, we propose a Parameter-efficient Federated Anomaly Detection framework named PeFAD.
We conduct extensive evaluations on four real datasets, where PeFAD outperforms existing state-of-the-art baselines by up to 28.74%.
arXiv Detail & Related papers (2024-06-04T13:51:08Z) - NuwaTS: a Foundation Model Mending Every Incomplete Time Series [24.768755438620666]
We present NuwaTS, a novel framework that repurposes Pre-trained Language Models for general time series imputation.
NuwaTS can be applied to impute missing data across any domain.
We show that NuwaTS generalizes to other time series tasks, such as forecasting.
arXiv Detail & Related papers (2024-05-24T07:59:02Z) - Deep Unsupervised Domain Adaptation for Time Series Classification: a
Benchmark [3.618615996077951]
Unsupervised Domain Adaptation (UDA) aims to harness labeled source data to train models for unlabeled target data.
This paper introduces a benchmark for evaluating UDA techniques for time series classification.
We provide seven new benchmark datasets covering various domain shifts and temporal dynamics.
arXiv Detail & Related papers (2023-12-15T15:03:55Z) - Temporal Treasure Hunt: Content-based Time Series Retrieval System for
Discovering Insights [34.1973242428317]
Time series data is ubiquitous across various domains such as finance, healthcare, and manufacturing.
The ability to perform Content-based Time Series Retrieval (CTSR) is crucial for identifying unknown time series examples.
We introduce a CTSR benchmark dataset that comprises time series data from a variety of domains.
arXiv Detail & Related papers (2023-11-05T04:12:13Z) - Pushing the Limits of Pre-training for Time Series Forecasting in the
CloudOps Domain [54.67888148566323]
We introduce three large-scale time series forecasting datasets from the cloud operations domain.
We show that our pre-trained method is a strong zero-shot baseline and benefits from further scaling in both model and dataset size.
Accompanying these datasets and results is a suite of comprehensive benchmark results comparing classical and deep learning baselines to our pre-trained method.
arXiv Detail & Related papers (2023-10-08T08:09:51Z) - Toward a Foundation Model for Time Series Data [34.1973242428317]
A foundation model is a machine learning model trained on a large and diverse set of data.
We develop an effective time series foundation model by leveraging unlabeled samples from multiple domains.
arXiv Detail & Related papers (2023-10-05T21:44:50Z) - Informative Data Mining for One-Shot Cross-Domain Semantic Segmentation [84.82153655786183]
We propose a novel framework called Informative Data Mining (IDM) to enable efficient one-shot domain adaptation for semantic segmentation.
IDM provides an uncertainty-based selection criterion to identify the most informative samples, which facilitates quick adaptation and reduces redundant training.
Our approach outperforms existing methods and achieves a new state-of-the-art one-shot performance of 56.7%/55.4% on the GTA5/SYNTHIA to Cityscapes adaptation tasks.
arXiv Detail & Related papers (2023-09-25T15:56:01Z) - Few-Shot Forecasting of Time-Series with Heterogeneous Channels [4.635820333232681]
We develop a model composed of permutation-invariant deep set-blocks which incorporate a temporal embedding.
We show through experiments that our model generalizes well, outperforming baselines carried over from simpler scenarios.
arXiv Detail & Related papers (2022-04-07T14:02:15Z) - VisDA-2021 Competition Universal Domain Adaptation to Improve
Performance on Out-of-Distribution Data [64.91713686654805]
The Visual Domain Adaptation (VisDA) 2021 competition tests models' ability to adapt to novel test distributions.
We will evaluate adaptation to novel viewpoints, backgrounds, modalities and degradation in quality.
Performance will be measured using a rigorous protocol, comparing to state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-07-23T03:21:51Z) - A Review of Single-Source Deep Unsupervised Visual Domain Adaptation [81.07994783143533]
Large-scale labeled training datasets have enabled deep neural networks to excel across a wide range of benchmark vision tasks.
In many applications, it is prohibitively expensive and time-consuming to obtain large quantities of labeled data.
To cope with limited labeled training data, many have attempted to directly apply models trained on a large-scale labeled source domain to another sparsely labeled or unlabeled target domain.
arXiv Detail & Related papers (2020-09-01T00:06:50Z)