Self-supervised Autoregressive Domain Adaptation for Time Series Data
- URL: http://arxiv.org/abs/2111.14834v1
- Date: Mon, 29 Nov 2021 08:17:23 GMT
- Title: Self-supervised Autoregressive Domain Adaptation for Time Series Data
- Authors: Mohamed Ragab, Emadeldeen Eldele, Zhenghua Chen, Min Wu, Chee-Keong
Kwoh, and Xiaoli Li
- Abstract summary: Unsupervised domain adaptation (UDA) has successfully addressed the domain shift problem for visual applications.
However, these approaches may have limited performance for time series data.
We propose a Self-supervised Autoregressive Domain Adaptation (SLARDA) framework to address these limitations.
- Score: 9.75443057146649
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unsupervised domain adaptation (UDA) has successfully addressed the domain
shift problem for visual applications. Yet, these approaches may have limited
performance for time series data due to the following reasons. First, they
mainly rely on large-scale datasets (e.g., ImageNet) for source pretraining,
which is not applicable to time series data. Second, they ignore the temporal
dimension of the feature space of the source and target domains during the
domain alignment step. Last, most prior UDA methods can only align the
global features without considering the fine-grained class distribution of the
target domain. To address these limitations, we propose a Self-supervised
Autoregressive Domain Adaptation (SLARDA) framework. In particular, we first
design a self-supervised learning module that utilizes forecasting as an
auxiliary task to improve the transferability of the source features. Second,
we propose a novel autoregressive domain adaptation technique that incorporates
temporal dependency of both source and target features during domain alignment.
Finally, we develop an ensemble teacher model to align the class-wise
distribution in the target domain via a confident pseudo labeling approach.
Extensive experiments have been conducted on three real-world time series
applications with 30 cross-domain scenarios. Results demonstrate that our
proposed SLARDA method significantly outperforms the state-of-the-art
approaches for time series domain adaptation.
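The ensemble-teacher component described above can be illustrated with a minimal NumPy sketch of confidence-thresholded pseudo labeling driven by an exponential-moving-average teacher. This is not the authors' implementation; the function names, the momentum value, and the confidence threshold are assumptions for illustration.

```python
import numpy as np

def ema_update(teacher_w, student_w, momentum=0.99):
    """Exponential-moving-average teacher update, one common way to form an
    'ensemble' teacher (momentum value is an assumption)."""
    return momentum * teacher_w + (1.0 - momentum) * student_w

def confident_pseudo_labels(teacher_probs, threshold=0.9):
    """Keep only target samples whose teacher prediction is confident.
    Returns (kept indices, pseudo labels) for those samples."""
    conf = teacher_probs.max(axis=1)
    keep = np.where(conf >= threshold)[0]
    return keep, teacher_probs[keep].argmax(axis=1)

# Toy target-domain class probabilities from the teacher.
probs = np.array([[0.95, 0.05],   # confident -> kept, label 0
                  [0.60, 0.40],   # uncertain -> dropped
                  [0.08, 0.92]])  # confident -> kept, label 1
idx, labels = confident_pseudo_labels(probs, threshold=0.9)
```

Only the retained samples would contribute to the class-wise alignment loss, which is what makes the pseudo labeling "confident" rather than applying noisy labels to every target sample.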
Related papers
- Progressive Conservative Adaptation for Evolving Target Domains [76.9274842289221]
Conventional domain adaptation typically transfers knowledge from a source domain to a stationary target domain.
Restoring and adapting to such target data results in escalating computational and resource consumption over time.
We propose a simple yet effective approach, termed progressive conservative adaptation (PCAda).
arXiv Detail & Related papers (2024-02-07T04:11:25Z)
- Deep Unsupervised Domain Adaptation: A Review of Recent Advances and Perspectives [16.68091981866261]
Unsupervised domain adaptation (UDA) is proposed to counter the performance drop on data in a target domain.
UDA has yielded promising results on natural image processing, video analysis, natural language processing, time-series data analysis, medical image analysis, etc.
arXiv Detail & Related papers (2022-08-15T20:05:07Z)
- Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with wide ranges of application potentials.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z)
- Adversarial Domain Adaptation with Self-Training for EEG-based Sleep Stage Classification [13.986662296156013]
We propose a novel adversarial learning framework to tackle the domain shift problem in the unlabeled target domain.
First, we develop unshared attention mechanisms to preserve the domain-specific features in the source and target domains.
Second, we design a self-training strategy to align the fine-grained class distributions of the source and target domains via target-domain pseudo labels.
arXiv Detail & Related papers (2021-07-09T14:56:12Z)
- Gradual Domain Adaptation via Self-Training of Auxiliary Models [50.63206102072175]
Domain adaptation becomes more challenging with increasing gaps between source and target domains.
We propose self-training of auxiliary models (AuxSelfTrain) that learns models for intermediate domains.
Experiments on benchmark datasets of unsupervised and semi-supervised domain adaptation verify its efficacy.
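The idea of intermediate domains can be sketched with a toy NumPy helper that blends source and target batches in gradually shifting proportions. This is a simplified stand-in, not AuxSelfTrain itself; the function name and the mixing scheme are assumptions.

```python
import numpy as np

def intermediate_mix(source_x, target_x, alpha):
    """Build a synthetic 'intermediate domain' batch: a (1 - alpha) fraction
    of source samples and an alpha fraction of target samples. Sweeping
    alpha from 0 to 1 walks the training distribution from source to target."""
    n = len(source_x)
    k = int(round(alpha * n))
    return np.concatenate([source_x[: n - k], target_x[:k]])

# Toy 1-D "samples": zeros for source, ones for target.
src, tgt = np.zeros(8), np.ones(8)
batch = intermediate_mix(src, tgt, 0.25)  # 6 source + 2 target samples
```

A model self-trained on each successive mix sees only a small distribution shift at a time, which is the intuition behind gradual adaptation.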
arXiv Detail & Related papers (2021-06-18T03:15:25Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance-affinity-based criterion for source-to-target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
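A multi-sample contrastive loss of the kind mentioned above can be sketched as an InfoNCE-style objective over one anchor with several similar and dissimilar samples. This is a generic illustration under assumed names and a assumed temperature, not the exact ILA-DA loss.

```python
import numpy as np

def multi_sample_contrastive(anchor, positives, negatives, tau=0.1):
    """InfoNCE-style loss: pull the anchor toward all 'positive' (similar)
    samples and push it from all 'negative' (dissimilar) samples.
    tau is a temperature hyperparameter (value assumed)."""
    def sim(a, b):  # cosine similarity
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    pos = sum(np.exp(sim(anchor, p) / tau) for p in positives)
    neg = sum(np.exp(sim(anchor, n) / tau) for n in negatives)
    return -np.log(pos / (pos + neg))

anchor = np.array([1.0, 0.0])
# Aligned positive, orthogonal negative -> small loss.
loss_good = multi_sample_contrastive(anchor, [np.array([1.0, 0.0])],
                                     [np.array([0.0, 1.0])])
# Roles swapped -> large loss.
loss_bad = multi_sample_contrastive(anchor, [np.array([0.0, 1.0])],
                                    [np.array([1.0, 0.0])])
```

Minimizing such a loss over cross-domain sample pairs is one way to drive domain alignment at the instance level rather than only globally.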
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Cross-domain Time Series Forecasting with Attention Sharing [10.180248006928107]
We propose a novel domain adaptation framework,Domain Adaptation Forecaster (DAF), to cope with the issue of data scarcity.
In particular, we propose an attention-based shared module with a domain discriminator across domains, as well as private modules for individual domains.
This allows us to jointly train the source and target domains by generating domain-invariant latent features while retaining domain-specific features.
arXiv Detail & Related papers (2021-02-13T00:26:35Z)
- Unsupervised Domain Adaptation for Spatio-Temporal Action Localization [69.12982544509427]
Spatio-temporal action localization is an important problem in computer vision.
We propose an end-to-end unsupervised domain adaptation algorithm.
We show that significant performance gain can be achieved when spatial and temporal features are adapted separately or jointly.
arXiv Detail & Related papers (2020-10-19T04:25:10Z)
- Contradistinguisher: A Vapnik's Imperative to Unsupervised Domain Adaptation [7.538482310185133]
We propose a model, referred to as Contradistinguisher, that learns contrastive features with the objective of jointly learning to contradistinguish the unlabeled target domain in an unsupervised way.
We achieve the state-of-the-art on Office-31 and VisDA-2017 datasets in both single-source and multi-source settings.
arXiv Detail & Related papers (2020-05-25T19:54:38Z)
- Domain Conditioned Adaptation Network [90.63261870610211]
We propose a Domain Conditioned Adaptation Network (DCAN) to excite distinct convolutional channels with a domain conditioned channel attention mechanism.
This is the first work to explore the domain-wise convolutional channel activation for deep DA networks.
arXiv Detail & Related papers (2020-05-14T04:23:24Z)
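The channel attention mechanism DCAN builds on can be illustrated with a squeeze-and-excitation-style sketch: global average pooling per channel, a small bottleneck MLP, then a sigmoid gate that re-weights each channel. This is a generic sketch, not DCAN itself; in a domain-conditioned variant the MLP weights (here hypothetical `w1`, `w2`) could differ per domain.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """SE-style channel gating for a (C, H, W) feature map:
    squeeze -> bottleneck MLP with ReLU -> per-channel sigmoid gate."""
    squeeze = feat.mean(axis=(1, 2))                      # (C,) global pool
    gate = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))    # (C,) in (0, 1)
    return feat * gate[:, None, None]                     # re-weight channels

C, r = 4, 2  # channels and bottleneck size (values assumed)
feat = np.arange(C * 3 * 3, dtype=float).reshape(C, 3, 3)
w1, w2 = np.zeros((r, C)), np.zeros((C, r))  # placeholder weights
out = channel_attention(feat, w1, w2)
```

With zero weights the gate is sigmoid(0) = 0.5 for every channel, so the output is simply half the input; learned weights would instead excite domain-relevant channels and suppress others.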
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.