Contrastive Domain Adaptation for Time-Series via Temporal Mixup
- URL: http://arxiv.org/abs/2212.01555v2
- Date: Thu, 27 Jul 2023 05:58:50 GMT
- Title: Contrastive Domain Adaptation for Time-Series via Temporal Mixup
- Authors: Emadeldeen Eldele, Mohamed Ragab, Zhenghua Chen, Min Wu, Chee-Keong Kwoh and Xiaoli Li
- Abstract summary: We propose a novel lightweight contrastive domain adaptation framework called CoTMix for time-series data.
Specifically, we propose a novel temporal mixup strategy to generate two intermediate augmented views for the source and target domains.
Our approach can significantly outperform all state-of-the-art UDA methods.
- Score: 14.723714504015483
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unsupervised Domain Adaptation (UDA) has emerged as a powerful solution for
the domain shift problem via transferring the knowledge from a labeled source
domain to a shifted unlabeled target domain. Despite the prevalence of UDA for
visual applications, it remains relatively underexplored for time-series
applications. In this work, we propose a novel lightweight contrastive domain
adaptation framework called CoTMix for time-series data. Unlike existing
approaches that either use statistical distances or adversarial techniques, we
leverage contrastive learning solely to mitigate the distribution shift across
the different domains. Specifically, we propose a novel temporal mixup strategy
to generate two intermediate augmented views for the source and target domains.
Subsequently, we leverage contrastive learning to maximize the similarity
between each domain and its corresponding augmented view. The generated views
consider the temporal dynamics of time-series data during the adaptation
process while inheriting the semantics shared between the two domains. Hence, we
gradually push both domains towards a common intermediate space, mitigating the
distribution shift across them. Extensive experiments conducted on five
real-world time-series datasets show that our approach can significantly
outperform all state-of-the-art UDA methods. The implementation code of CoTMix
is available at https://github.com/emadeldeen24/CoTMix.
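As a reading aid, here is a minimal sketch of the two ingredients the abstract describes: a temporal mixup that blends each domain with a temporally smoothed view of the other to form an intermediate augmented view, and a contrastive (NT-Xent/InfoNCE-style) loss that pulls each domain towards its own view. The average-pooling smoothing, the mixing ratio lam, the temperature tau, and the toy encoder are illustrative assumptions, not the paper's exact formulation; see the linked repository for the authors' implementation.

```python
import torch
import torch.nn.functional as F

def temporal_mixup(x_dom, x_other, lam=0.9, window=5):
    """Blend a dominant domain with a temporally averaged view of the other.

    x_dom, x_other: tensors of shape (batch, channels, time).
    """
    # Windowed temporal average of the other domain (illustrative choice).
    smoothed = F.avg_pool1d(x_other, kernel_size=window, stride=1,
                            padding=window // 2)
    smoothed = smoothed[..., : x_dom.shape[-1]]  # trim any padding overhang
    return lam * x_dom + (1.0 - lam) * smoothed

def nt_xent(z_a, z_b, tau=0.2):
    """Symmetric InfoNCE: row i of z_a and row i of z_b are positives."""
    z_a, z_b = F.normalize(z_a, dim=1), F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / tau                 # (batch, batch) similarities
    labels = torch.arange(z_a.shape[0])
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

# Source-dominant and target-dominant intermediate views.
src, tgt = torch.randn(8, 3, 128), torch.randn(8, 3, 128)
src_view = temporal_mixup(src, tgt)              # mostly source, some target
tgt_view = temporal_mixup(tgt, src)              # mostly target, some source

# Toy encoder; pull each domain towards its augmented view.
enc = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 128, 64))
loss = nt_xent(enc(src), enc(src_view)) + nt_xent(enc(tgt), enc(tgt_view))
```

Because each view is dominated by one domain but contaminated by the other, maximizing agreement with it nudges both domains towards a shared intermediate space without statistical-distance or adversarial objectives.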
Related papers
- LogoRA: Local-Global Representation Alignment for Robust Time Series Classification [31.704294005809082]
Unsupervised domain adaptation (UDA) of time series aims to teach models to identify consistent patterns across various temporal scenarios.
Existing UDA methods struggle to adequately extract and align both global and local features in time series data.
We propose the Local-Global Representation Alignment framework (LogoRA), which employs a two-branch encoder, comprising a multi-scale convolutional branch and a patching transformer branch.
Our evaluations on four time-series datasets demonstrate that LogoRA outperforms strong baselines by up to 12.52%, showcasing its superiority in time-series UDA tasks.
arXiv Detail & Related papers (2024-09-12T13:59:03Z)
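A hedged sketch of the two-branch encoder idea in the LogoRA entry above: a multi-scale convolutional branch for local patterns and a patching transformer branch for global context, fused by concatenation. The kernel sizes, patch length, model width, and mean-pooling fusion are all illustrative assumptions rather than LogoRA's actual configuration.

```python
import torch
import torch.nn as nn

class TwoBranchEncoder(nn.Module):
    """Multi-scale conv branch (local) plus patching transformer branch (global)."""
    def __init__(self, channels=3, patch_len=16, d_model=64):
        super().__init__()
        # Local branch: parallel convolutions at several temporal scales.
        self.convs = nn.ModuleList(
            nn.Conv1d(channels, d_model, kernel_size=k, padding=k // 2)
            for k in (3, 5, 9))
        # Global branch: embed non-overlapping patches, run self-attention.
        self.patch_embed = nn.Linear(channels * patch_len, d_model)
        self.transformer = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2)
        self.patch_len = patch_len

    def forward(self, x):                          # x: (batch, channels, time)
        local = torch.stack([conv(x).mean(dim=-1) for conv in self.convs]).mean(0)
        b, c, _ = x.shape
        patches = x.unfold(-1, self.patch_len, self.patch_len)   # (b, c, n, p)
        patches = patches.permute(0, 2, 1, 3).reshape(b, -1, c * self.patch_len)
        global_ = self.transformer(self.patch_embed(patches)).mean(dim=1)
        return torch.cat([local, global_], dim=-1)  # fused local+global features

feats = TwoBranchEncoder()(torch.randn(8, 3, 128))  # -> (8, 128)
```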
- Unified Domain Adaptive Semantic Segmentation [96.74199626935294]
Unsupervised Domain Adaptive Semantic Segmentation (UDA-SS) aims to transfer supervision from a labeled source domain to an unlabeled target domain.
We propose a Quad-directional Mixup (QuadMix) method, characterized by tackling distinct point attributes and feature inconsistencies.
Our method outperforms the state-of-the-art works by large margins on four challenging UDA-SS benchmarks.
arXiv Detail & Related papers (2023-11-22T09:18:49Z)
- GLAD: Global-Local View Alignment and Background Debiasing for Unsupervised Video Domain Adaptation with Large Domain Gap [9.284539958686368]
We tackle the challenging problem of unsupervised video domain adaptation (UVDA) for action recognition.
We introduce a novel UVDA scenario, denoted as Kinetics->BABEL, with a substantially larger domain gap in terms of both temporal dynamics and background shifts.
We empirically validate that the proposed method shows significant improvement over the existing methods on the Kinetics->BABEL dataset with a large domain gap.
arXiv Detail & Related papers (2023-11-21T09:27:30Z)
- Long-Term Invariant Local Features via Implicit Cross-Domain Correspondences [79.21515035128832]
We conduct a thorough analysis of the performance of current state-of-the-art feature extraction networks under various domain changes.
We propose a novel data-centric method, Implicit Cross-Domain Correspondences (iCDC).
iCDC represents the same environment with multiple Neural Radiance Fields, each fitting the scene under individual visual domains.
arXiv Detail & Related papers (2023-11-06T18:53:01Z)
- Match-And-Deform: Time Series Domain Adaptation through Optimal Transport and Temporal Alignment [10.89671409446191]
We introduce the Match-And-Deform (MAD) approach that aims at finding correspondences between the source and target time series.
When embedded into a deep neural network, MAD helps learn new representations of time series that align the two domains.
Empirical studies on benchmark datasets and remote sensing data demonstrate that MAD produces meaningful sample-to-sample pairings and time-shift estimates.
arXiv Detail & Related papers (2023-08-24T09:57:11Z)
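A toy illustration of the matching idea in the MAD entry above: score every source/target pair with a temporal-alignment cost (plain DTW here), then compute a transport-like pairing over the cost matrix. One-to-one assignment stands in for the paper's joint optimal-transport-plus-DTW objective, which is optimized together with the network.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def dtw(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping distance, 1-D series."""
    D = np.full((len(a) + 1, len(b) + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            D[i, j] = abs(a[i - 1] - b[j - 1]) + min(
                D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[-1, -1]

rng = np.random.default_rng(0)
source = rng.standard_normal((10, 50))   # 10 source series of length 50
target = rng.standard_normal((10, 50))   # 10 target series

# Pairwise temporal-alignment cost, then a one-to-one "transport plan".
cost = np.array([[dtw(s, t) for t in target] for s in source])
rows, cols = linear_sum_assignment(cost)  # source i pairs with target cols[i]
```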
- Context-aware Domain Adaptation for Time Series Anomaly Detection [69.3488037353497]
Time series anomaly detection is a challenging task with a wide range of real-world applications.
Recent efforts have been devoted to time series domain adaptation to leverage knowledge from similar domains.
We propose a framework that combines context sampling and anomaly detection into a joint learning procedure.
arXiv Detail & Related papers (2023-04-15T02:28:58Z)
- Exploiting Graph Structured Cross-Domain Representation for Multi-Domain Recommendation [71.45854187886088]
Multi-domain recommender systems benefit from cross-domain representation learning and positive knowledge transfer.
We use temporal intra- and inter-domain interactions as contextual information for our method called MAGRec.
We perform experiments on publicly available datasets in different scenarios where MAGRec consistently outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-02-12T19:51:32Z)
- Domain Generalization via Selective Consistency Regularization for Time Series Classification [16.338176636365752]
Domain generalization methods aim to learn models robust to domain shift with data from a limited number of source domains.
We propose a novel representation learning methodology that selectively enforces prediction consistency between source domains.
arXiv Detail & Related papers (2022-06-16T01:57:35Z)
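A minimal sketch of what a selective consistency regularizer can look like: a symmetric KL penalty on predictions, applied only to chosen pairs of source domains. The hard-coded pair list and random logits are placeholders; the paper's actual selection criterion and training setup differ.

```python
import torch
import torch.nn.functional as F

def consistency(logits_a, logits_b):
    """Symmetric KL divergence between two predictive distributions."""
    log_p, log_q = F.log_softmax(logits_a, dim=1), F.log_softmax(logits_b, dim=1)
    return 0.5 * (F.kl_div(log_p, log_q.exp(), reduction="batchmean") +
                  F.kl_div(log_q, log_p.exp(), reduction="batchmean"))

# Placeholder predictions from three source domains (5 classes each) and an
# assumed selection of "compatible" domain pairs to regularize.
domain_logits = [torch.randn(8, 5) for _ in range(3)]
selected_pairs = [(0, 1), (1, 2)]           # illustrative, not the paper's rule

reg = sum(consistency(domain_logits[i], domain_logits[j])
          for i, j in selected_pairs)
```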
- Domain Adaptation for Time-Series Classification to Mitigate Covariate Shift [3.071136270246468]
This paper proposes a novel supervised domain adaptation method based on two steps.
First, we search for an optimal class-dependent transformation from the source to the target domain from a few samples.
Second, we use embedding similarity techniques to select the corresponding transformation at inference.
arXiv Detail & Related papers (2022-04-07T10:27:14Z)
- Contrast and Mix: Temporal Contrastive Video Domain Adaptation with Background Mixing [55.73722120043086]
We introduce Contrast and Mix (CoMix), a new contrastive learning framework that aims to learn discriminative invariant feature representations for unsupervised video domain adaptation.
First, we utilize temporal contrastive learning to bridge the domain gap by maximizing the similarity between encoded representations of an unlabeled video at two different speeds.
Second, we propose a novel extension to the temporal contrastive loss by using background mixing that allows additional positives per anchor, thus adapting contrastive learning to leverage action semantics shared across both domains.
arXiv Detail & Related papers (2021-10-28T14:03:29Z)
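A minimal sketch of the two-speed positive pair described in the CoMix entry above: the same clip encoded at its original and half frame rate forms a positive pair for InfoNCE. The mean-pooling "encoder" is a placeholder, and the background-mixing extension (extra positives from frames composited onto other videos' backgrounds) is omitted here.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.1):
    """Rows of z1 and z2 with the same index are treated as positives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    return F.cross_entropy(z1 @ z2.t() / tau, torch.arange(z1.shape[0]))

video = torch.randn(4, 16, 512)      # (batch, frames, per-frame features)
slow = video[:, ::2]                 # half speed: keep every other frame

encode = lambda clip: clip.mean(dim=1)   # placeholder for a video encoder
loss = info_nce(encode(video), encode(slow))
```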
- Contrastive Learning and Self-Training for Unsupervised Domain Adaptation in Semantic Segmentation [71.77083272602525]
UDA attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains.
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
arXiv Detail & Related papers (2021-05-05T11:55:53Z)
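A sketch of the category-wise centroid idea in the entry above: compute per-class feature centroids in each domain (true labels on source, pseudo-labels on target) and pull matching centroids together. The squared-distance loss is one simple stand-in; the paper uses a contrastive formulation with temporally ensembled pseudo-labels.

```python
import torch

def class_centroids(features, labels, num_classes):
    """Mean feature vector per class; zeros for classes absent from the batch."""
    cents = torch.zeros(num_classes, features.shape[1])
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            cents[c] = features[mask].mean(dim=0)
    return cents

num_classes = 5
src_feat = torch.randn(32, 64)                       # source features
src_y = torch.randint(0, num_classes, (32,))         # ground-truth labels
tgt_feat = torch.randn(32, 64)                       # target features
tgt_pseudo = torch.randint(0, num_classes, (32,))    # pseudo-labels

src_c = class_centroids(src_feat, src_y, num_classes)
tgt_c = class_centroids(tgt_feat, tgt_pseudo, num_classes)
align = ((src_c - tgt_c) ** 2).sum(dim=1).mean()     # pull centroids together
```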
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences of its use.