Contrastive Learning for Unsupervised Domain Adaptation of Time Series
- URL: http://arxiv.org/abs/2206.06243v1
- Date: Mon, 13 Jun 2022 15:23:31 GMT
- Title: Contrastive Learning for Unsupervised Domain Adaptation of Time Series
- Authors: Yilmazcan Ozyurt, Stefan Feuerriegel, Ce Zhang
- Abstract summary: Unsupervised domain adaptation (UDA) aims at learning, from a labeled source domain, a machine learning model that performs well on a similar yet different, unlabeled target domain.
We develop a novel framework for UDA of time series data, called CLUDA.
We show that our framework achieves state-of-the-art performance for time series UDA.
- Score: 29.211602179219316
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised domain adaptation (UDA) aims at learning, from a labeled
source domain, a machine learning model that performs well on a similar yet
different, unlabeled target domain. UDA is important in many applications such
as medicine, where it is used to adapt risk scores across different patient
cohorts. In this paper, we develop a novel framework for UDA of time series
data, called CLUDA. Specifically, we propose a contrastive learning framework
to learn domain-invariant semantics in multivariate time series, so that the
learned representations preserve label information for the prediction task. In our framework, we
further capture semantic variation between source and target domain via
nearest-neighbor contrastive learning. To the best of our knowledge, ours is
the first framework to learn domain-invariant semantic information for UDA of
time series data. We evaluate our framework using large-scale, real-world
datasets with medical time series (i.e., MIMIC-IV and AmsterdamUMCdb) to
demonstrate its effectiveness and show that it achieves state-of-the-art
performance for time series UDA.
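To make the two components above concrete, here is a minimal PyTorch sketch pairing an InfoNCE-style contrastive loss on augmented views with a nearest-neighbor lookup that matches each source embedding to its closest target embedding. Everything here (the encoder interface, augmentations, temperature, and loss combination) is an illustrative assumption, not the paper's actual implementation.

```python
# Hedged sketch: InfoNCE contrastive loss plus a nearest-neighbor cross-domain
# term, in the spirit of the framework described above. Names are hypothetical.
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Contrastive loss between two views z1, z2 of shape (batch, dim);
    matching rows are positives, all other rows are negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                            # (batch, batch) similarities
    labels = torch.arange(z1.size(0), device=z1.device)   # positives on the diagonal
    return F.cross_entropy(logits, labels)

def nearest_neighbor_positives(z_src: torch.Tensor, z_trg: torch.Tensor) -> torch.Tensor:
    """For each source embedding, return the most similar target embedding,
    which then serves as its cross-domain positive."""
    sim = F.normalize(z_src, dim=1) @ F.normalize(z_trg, dim=1).t()
    return z_trg[sim.argmax(dim=1)]                       # (batch, dim)

# Assumed usage, with encoder: (batch, channels, time) -> (batch, dim)
# and aug() a stochastic time-series augmentation:
#   z_s1, z_s2 = encoder(aug(x_src)), encoder(aug(x_src))
#   z_t1, z_t2 = encoder(aug(x_trg)), encoder(aug(x_trg))
#   loss = info_nce(z_s1, z_s2) + info_nce(z_t1, z_t2) \
#        + info_nce(z_s1, nearest_neighbor_positives(z_s1, z_t1))
```

The within-domain terms push for augmentation-invariant semantics, while the nearest-neighbor term is one plausible way to capture semantic variation between the domains.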
Related papers
- Deep Unsupervised Domain Adaptation for Time Series Classification: a Benchmark [3.618615996077951]
Unsupervised Domain Adaptation (UDA) aims to harness labeled source data to train models for unlabeled target data.
This paper introduces a benchmark for evaluating UDA techniques for time series classification.
We provide seven new benchmark datasets covering various domain shifts and temporal dynamics.
arXiv Detail & Related papers (2023-12-15T15:03:55Z)
- Multi-scale Feature Alignment for Continual Learning of Unlabeled Domains [3.9498537297431167]
Generative feature-driven image replay, in conjunction with a dual-purpose discriminator, enables the generation of images with realistic features for replay.
We present detailed ablation experiments studying the components of our proposed method and demonstrate a possible use case of our continual UDA method for an unsupervised patch-based segmentation task.
arXiv Detail & Related papers (2023-02-02T18:19:01Z)
- Contrastive Domain Adaptation for Time-Series via Temporal Mixup [14.723714504015483]
We propose a novel lightweight contrastive domain adaptation framework called CoTMix for time-series data.
Specifically, we propose a novel temporal mixup strategy to generate two intermediate augmented views for the source and target domains (a hedged sketch of this idea follows this entry).
Our approach can significantly outperform all state-of-the-art UDA methods.
arXiv Detail & Related papers (2022-12-03T06:53:38Z)
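As a hedged illustration of what a temporal mixup between domains might look like (the moving-average smoothing, mixing weight, and names below are assumptions, not CoTMix's exact recipe):

```python
# Hypothetical temporal mixup: blend one domain's series with a temporally
# smoothed version of the other to create an intermediate augmented view.
import torch
import torch.nn.functional as F

def temporal_mixup(x_a: torch.Tensor, x_b: torch.Tensor,
                   lam: float = 0.8, window: int = 5) -> torch.Tensor:
    """x_a, x_b: (batch, channels, time). Returns a view dominated by x_a,
    blended with a moving average of x_b; window should be odd so the
    convolution preserves the sequence length."""
    channels = x_b.size(1)
    kernel = torch.ones(channels, 1, window, device=x_b.device) / window
    x_b_smooth = F.conv1d(x_b, kernel, padding=window // 2, groups=channels)
    return lam * x_a + (1.0 - lam) * x_b_smooth

# Two intermediate views, one dominated by each domain:
#   src_view = temporal_mixup(x_src, x_trg)
#   trg_view = temporal_mixup(x_trg, x_src)
```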
- Self-supervised Autoregressive Domain Adaptation for Time Series Data [9.75443057146649]
Unsupervised domain adaptation (UDA) has successfully addressed the domain shift problem for visual applications.
These approaches may nevertheless have limited performance on time series data for several reasons.
We propose a Self-supervised Autoregressive Domain Adaptation (SLARDA) framework to address these limitations.
arXiv Detail & Related papers (2021-11-29T08:17:23Z)
- Discover, Hallucinate, and Adapt: Open Compound Domain Adaptation for Semantic Segmentation [91.30558794056056]
Unsupervised domain adaptation (UDA) for semantic segmentation has been attracting attention recently.
We present a novel framework based on three main design principles: discover, hallucinate, and adapt.
We evaluate our solution on the standard GTA to C-Driving benchmark and achieve new state-of-the-art results.
arXiv Detail & Related papers (2021-10-08T13:20:09Z)
- Variational Attention: Propagating Domain-Specific Knowledge for Multi-Domain Learning in Crowd Counting [75.80116276369694]
In crowd counting, because labelling is laborious, collecting a new large-scale dataset is perceived as intractable.
We resort to multi-domain joint learning and propose a simple but effective Domain-specific Knowledge Propagating Network (DKPNet).
This is mainly achieved by the novel Variational Attention (VA) technique, which explicitly models the attention distributions for different domains.
arXiv Detail & Related papers (2021-08-18T08:06:37Z)
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
arXiv Detail & Related papers (2021-06-10T06:32:30Z)
- Curriculum Graph Co-Teaching for Multi-Target Domain Adaptation [78.28390172958643]
We identify two key aspects that can help alleviate multiple domain shifts in multi-target domain adaptation (MTDA).
We propose Curriculum Graph Co-Teaching (CGCT), which uses a dual classifier head, one of them being a graph convolutional network (GCN) that aggregates features from similar samples across the domains (a hedged sketch follows this entry).
When the domain labels are available, we propose Domain-aware Curriculum Learning (DCL), a sequential adaptation strategy that first adapts on the easier target domains, followed by the harder ones.
arXiv Detail & Related papers (2021-04-01T23:41:41Z)
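A minimal sketch of a dual-head design in this spirit, assuming the graph is built over the mini-batch from cosine similarity with a single GCN layer; the threshold, sizes, and names are illustrative, not CGCT's actual architecture:

```python
# Hypothetical dual classifier head: a plain linear head plus a graph head
# that aggregates features from similar samples in the (mixed-domain) batch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualHeadClassifier(nn.Module):
    def __init__(self, dim: int, num_classes: int, threshold: float = 0.5):
        super().__init__()
        self.mlp_head = nn.Linear(dim, num_classes)
        self.gcn_lin = nn.Linear(dim, dim)
        self.gcn_head = nn.Linear(dim, num_classes)
        self.threshold = threshold

    def forward(self, feats: torch.Tensor):
        # feats: (batch, dim), where the batch mixes source and target samples.
        logits_mlp = self.mlp_head(feats)
        z = F.normalize(feats, dim=1)
        adj = (z @ z.t() > self.threshold).float()             # similarity graph
        adj = adj / adj.sum(dim=1, keepdim=True).clamp(min=1)  # row-normalize
        agg = F.relu(self.gcn_lin(adj @ feats))                # one GCN layer
        logits_gcn = self.gcn_head(agg)
        return logits_mlp, logits_gcn  # the two heads can co-teach each other
```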
- Continual Unsupervised Domain Adaptation for Semantic Segmentation [14.160280479726921]
Unsupervised Domain Adaptation (UDA) for semantic segmentation has been favorably applied to real-world scenarios in which pixel-level labels are hard to obtain.
We propose Continual UDA for semantic segmentation based on a newly designed Expanding Target-specific Memory (ETM) framework.
Our novel ETM framework contains a Target-specific Memory (TM) for each target domain to alleviate catastrophic forgetting (a minimal sketch of the idea follows this entry).
arXiv Detail & Related papers (2020-10-19T05:59:48Z)
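A minimal sketch of the per-target-memory idea, assuming one small residual module per target domain routed by a domain id; this is purely illustrative, not the ETM architecture:

```python
# Hypothetical expanding memory: a new module is added per target domain, so
# adapting to a new domain never overwrites parameters used by earlier ones.
import torch
import torch.nn as nn

class ExpandingTargetMemory(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.dim = dim
        self.memories = nn.ModuleDict()  # one memory module per target domain

    def add_target(self, domain: str) -> None:
        if domain not in self.memories:
            self.memories[domain] = nn.Linear(self.dim, self.dim)

    def forward(self, feats: torch.Tensor, domain: str) -> torch.Tensor:
        # Residual correction from the memory assigned to this target domain.
        return feats + self.memories[domain](feats)
```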
- Domain Adaptation for Semantic Parsing [68.81787666086554]
We propose a novel semantic parser for domain adaptation, where we have much less annotated data in the target domain compared to the source domain.
Our semantic parser benefits from a two-stage coarse-to-fine framework, and thus can provide different and accurate treatments for the two stages.
Experiments on a benchmark dataset show that our method consistently outperforms several popular domain adaptation strategies.
arXiv Detail & Related papers (2020-06-23T14:47:41Z)
- Mind the Gap: Enlarging the Domain Gap in Open Set Domain Adaptation [65.38975706997088]
Open set domain adaptation (OSDA) assumes the presence of unknown classes in the target domain.
We show that existing state-of-the-art methods suffer a considerable performance drop in the presence of larger domain gaps.
We propose a novel framework to specifically address the larger domain gaps.
arXiv Detail & Related papers (2020-03-08T14:20:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.