Cross-domain Time Series Forecasting with Attention Sharing
- URL: http://arxiv.org/abs/2102.06828v1
- Date: Sat, 13 Feb 2021 00:26:35 GMT
- Title: Cross-domain Time Series Forecasting with Attention Sharing
- Authors: Xiaoyong Jin, Youngsuk Park, Danielle Maddix, Bernie Wang, Xifeng Yan
- Abstract summary: We propose a novel domain adaptation framework, Domain Adaptation Forecaster (DAF), to cope with the issue of data scarcity.
In particular, we propose an attention-based shared module with a domain discriminator across domains as well as private modules for individual domains.
This allows us to jointly train the source and target domains by generating domain-invariant latent features while retaining domain-specific features.
- Score: 10.180248006928107
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recent years have witnessed deep neural networks gaining increasing
popularity in the field of time series forecasting. A primary reason for their
success is their ability to effectively capture complex temporal dynamics
across multiple related time series. However, the advantages of these deep
forecasters only start to emerge in the presence of a sufficient amount of
data. This poses a challenge for typical forecasting problems in practice,
where one either has a small number of time series, or limited observations per
time series, or both. To cope with the issue of data scarcity, we propose a novel
domain adaptation framework, Domain Adaptation Forecaster (DAF), that leverages
the statistical strengths from another relevant domain with abundant data
samples (source) to improve the performance on the domain of interest with
limited data (target). In particular, we propose an attention-based shared
module with a domain discriminator across domains as well as private modules
for individual domains. This allows us to jointly train the source and target
domains by generating domain-invariant latent features while retaining
domain-specific features. Extensive experiments on various domains demonstrate
that our proposed method outperforms state-of-the-art baselines on synthetic and
real-world datasets.
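To make the described architecture concrete, the sketch below shows one plausible way the pieces fit together: per-domain private encoders and decoders around a shared attention module, with a domain discriminator driving the shared latents toward domain invariance. This is a hypothetical illustration, not the authors' implementation; the gradient-reversal layer is one common way to realize adversarial training and is an assumption here, as are all names and sizes (DAFSketch, GradReverse, the layer widths).

```python
# Hypothetical sketch of a DAF-style model (names and sizes are assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on the backward
    pass, so minimizing the discriminator loss pushes the shared features
    toward domain invariance. Gradient reversal is an assumed adversarial
    mechanism, not necessarily the paper's exact scheme."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None

class DAFSketch(nn.Module):
    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        # Private modules: one encoder/decoder pair per domain.
        self.enc = nn.ModuleDict(
            {d: nn.Linear(1, d_model) for d in ("source", "target")})
        self.dec = nn.ModuleDict(
            {d: nn.Linear(d_model, 1) for d in ("source", "target")})
        # Shared attention module, used by both domains.
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Domain discriminator on the shared latent features.
        self.disc = nn.Sequential(
            nn.Linear(d_model, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x, domain, lam=1.0):
        # x: (batch, time, 1) univariate series from `domain`.
        h = self.enc[domain](x)               # domain-specific encoding
        z, _ = self.attn(h, h, h)             # shared latent features
        y_hat = self.dec[domain](z)           # domain-specific forecast
        d_logit = self.disc(GradReverse.apply(z, lam))
        return y_hat, d_logit

# One joint-training step on a source batch (the loss form is an assumption).
model = DAFSketch()
x_src = torch.randn(8, 24, 1)                 # toy batch: 8 series, 24 steps
y_hat, d_logit = model(x_src, "source")
forecast_loss = F.mse_loss(y_hat, x_src)      # placeholder forecast target
domain_loss = F.binary_cross_entropy_with_logits(
    d_logit, torch.zeros_like(d_logit))       # label 0 = source domain
(forecast_loss + domain_loss).backward()
```

The key design point the sketch captures is the shared/private split: the attention module and discriminator see batches from both domains, while each domain keeps its own encoder and decoder, matching the abstract's pairing of domain-invariant latent features with domain-specific ones.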
Related papers
- Towards Generalisable Time Series Understanding Across Domains [10.350643783811174]
We introduce OTiS, an open model for general time series analysis.
We propose a novel pre-training paradigm including a tokeniser with learnable domain-specific signatures.
Our model is pre-trained on a large corpus of 640,187 samples and 11 billion time points spanning 8 distinct domains.
arXiv Detail & Related papers (2024-10-09T17:09:30Z)
- UniTime: A Language-Empowered Unified Model for Cross-Domain Time Series Forecasting [59.11817101030137]
This research advocates for a unified model paradigm that transcends domain boundaries.
Learning an effective cross-domain model presents the following challenges.
We propose UniTime for effective cross-domain time series learning.
arXiv Detail & Related papers (2023-10-15T06:30:22Z)
- Exploiting Graph Structured Cross-Domain Representation for Multi-Domain Recommendation [71.45854187886088]
Multi-domain recommender systems benefit from cross-domain representation learning and positive knowledge transfer.
We use temporal intra- and inter-domain interactions as contextual information for our method called MAGRec.
We perform experiments on publicly available datasets in different scenarios where MAGRec consistently outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-02-12T19:51:32Z)
- Domain-incremental Cardiac Image Segmentation with Style-oriented Replay and Domain-sensitive Feature Whitening [67.6394526631557]
A segmentation model should incrementally learn from each incoming dataset and progressively update with improved functionality over time.
In medical scenarios, this is particularly challenging as accessing or storing past data is commonly not allowed due to data privacy.
We propose a novel domain-incremental learning framework to recover past domain inputs first and then regularly replay them during model optimization.
arXiv Detail & Related papers (2022-11-09T13:07:36Z)
- Forget Less, Count Better: A Domain-Incremental Self-Distillation Learning Benchmark for Lifelong Crowd Counting [51.44987756859706]
Off-the-shelf methods have drawbacks when handling multiple domains.
Lifelong Crowd Counting aims to alleviate catastrophic forgetting and improve generalization ability.
arXiv Detail & Related papers (2022-05-06T15:37:56Z)
- Self-supervised Autoregressive Domain Adaptation for Time Series Data [9.75443057146649]
Unsupervised domain adaptation (UDA) has successfully addressed the domain shift problem for visual applications.
These approaches may have limited performance on time series data for several reasons.
We propose a Self-supervised Autoregressive Domain Adaptation (SLARDA) framework to address these limitations.
arXiv Detail & Related papers (2021-11-29T08:17:23Z)
- Batch Normalization Embeddings for Deep Domain Generalization [50.51405390150066]
Domain generalization aims at training machine learning models to perform robustly across different and unseen domains.
We show a significant increase in classification accuracy over current state-of-the-art techniques on popular domain generalization benchmarks.
arXiv Detail & Related papers (2020-11-25T12:02:57Z)
- Domain Adaptation for Semantic Parsing [68.81787666086554]
We propose a novel semantic parser for domain adaptation, where we have much fewer annotated data in the target domain than in the source domain.
Our semantic parser benefits from a two-stage coarse-to-fine framework and can thus provide different and accurate treatments for the two stages.
Experiments on a benchmark dataset show that our method consistently outperforms several popular domain adaptation strategies.
arXiv Detail & Related papers (2020-06-23T14:47:41Z)