Improving the Transferability of Time Series Forecasting with
Decomposition Adaptation
- URL: http://arxiv.org/abs/2307.00066v1
- Date: Fri, 30 Jun 2023 18:12:22 GMT
- Title: Improving the Transferability of Time Series Forecasting with
Decomposition Adaptation
- Authors: Yan Gao, Yan Wang, Qiang Wang
- Abstract summary: In time series forecasting, it is difficult to obtain enough data, which limits the performance of neural forecasting models.
To alleviate this data scarcity, we design the Sequence Decomposition Adaptation Network (SeDAN).
SeDAN is a novel transfer architecture to improve forecasting performance on the target domain by aligning transferable knowledge from cross-domain datasets.
- Score: 14.09967794482993
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Thanks to effective pattern mining and feature representation,
neural forecasting models based on deep learning have made great progress.
Effective learning, however, requires sufficient data, which is often
difficult to obtain in time series forecasting and thus limits the
performance of neural forecasting models. To alleviate this data scarcity,
we design the Sequence Decomposition Adaptation Network (SeDAN), a novel
transfer architecture that improves forecasting performance on the target
domain by aligning transferable knowledge from cross-domain datasets.
Rethinking the transferability of features in time series data, we propose
Implicit Contrastive Decomposition to decompose the original features into
components, including seasonal and trend features, which are easier to
transfer. We then design corresponding adaptation methods for the decomposed
features in different domains: joint distribution adaptation for seasonal
features and an Optimal Local Adaptation for trend features. Extensive
experiments on five benchmark datasets for multivariate time series
forecasting demonstrate the effectiveness of SeDAN: it provides more
efficient and stable knowledge transfer.
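The abstract does not spell out how Implicit Contrastive Decomposition, joint
distribution adaptation, or Optimal Local Adaptation are implemented. The
sketch below is therefore only a rough illustration of the general idea under
common assumptions: a moving-average split into trend and seasonal components,
and a Gaussian-kernel MMD loss as a generic stand-in for aligning the
distributions of source and target seasonal features. The function names
(moving_avg_decompose, gaussian_mmd) are illustrative, not from the paper.

```python
import torch
import torch.nn.functional as F


def moving_avg_decompose(x: torch.Tensor, kernel: int = 25):
    """Split series of shape (batch, length, channels) into seasonal and trend parts."""
    # Replicate the first/last time step so the moving average keeps the length.
    front = x[:, :1, :].repeat(1, (kernel - 1) // 2, 1)
    back = x[:, -1:, :].repeat(1, kernel // 2, 1)
    padded = torch.cat([front, x, back], dim=1)
    trend = F.avg_pool1d(padded.transpose(1, 2), kernel_size=kernel, stride=1)
    trend = trend.transpose(1, 2)
    seasonal = x - trend
    return seasonal, trend


def gaussian_mmd(a: torch.Tensor, b: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Squared MMD between two feature matrices of shape (n, d), Gaussian kernel."""
    def k(u, v):
        return torch.exp(-torch.cdist(u, v).pow(2) / (2 * sigma ** 2))
    return k(a, a).mean() + k(b, b).mean() - 2 * k(a, b).mean()


# Usage sketch: align the seasonal components of a source batch and a target batch.
src = torch.randn(32, 96, 7)   # (batch, length, channels)
tgt = torch.randn(32, 96, 7)
src_seasonal, _ = moving_avg_decompose(src)
tgt_seasonal, _ = moving_avg_decompose(tgt)
align_loss = gaussian_mmd(src_seasonal.flatten(1), tgt_seasonal.flatten(1))
```

In a full pipeline, an alignment term of this kind would be added to the
source-domain forecasting loss so that the learned seasonal features have
matching statistics across domains.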
Related papers
- Learn from the Learnt: Source-Free Active Domain Adaptation via Contrastive Sampling and Visual Persistence [60.37934652213881]
Domain Adaptation (DA) facilitates knowledge transfer from a source domain to a related target domain.
This paper investigates a practical DA paradigm, namely Source data-Free Active Domain Adaptation (SFADA), where source data becomes inaccessible during adaptation.
We present learn from the learnt (LFTL), a novel paradigm for SFADA to leverage the learnt knowledge from the source pretrained model and actively iterated models without extra overhead.
arXiv Detail & Related papers (2024-07-26T17:51:58Z) - Low-rank Adaptation for Spatio-Temporal Forecasting [13.595533573828734]
We present ST-LoRA, a novel low-rank adaptation framework that serves as an off-the-shelf plugin for existing spatio-temporal prediction models.
Our approach increases the parameters and training time of the original models by less than 4%, while still achieving consistent and sustained performance improvements.
arXiv Detail & Related papers (2024-04-11T17:04:55Z) - Adapting to Length Shift: FlexiLength Network for Trajectory Prediction [53.637837706712794]
Trajectory prediction plays an important role in various applications, including autonomous driving, robotics, and scene understanding.
Existing approaches mainly focus on developing compact neural networks to increase prediction precision on public datasets, typically employing a standardized input duration.
We introduce a general and effective framework, the FlexiLength Network (FLN), to enhance the robustness of existing trajectory prediction methods against varying observation periods.
arXiv Detail & Related papers (2024-03-31T17:18:57Z) - Probing the Robustness of Time-series Forecasting Models with
CounterfacTS [1.823020744088554]
We present and publicly release CounterfacTS, a tool to probe the robustness of deep learning models in time-series forecasting tasks.
CounterfacTS has a user-friendly interface that allows the user to visualize, compare and quantify time series data and their forecasts.
arXiv Detail & Related papers (2024-03-06T07:34:47Z) - Beyond Transfer Learning: Co-finetuning for Action Localisation [64.07196901012153]
We propose co-finetuning -- simultaneously training a single model on multiple "upstream" and "downstream" tasks.
We demonstrate that co-finetuning outperforms traditional transfer learning when using the same total amount of data.
We also show how we can easily extend our approach to multiple "upstream" datasets to further improve performance.
arXiv Detail & Related papers (2022-07-08T10:25:47Z) - Few-Shot Adaptation of Pre-Trained Networks for Domain Shift [17.123505029637055]
Deep networks are prone to performance degradation when there is a domain shift between the source (training) data and target (test) data.
Recent test-time adaptation methods update the batch normalization layers of pre-trained source models deployed in new target environments with streaming data to mitigate such performance degradation (a generic sketch of this idea appears after this list).
We propose a framework for few-shot domain adaptation to address the practical challenges of data-efficient adaptation.
arXiv Detail & Related papers (2022-05-30T16:49:59Z) - Temporal Convolution Domain Adaptation Learning for Crops Growth
Prediction [5.966652553573454]
We construct an innovative network architecture based on domain adaptation learning to predict crops growth curves with limited available crop data.
We are the first to use temporal convolution filters as the backbone for constructing a domain adaptation network architecture.
Results show that the proposed temporal convolution-based network architecture outperforms all benchmarks not only in accuracy but also in model size and convergence rate.
arXiv Detail & Related papers (2022-02-24T14:22:36Z) - Invariance Learning in Deep Neural Networks with Differentiable Laplace
Approximations [76.82124752950148]
We develop a convenient gradient-based method for selecting the data augmentation.
We use a differentiable Kronecker-factored Laplace approximation to the marginal likelihood as our objective.
arXiv Detail & Related papers (2022-02-22T02:51:11Z) - Transfer learning to improve streamflow forecasts in data sparse regions [0.0]
We study the methodology behind Transfer Learning (TL) through fine-tuning and parameter transferring for better generalization performance of streamflow prediction in data-sparse regions.
We propose a standard recurrent neural network in the form of Long Short-Term Memory (LSTM) to fit on a sufficiently large source domain dataset.
We present a methodology to implement transfer learning approaches for hydrologic applications by separating the spatial and temporal components of the model and training the model to generalize.
arXiv Detail & Related papers (2021-12-06T14:52:53Z) - How Well Do Sparse Imagenet Models Transfer? [75.98123173154605]
Transfer learning is a classic paradigm by which models pretrained on large "upstream" datasets are adapted to yield good results on "downstream" datasets.
In this work, we perform an in-depth investigation of this phenomenon in the context of convolutional neural networks (CNNs) trained on the ImageNet dataset.
We show that sparse models can match or even outperform the transfer performance of dense models, even at high sparsities.
arXiv Detail & Related papers (2021-11-26T11:58:51Z) - On Robustness and Transferability of Convolutional Neural Networks [147.71743081671508]
Modern deep convolutional networks (CNNs) are often criticized for not generalizing under distributional shifts.
We study the interplay between out-of-distribution and transfer performance of modern image classification CNNs for the first time.
We find that increasing both the training set and model sizes significantly improves robustness to distributional shift.
arXiv Detail & Related papers (2020-07-16T18:39:04Z)
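The test-time batch-normalization adaptation mentioned in the Few-Shot
Adaptation entry above can be illustrated with a short, generic sketch. This
is an assumption-based stand-in, not that paper's exact procedure, and
adapt_batchnorm is a hypothetical helper name.

```python
import torch
import torch.nn as nn


@torch.no_grad()
def adapt_batchnorm(model: nn.Module, target_loader, momentum: float = 0.1) -> nn.Module:
    """Refresh BatchNorm running statistics of a pretrained model on unlabeled target batches."""
    model.eval()
    for m in model.modules():
        if isinstance(m, nn.modules.batchnorm._BatchNorm):
            m.train()              # BN layers update running mean/var in train mode
            m.momentum = momentum  # how quickly the stats follow the target data
    for x in target_loader:        # streaming, unlabeled target-domain batches
        model(x)                   # forward pass only; no gradients, no labels
    model.eval()
    return model
```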
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.