CoST: Contrastive Learning of Disentangled Seasonal-Trend
Representations for Time Series Forecasting
- URL: http://arxiv.org/abs/2202.01575v1
- Date: Thu, 3 Feb 2022 13:17:38 GMT
- Title: CoST: Contrastive Learning of Disentangled Seasonal-Trend
Representations for Time Series Forecasting
- Authors: Gerald Woo, Chenghao Liu, Doyen Sahoo, Akshat Kumar, Steven Hoi
- Abstract summary: We propose a new time series representation learning framework named CoST.
CoST applies contrastive learning methods to learn disentangled seasonal-trend representations.
Experiments on real-world datasets show that CoST consistently outperforms the state-of-the-art methods.
- Score: 35.76867542099019
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning has been actively studied for time series forecasting, and the
mainstream paradigm is based on the end-to-end training of neural network
architectures, ranging from classical LSTM/RNNs to more recent TCNs and
Transformers. Motivated by the recent success of representation learning in
computer vision and natural language processing, we argue that a more promising
paradigm for time series forecasting is to first learn disentangled feature
representations, followed by a simple regression fine-tuning step -- we justify
such a paradigm from a causal perspective. Following this principle, we propose
a new time series representation learning framework named CoST, which applies
contrastive learning methods to learn disentangled
seasonal-trend representations. CoST comprises both time domain and frequency
domain contrastive losses to learn discriminative trend and seasonal
representations, respectively. Extensive experiments on real-world datasets
show that CoST consistently outperforms the state-of-the-art methods by a
considerable margin, achieving a 21.3% improvement in MSE on multivariate
benchmarks. It is also robust to various choices of backbone encoders, as well
as downstream regressors.
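To make the two-part objective concrete, here is a minimal PyTorch sketch of contrasting trend representations in the time domain and seasonal representations (FFT amplitudes) in the frequency domain. The backbone, heads, augmentation, and all sizes are illustrative assumptions, not the paper's architecture.
```python
# A minimal sketch of a CoST-style pretraining objective (not the authors'
# code): two augmented views per series, a trend head trained with a
# time-domain InfoNCE loss and a seasonal head trained with a contrastive
# loss on FFT amplitudes. All shapes and layer sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE over a batch: matching rows of z1/z2 are the positives."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(z1.size(0))
    return F.cross_entropy(logits, targets)

class SeasonalTrendEncoder(nn.Module):
    def __init__(self, in_dim=1, hid=64):
        super().__init__()
        self.backbone = nn.Conv1d(in_dim, hid, kernel_size=3, padding=1)
        self.trend_head = nn.Conv1d(hid, hid, kernel_size=3, padding=1)
        self.season_head = nn.Conv1d(hid, hid, kernel_size=1)

    def forward(self, x):                        # x: (B, C, T)
        h = torch.relu(self.backbone(x))
        trend = self.trend_head(h).mean(dim=-1)  # (B, hid) time-domain rep
        # Frequency-domain rep: FFT of the seasonal features, amplitudes only.
        season = torch.fft.rfft(self.season_head(h), dim=-1).abs().mean(dim=-1)
        return trend, season

encoder = SeasonalTrendEncoder()
x = torch.randn(8, 1, 96)                        # a batch of toy series
view1 = x + 0.1 * torch.randn_like(x)            # jitter augmentation (assumed)
view2 = x + 0.1 * torch.randn_like(x)
t1, s1 = encoder(view1)
t2, s2 = encoder(view2)
loss = info_nce(t1, t2) + info_nce(s1, s2)       # time + frequency losses
loss.backward()
```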
Related papers
- PDETime: Rethinking Long-Term Multivariate Time Series Forecasting from
the perspective of partial differential equations [49.80959046861793]
We present PDETime, a novel LMTF model inspired by the principles of Neural PDE solvers.
Our experiments across seven diverse temporal real-world LMTF datasets reveal that PDETime adapts effectively to the intrinsic nature of the data.
arXiv Detail & Related papers (2024-02-25T17:39:44Z)
- Multi-Patch Prediction: Adapting LLMs for Time Series Representation Learning [22.28251586213348]
aLLM4TS is an innovative framework that adapts Large Language Models (LLMs) for time-series representation learning.
A distinctive element of our framework is the patch-wise decoding layer, which departs from previous methods reliant on sequence-level decoding.
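As a sketch of that distinction (all dimensions are assumptions), a sequence-level decoder flattens every patch embedding into one vector before predicting, while a patch-wise decoder applies one shared head to each patch independently:
```python
# Sketch contrasting sequence-level decoding with patch-wise decoding.
# Dimensions are illustrative assumptions, not the paper's configuration.
import torch
import torch.nn as nn

B, N, D, P = 4, 12, 128, 16   # batch, patches, embed dim, points per patch

patch_emb = torch.randn(B, N, D)              # e.g. features from a backbone

# Sequence-level decoding: flatten everything, predict the horizon at once.
seq_decoder = nn.Linear(N * D, N * P)
y_seq = seq_decoder(patch_emb.reshape(B, N * D)).reshape(B, N, P)

# Patch-wise decoding: one shared head maps each patch embedding to its own
# patch of outputs, so no parameter is tied to an absolute patch position.
patch_decoder = nn.Linear(D, P)
y_patch = patch_decoder(patch_emb)            # (B, N, P), applied per patch
```
A design note: the shared patch-wise head needs only D x P parameters, versus (N*D) x (N*P) for the flattened sequence-level decoder.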
arXiv Detail & Related papers (2024-02-07T13:51:26Z)
- TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling [67.02157180089573]
Time series pre-training has recently garnered wide attention for its potential to reduce labeling expenses and benefit various downstream tasks.
This paper proposes TimeSiam, a simple but effective self-supervised pre-training framework for time series based on Siamese networks.
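The summary names only the Siamese ingredient, so the following is a generic Siamese pre-training sketch rather than TimeSiam's actual recipe: two random crops of the same series pass through a shared encoder and are pulled together. The encoder, crop strategy, and loss are assumptions.
```python
# Generic Siamese pre-training for time series, as a sketch only: two
# subseries cropped from the same series form a positive pair and are
# pulled together with a cosine loss through one shared-weight encoder.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(
    nn.Conv1d(1, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),      # (B, 64) series embedding
)

def random_crop(x, length=48):
    start = torch.randint(0, x.size(-1) - length + 1, (1,)).item()
    return x[..., start:start + length]

x = torch.randn(8, 1, 96)
z1 = encoder(random_crop(x))                    # both views share weights
z2 = encoder(random_crop(x))
loss = -F.cosine_similarity(z1, z2.detach(), dim=-1).mean()  # stop-gradient
loss.backward()
```
A complete recipe would typically add an asymmetric predictor head on one branch to prevent representational collapse.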
arXiv Detail & Related papers (2024-02-04T13:10:51Z)
- Distillation Enhanced Time Series Forecasting Network with Momentum Contrastive Learning [7.4106801792345705]
We propose DE-TSMCL, an innovative distillation-enhanced framework for long sequence time series forecasting.
Specifically, we design a learnable data augmentation mechanism which adaptively learns whether to mask a timestamp.
Then, we propose a contrastive learning task with momentum update to explore inter-sample and intra-temporal correlations of time series.
By combining losses from multiple tasks, we learn effective representations for the downstream forecasting task.
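As a sketch of those two ingredients, with shapes and estimator choices as assumptions: a per-timestamp mask kept differentiable via the straight-through Gumbel-softmax, contrasted against a momentum-updated target encoder.
```python
# Sketch of a learnable per-timestamp mask (sampled with Gumbel-softmax so
# the mask/keep decision stays differentiable) plus a momentum-updated
# target encoder for the contrastive branch. Shapes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

T = 96
mask_logits = nn.Parameter(torch.zeros(T, 2))   # per-timestamp keep/mask logits

def learnable_mask(x):                           # x: (B, C, T)
    # Hard 0/1 decision per timestamp, differentiable via the
    # straight-through Gumbel-softmax estimator.
    keep = F.gumbel_softmax(mask_logits, tau=1.0, hard=True)[:, 0]  # (T,)
    return x * keep                              # broadcast over batch/channels

online = nn.Conv1d(1, 64, 3, padding=1)
target = nn.Conv1d(1, 64, 3, padding=1)
target.load_state_dict(online.state_dict())

@torch.no_grad()
def momentum_update(m=0.99):
    # Target weights track the online encoder as an exponential moving average.
    for p_o, p_t in zip(online.parameters(), target.parameters()):
        p_t.mul_(m).add_((1 - m) * p_o)

x = torch.randn(8, 1, T)
z_online = online(learnable_mask(x)).mean(dim=-1)          # (8, 64)
with torch.no_grad():
    z_target = target(x).mean(dim=-1)
loss = -F.cosine_similarity(z_online, z_target, dim=-1).mean()
loss.backward()
momentum_update()
```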
arXiv Detail & Related papers (2024-01-31T12:52:10Z)
- CLeaRForecast: Contrastive Learning of High-Purity Representations for Time Series Forecasting [2.5816901096123863]
Time series forecasting (TSF) holds significant importance in modern society, spanning numerous domains.
Previous representation learning-based TSF algorithms typically embrace a contrastive learning paradigm featuring segregated trend-periodicity representations.
We propose CLeaRForecast, a novel contrastive learning framework that learns high-purity time series representations through proposed sample, feature, and architecture purifying methods.
arXiv Detail & Related papers (2023-12-10T04:37:43Z)
- Time-LLM: Time Series Forecasting by Reprogramming Large Language Models [110.20279343734548]
Time series forecasting holds significant importance in many real-world dynamic systems.
We present Time-LLM, a reprogramming framework to repurpose large language models for time series forecasting.
Time-LLM is a powerful time series learner that outperforms state-of-the-art, specialized forecasting models.
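The summary gives no internals, so the sketch below only illustrates the general frozen-backbone pattern that such reprogramming builds on: the language backbone (a stand-in Transformer here, not an actual LLM) stays fixed while a small input projection and forecasting head are trained.
```python
# Generic "repurpose a frozen LM" sketch: freeze the backbone, train only
# an input projection and an output head. The backbone is a stand-in
# module, not a real language model; all sizes are assumptions.
import torch
import torch.nn as nn

d_model, horizon = 256, 24
backbone = nn.TransformerEncoder(                 # stand-in for a frozen LLM
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
    num_layers=2,
)
for p in backbone.parameters():
    p.requires_grad = False                       # keep the backbone fixed

to_lm_space = nn.Linear(16, d_model)              # map 16-step patches in
head = nn.Linear(d_model, horizon)                # map pooled features out

x = torch.randn(4, 6, 16)                         # (batch, patches, patch len)
h = backbone(to_lm_space(x))                      # frozen forward pass
forecast = head(h.mean(dim=1))                    # (4, horizon)
```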
arXiv Detail & Related papers (2023-10-03T01:31:25Z)
- TACTiS-2: Better, Faster, Simpler Attentional Copulas for Multivariate Time Series [57.4208255711412]
Building on copula theory, we propose a simplified objective for the recently-introduced transformer-based attentional copulas (TACTiS).
We show that the resulting model has significantly better training dynamics and achieves state-of-the-art performance across diverse real-world forecasting tasks.
arXiv Detail & Related papers (2023-10-02T16:45:19Z)
- MPR-Net: Multi-Scale Pattern Reproduction Guided Universality Time Series Interpretable Forecasting [13.790498420659636]
Time series forecasting has received wide interest in existing research due to its broad applications and inherent challenges.
This paper proposes a forecasting model, MPR-Net. It first adaptively decomposes multi-scale historical series patterns using a convolution operation, then constructs a pattern extension forecasting method based on the prior knowledge of pattern reproduction, and finally reconstructs future patterns into the future series using a deconvolution operation.
By leveraging the temporal dependencies present in the time series, MPR-Net not only achieves linear time complexity, but also makes the forecasting process interpretable.
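A minimal sketch of that decompose-extend-reconstruct pipeline follows; every kernel size and channel count, and the naive repeat-forward extension step, are assumptions rather than MPR-Net's actual design.
```python
# Sketch of the decompose-then-reconstruct pipeline described above:
# convolutions extract patterns at two scales, a naive "extension" repeats
# the last pattern activations forward, and transposed convolutions map
# the extended patterns back into a future series.
import torch
import torch.nn as nn

class ConvDeconvForecaster(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        # Multi-scale decomposition: small and large receptive fields.
        self.dec_small = nn.Conv1d(1, channels, kernel_size=4, stride=4)
        self.dec_large = nn.Conv1d(1, channels, kernel_size=8, stride=8)
        # Reconstruction: deconvolutions invert the two downsamplings.
        self.rec_small = nn.ConvTranspose1d(channels, 1, kernel_size=4, stride=4)
        self.rec_large = nn.ConvTranspose1d(channels, 1, kernel_size=8, stride=8)

    def forward(self, x):                    # x: (B, 1, T), T divisible by 8
        p_small, p_large = self.dec_small(x), self.dec_large(x)
        # Stand-in for the paper's pattern extension: repeat the last
        # pattern activation forward in time to cover the forecast horizon.
        p_small = torch.cat([p_small, p_small[..., -1:].expand_as(p_small)], -1)
        p_large = torch.cat([p_large, p_large[..., -1:].expand_as(p_large)], -1)
        return self.rec_small(p_small) + self.rec_large(p_large)  # (B, 1, 2T)

y = ConvDeconvForecaster()(torch.randn(2, 1, 96))   # -> (2, 1, 192)
```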
arXiv Detail & Related papers (2023-07-13T13:16:01Z)
- OpenSTL: A Comprehensive Benchmark of Spatio-Temporal Predictive Learning [67.07363529640784]
We propose OpenSTL to categorize prevalent approaches into recurrent-based and recurrent-free models.
We conduct standard evaluations on datasets across various domains, including synthetic moving object trajectory, human motion, driving scenes, traffic flow and forecasting weather.
We find that recurrent-free models achieve a better balance between efficiency and performance than recurrent models.
arXiv Detail & Related papers (2023-06-20T03:02:14Z)
- Two Steps Forward and One Behind: Rethinking Time Series Forecasting with Deep Learning [7.967995669387532]
The Transformer is a highly successful deep learning model that has revolutionised the world of artificial neural networks.
We investigate the effectiveness of Transformer-based models applied to the domain of time series forecasting.
We propose a set of alternative models that are better performing and significantly less complex.
arXiv Detail & Related papers (2023-04-10T12:47:42Z)
- Interpretable Time-series Representation Learning With Multi-Level Disentanglement [56.38489708031278]
Disentangle Time Series (DTS) is a novel disentanglement enhancement framework for sequential data.
DTS generates hierarchical semantic concepts as the interpretable and disentangled representation of time-series.
DTS achieves superior performance in downstream applications, with high interpretability of semantic concepts.
arXiv Detail & Related papers (2021-05-17T22:02:24Z)