CLeaRForecast: Contrastive Learning of High-Purity Representations for Time Series Forecasting
- URL: http://arxiv.org/abs/2312.05758v1
- Date: Sun, 10 Dec 2023 04:37:43 GMT
- Title: CLeaRForecast: Contrastive Learning of High-Purity Representations for Time Series Forecasting
- Authors: Jiaxin Gao, Yuxiao Hu, Qinglong Cao, Siqi Dai, Yuntian Chen
- Abstract summary: Time series forecasting (TSF) holds significant importance in modern society, spanning numerous domains.
Previous representation learning-based TSF algorithms typically embrace a contrastive learning paradigm featuring segregated trend-periodicity representations.
We propose CLeaRForecast, a novel contrastive learning framework to learn high-purity time series representations with proposed sample, feature, and architecture purifying methods.
- Score: 2.5816901096123863
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Time series forecasting (TSF) holds significant importance in modern society, spanning numerous domains. Previous representation-learning-based TSF algorithms typically embrace a contrastive learning paradigm featuring segregated trend-periodicity representations. Yet these methodologies disregard the inherent high-impact noise embedded in time series data, producing inaccurate representations and severely degrading forecasting performance. To address this issue, we propose CLeaRForecast, a novel contrastive learning framework that learns high-purity time series representations through proposed sample, feature, and architecture purifying methods. More specifically, to avoid introducing additional noise through transformations of the original samples (series), transformations are applied separately to the trend and periodic parts, providing better positive samples with markedly less noise. Moreover, we introduce a channel-independent training scheme to mitigate noise originating from unrelated variables in multivariate series. By employing a streamlined deep-learning backbone and a comprehensive global contrastive loss function, we prevent noise introduced by redundant or uneven learning of periodicity and trend. Experimental results show the superior performance of CLeaRForecast on various downstream TSF tasks.
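The abstract gives no implementation details, but the purifying pipeline it describes can be sketched. The decomposition method, augmentation strengths, and function names below are illustrative assumptions, not the authors' code: a moving average stands in for the trend-periodicity split, the two parts receive separate mild augmentations, and whole-series embeddings are trained with a global InfoNCE-style loss.

```python
import torch
import torch.nn.functional as F

def decompose(x, kernel=25):
    """Split a batch of series x (B, T) into trend and periodic parts.
    A centered moving average is an assumption here; the abstract does
    not specify the paper's exact decomposition."""
    pad = kernel // 2
    trend = F.avg_pool1d(F.pad(x.unsqueeze(1), (pad, pad), mode="replicate"),
                         kernel_size=kernel, stride=1).squeeze(1)
    return trend, x - trend

def purified_positive(x):
    """Build a positive view by augmenting trend and periodic parts
    separately, keeping each transformation mild for its component."""
    trend, periodic = decompose(x)
    trend = trend * (1.0 + 0.05 * torch.randn(x.size(0), 1))                  # gentle rescaling
    periodic = periodic + 0.05 * periodic.std() * torch.randn_like(periodic)  # light jitter
    return trend + periodic

def global_contrastive_loss(z1, z2, tau=0.1):
    """InfoNCE over whole-series embeddings: each series must match its
    own augmented view against every other series in the batch."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau                                      # (B, B) similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)
```

Channel-independent training, on this reading, would amount to flattening a multivariate batch of shape (B, C, T) into B*C univariate samples before applying the steps above, so that unrelated variables never mix inside one representation.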
Related papers
- Frequency-Masked Embedding Inference: A Non-Contrastive Approach for Time Series Representation Learning [0.38366697175402226]
This paper introduces Frequency-masked Embedding Inference (FEI), a novel non-contrastive method that completely eliminates the need for positive and negative samples.
FEI significantly outperforms existing contrastive-based methods in terms of generalization.
This study provides new insights into self-supervised representation learning for time series.
arXiv Detail & Related papers (2024-12-30T08:12:17Z)
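The summary above only names the mechanism; as a generic illustration of masking a series in the frequency domain (an assumption about the flavor of the operation, not FEI's actual procedure or inference targets):

```python
import numpy as np

def frequency_mask(x, mask_ratio=0.2, seed=0):
    """Zero out a random subset of frequency bins of a 1-D series x.
    Illustrative only: FEI's masking strategy and inference targets
    are defined in the paper, not in this summary."""
    rng = np.random.default_rng(seed)
    spec = np.fft.rfft(x)                       # one-sided spectrum of a real series
    drop = rng.choice(spec.shape[0], size=int(mask_ratio * spec.shape[0]),
                      replace=False)
    spec[drop] = 0.0                            # mask the selected bins
    return np.fft.irfft(spec, n=x.shape[0])     # back to the time domain
```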
- MuSiCNet: A Gradual Coarse-to-Fine Framework for Irregularly Sampled Multivariate Time Series Analysis [45.34420094525063]
We introduce a novel perspective: irregularity is, in certain senses, essentially relative.
MuSiCNet is an ISMTS analysis framework that is consistently competitive with SOTA across three mainstream tasks.
arXiv Detail & Related papers (2024-12-02T02:50:01Z)
- TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling [67.02157180089573]
Time series pre-training has recently garnered wide attention for its potential to reduce labeling expenses and benefit various downstream tasks.
This paper proposes TimeSiam as a simple but effective self-supervised pre-training framework for Time series based on Siamese networks.
arXiv Detail & Related papers (2024-02-04T13:10:51Z)
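As a rough sketch of Siamese-style time series pre-training (a minimal version with hypothetical sizes; the paper's actual sampling scheme and objectives may differ): two subseries of the same series pass through a shared encoder and their embeddings are pulled together.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiamesePretrainer(nn.Module):
    """Shared encoder over two subseries of one series; the loss pulls
    their embeddings together. A hypothetical stand-in, not TimeSiam."""
    def __init__(self, t_sub=96, d_model=128):
        super().__init__()
        self.t_sub = t_sub
        self.encoder = nn.Sequential(nn.Linear(t_sub, d_model), nn.GELU(),
                                     nn.Linear(d_model, d_model))

    def forward(self, x):                          # x: (B, T), assumes T > t_sub
        i, j = torch.randint(0, x.size(1) - self.t_sub, (2,)).tolist()
        z1 = self.encoder(x[:, i:i + self.t_sub])  # one randomly placed view
        z2 = self.encoder(x[:, j:j + self.t_sub])  # a second view of the same series
        return 1.0 - F.cosine_similarity(z1, z2, dim=-1).mean()
```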
- Explaining Time Series via Contrastive and Locally Sparse Perturbations [45.055327583283315]
ContraLSP is a sparse model that introduces counterfactual samples to build uninformative perturbations while keeping them in-distribution through contrastive learning.
Empirical studies on both synthetic and real-world datasets show that ContraLSP outperforms state-of-the-art models.
arXiv Detail & Related papers (2024-01-16T18:27:37Z)
- U-Mixer: An Unet-Mixer Architecture with Stationarity Correction for Time Series Forecasting [11.55346291812749]
Non-stationarity in time series forecasting obstructs stable feature propagation through deep layers, disrupts feature distributions, and complicates learning data distribution changes.
We propose U-Mixer, which captures local temporal dependencies between different patches and channels separately.
We show that U-Mixer achieves 14.5% and 7.7% improvements over state-of-the-art (SOTA) methods.
arXiv Detail & Related papers (2024-01-04T12:41:40Z)
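The blurb mentions capturing dependencies between patches and channels separately; a generic Mixer-style block conveys that idea. This is a guess at the flavor, with made-up names; it is not U-Mixer's architecture, and it omits the Unet skips and stationarity correction entirely.

```python
import torch
import torch.nn as nn

class PatchChannelMixer(nn.Module):
    """Mix along the patch axis and the channel axis with separate MLPs.
    Hypothetical sketch; U-Mixer's actual block differs."""
    def __init__(self, n_patch, n_channel):
        super().__init__()
        self.patch_mlp = nn.Sequential(nn.Linear(n_patch, n_patch), nn.GELU())
        self.channel_mlp = nn.Sequential(nn.Linear(n_channel, n_channel), nn.GELU())

    def forward(self, x):                    # x: (B, n_channel, n_patch)
        x = x + self.patch_mlp(x)            # dependencies across patches
        x = x + self.channel_mlp(x.transpose(1, 2)).transpose(1, 2)  # across channels
        return x
```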
- One More Step: A Versatile Plug-and-Play Module for Rectifying Diffusion Schedule Flaws and Enhancing Low-Frequency Controls [77.42510898755037]
One More Step (OMS) is a compact network that incorporates an additional simple yet effective step during inference.
OMS elevates image fidelity and harmonizes the dichotomy between training and inference, while preserving original model parameters.
Once trained, various pre-trained diffusion models with the same latent domain can share the same OMS module.
arXiv Detail & Related papers (2023-11-27T12:02:42Z)
- State Sequences Prediction via Fourier Transform for Representation Learning [111.82376793413746]
We propose State Sequences Prediction via Fourier Transform (SPF), a novel method for learning expressive representations efficiently.
We theoretically analyze the existence of structural information in state sequences, which is closely related to policy performance and signal regularity.
Experiments demonstrate that the proposed method outperforms several state-of-the-art algorithms in terms of both sample efficiency and performance.
arXiv Detail & Related papers (2023-10-24T14:47:02Z)
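The summarized idea is to predict state sequences in the frequency domain; one plausible form of such a training target (our assumption, not the paper's exact formulation) keeps only the low-frequency FFT coefficients of a state sequence:

```python
import numpy as np

def fourier_target(states, k=8):
    """Compress a state sequence (T, d) into its k lowest-frequency FFT
    coefficients per dimension, stacked as real values -- a hypothetical
    regression target for 'state sequence prediction via Fourier transform'."""
    spec = np.fft.rfft(states, axis=0)[:k]           # (k, d) complex coefficients
    return np.concatenate([spec.real, spec.imag])    # (2k, d) real-valued target
```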
- Generative Time Series Forecasting with Diffusion, Denoise, and Disentanglement [51.55157852647306]
Time series forecasting has been a widely explored task of great importance in many applications.
Real-world time series are often recorded over short periods, leaving a large gap between what deep models need and the limited, noisy data available.
We propose to address the time series forecasting problem with generative modeling, via a bidirectional variational auto-encoder equipped with diffusion, denoise, and disentanglement.
arXiv Detail & Related papers (2023-01-08T12:20:46Z)
- Multi-scale Attention Flow for Probabilistic Time Series Forecasting [68.20798558048678]
We propose a novel non-autoregressive deep learning model called Multi-scale Attention Normalizing Flow (MANF).
Our model avoids the influence of cumulative error and does not increase the time complexity.
Our model achieves state-of-the-art performance on many popular multivariate datasets.
arXiv Detail & Related papers (2022-05-16T07:53:42Z)
- CoST: Contrastive Learning of Disentangled Seasonal-Trend Representations for Time Series Forecasting [35.76867542099019]
We propose a new time series representation learning framework named CoST.
CoST applies contrastive learning methods to learn disentangled seasonal-trend representations.
Experiments on real-world datasets show that CoST consistently outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-02-03T13:17:38Z)
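A simplified reading of "disentangled seasonal-trend contrastive learning", assuming the embedding is split into a trend half and a seasonal half with an InfoNCE loss applied to each separately (CoST's actual objectives, e.g. its frequency-domain loss, are more involved):

```python
import torch
import torch.nn.functional as F

def disentangled_contrastive_loss(z1, z2, tau=0.1):
    """Apply InfoNCE independently to the trend half and the seasonal
    half of each embedding -- a simplified stand-in for component-wise
    contrastive objectives."""
    d = z1.size(-1) // 2
    loss = 0.0
    for a, b in [(z1[:, :d], z2[:, :d]), (z1[:, d:], z2[:, d:])]:
        a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
        logits = a @ b.t() / tau
        labels = torch.arange(a.size(0), device=a.device)
        loss = loss + F.cross_entropy(logits, labels)
    return loss / 2
```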
- Interpretable Time-series Representation Learning With Multi-Level Disentanglement [56.38489708031278]
Disentangle Time Series (DTS) is a novel disentanglement enhancement framework for sequential data.
DTS generates hierarchical semantic concepts as the interpretable and disentangled representation of time-series.
DTS achieves superior performance in downstream applications, with high interpretability of semantic concepts.
arXiv Detail & Related papers (2021-05-17T22:02:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.