Time Series Forecasting via Learning Convolutionally Low-Rank Models
- URL: http://arxiv.org/abs/2104.11510v1
- Date: Fri, 23 Apr 2021 09:53:28 GMT
- Title: Time Series Forecasting via Learning Convolutionally Low-Rank Models
- Authors: Guangcan Liu
- Abstract summary: Recently, \citet{liu:arxiv:2019} studied the rather challenging problem of time series forecasting from the perspective of compressed sensing.
They proposed a no-learning method, named Convolution Nuclear Norm Minimization (CNNM), and proved that CNNM can exactly recover the future part of a series from its observed part.
This paper tries to approach the issues by integrating a learnable, orthonormal transformation into CNNM.
We prove that the resulting model, termed Learning-Based CNNM (LbCNNM), strictly succeeds in identifying the future part of a series, as long as the transform of the series is convolutionally low-rank.
- Score: 18.61160269442917
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently,~\citet{liu:arxiv:2019} studied the rather challenging problem of
time series forecasting from the perspective of compressed sensing. They
proposed a no-learning method, named Convolution Nuclear Norm Minimization
(CNNM), and proved that CNNM can exactly recover the future part of a series
from its observed part, provided that the series is convolutionally low-rank.
While impressive, the convolutional low-rankness condition may not be satisfied
whenever the series is far from being seasonal, and is in fact brittle to the
presence of trends and dynamics. This paper approaches these issues by
integrating a learnable, orthonormal transformation into CNNM, with the purpose
of converting series of involute structure into regular signals that are
convolutionally low-rank. We prove that the resulting model, termed
Learning-Based CNNM (LbCNNM), strictly succeeds in identifying the future part
of a series, as long as the transform of the series is convolutionally
low-rank. To learn proper transformations that may meet the required success
conditions, we devise an interpretable method based on Principal Component
Pursuit (PCP). Equipped with this learning method and some elaborate data
augmentation techniques, LbCNNM not only handles well the major components of
time series (including trends, seasonality, and dynamics), but can also make use
of the forecasts provided by other forecasting methods; this means LbCNNM
can be used as a general tool for model combination. Extensive experiments on
100,452 real-world time series from TSDL and M4 demonstrate the superior
performance of LbCNNM.
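To make the convolutional low-rankness condition concrete, here is a minimal sketch (not the authors' code) that builds the circular convolution matrix of a toy series and inspects its rank; the `circulant` helper, the toy series, and the rank tolerance are illustrative assumptions. A purely seasonal series gives a low-rank matrix, while adding a trend destroys that structure, which is exactly the brittleness the abstract describes.

```python
import numpy as np

def circulant(x):
    """n x n circular convolution matrix of a series x (columns = circular shifts)."""
    n = len(x)
    return np.column_stack([np.roll(x, k) for k in range(n)])

n = 64
t = np.arange(n)
seasonal = np.sin(2 * np.pi * 4 * t / n) + 0.5 * np.cos(2 * np.pi * 8 * t / n)

# Two sinusoids occupy only 4 DFT bins, so the circulant matrix has rank 4.
print(np.linalg.matrix_rank(circulant(seasonal), tol=1e-8))   # 4
print(np.linalg.norm(circulant(seasonal), "nuc"))             # nuclear norm of the matrix

# A linear trend spreads energy over all DFT bins: the rank jumps to n.
trended = seasonal + 0.05 * t
print(np.linalg.matrix_rank(circulant(trended), tol=1e-8))    # 64
```

CNNM minimizes the nuclear norm of this convolution matrix subject to agreement with the observed part of the series; what LbCNNM adds, per the abstract, is a learned orthonormal transform intended to map trended or dynamic series back into this low-rank regime.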
Related papers
- Blending Low and High-Level Semantics of Time Series for Better Masked Time Series Generation [0.8999666725996975]
We introduce a novel framework, termed NC-VQVAE, to integrate self-supervised learning into time series generation approaches.
Our experimental results demonstrate that NC-VQVAE results in a considerable improvement in the quality of synthetic samples.
arXiv Detail & Related papers (2024-08-29T15:20:17Z)
- Unsupervised Pre-training with Language-Vision Prompts for Low-Data Instance Segmentation [105.23631749213729]
We propose a novel method for unsupervised pre-training in low-data regimes.
Inspired by the recently successful prompting technique, we introduce a new method, Unsupervised Pre-training with Language-Vision Prompts.
We show that our method can converge faster and perform better than CNN-based models in low-data regimes.
arXiv Detail & Related papers (2024-05-22T06:48:43Z)
- Large Language Models Are Zero-Shot Time Series Forecasters [48.73953666153385]
By encoding time series as a string of numerical digits, we can frame time series forecasting as next-token prediction in text.
We find that large language models (LLMs) such as GPT-3 and LLaMA-2 can surprisingly zero-shot extrapolate time series at a level comparable to or exceeding the performance of purpose-built time series models trained on the downstream tasks.
arXiv Detail & Related papers (2023-10-11T19:01:28Z)
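As a concrete illustration of the digit-string framing in the entry above, here is a minimal sketch; the exact tokenization (fixed decimals, dropped decimal point, space-separated digits, comma delimiters) is an assumption in the spirit of the idea, not the paper's implementation.

```python
def encode_series(values, decimals=2):
    """Render a numeric series as digit tokens for next-token prediction."""
    tokens = []
    for v in values:
        digits = f"{v:.{decimals}f}".replace(".", "")  # drop the decimal point
        tokens.append(" ".join(digits))                # one token per digit
    return " , ".join(tokens)

print(encode_series([0.65, 0.72, 0.68]))
# "0 6 5 , 0 7 2 , 0 6 8" -- prompt an LLM with this string and decode its
# continuation back into numbers to obtain the forecast.
```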
- TACTiS-2: Better, Faster, Simpler Attentional Copulas for Multivariate Time Series [57.4208255711412]
Building on copula theory, we propose a simplified objective for the recently introduced transformer-based attentional copulas (TACTiS).
We show that the resulting model has significantly better training dynamics and achieves state-of-the-art performance across diverse real-world forecasting tasks.
arXiv Detail & Related papers (2023-10-02T16:45:19Z)
- A Novel Method Combines Moving Fronts, Data Decomposition and Deep Learning to Forecast Intricate Time Series [0.0]
Indian Summer Monsoon Rainfall (ISMR) is a very complex time series.
Conventional one-time decomposition techniques suffer from leakage of information from the future.
The Moving Front (MF) method is proposed to prevent this data leakage.
arXiv Detail & Related papers (2023-03-11T12:07:26Z)
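The leakage in the entry above arises because a single decomposition over the full series lets future values shape the components used at every time step. Below is a minimal expanding-window sketch of the leakage-free idea, not the paper's MF algorithm; the moving-average `decompose` stand-in, window sizes, and toy series are illustrative assumptions.

```python
import numpy as np

def decompose(history, window=12):
    """Toy stand-in for a real decomposition (EMD, wavelets, ...):
    a moving-average trend plus the residual, computed from `history` only."""
    trend = np.convolve(history, np.ones(window) / window, mode="same")
    return trend, history - trend

def moving_front_features(series, start=24):
    """Recompute the decomposition at every forecast origin from past data only,
    so no future values can leak into the features."""
    rows = []
    for front in range(start, len(series) + 1):
        history = series[:front]              # strictly up to the current front
        trend, resid = decompose(history)
        rows.append([trend[-1], resid[-1]])   # latest value of each component
    return np.asarray(rows)

series = np.sin(np.linspace(0, 12, 200)) + 0.01 * np.arange(200)
print(moving_front_features(series).shape)    # (177, 2)
```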
- Learning a Restricted Boltzmann Machine using biased Monte Carlo sampling [0.6554326244334867]
We show that sampling the equilibrium distribution via Markov Chain Monte Carlo can be dramatically accelerated using biased sampling techniques.
We also show that this sampling technique can be exploited to improve the computation of the log-likelihood gradient during training.
arXiv Detail & Related papers (2022-06-02T21:29:01Z)
- CoST: Contrastive Learning of Disentangled Seasonal-Trend Representations for Time Series Forecasting [35.76867542099019]
We propose a new time series representation learning framework named CoST.
CoST applies contrastive learning methods to learn disentangled seasonal-trend representations.
Experiments on real-world datasets show that CoST consistently outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-02-03T13:17:38Z)
- Exploring Complementary Strengths of Invariant and Equivariant Representations for Few-Shot Learning [96.75889543560497]
In many real-world problems, collecting a large number of labeled samples is infeasible.
Few-shot learning is the dominant approach to address this issue, where the objective is to quickly adapt to novel categories in the presence of a limited number of samples.
We propose a novel training mechanism that simultaneously enforces equivariance and invariance to a general set of geometric transformations.
arXiv Detail & Related papers (2021-03-01T21:14:33Z)
- MMCGAN: Generative Adversarial Network with Explicit Manifold Prior [78.58159882218378]
We propose to employ explicit manifold learning as a prior to alleviate mode collapse and stabilize GAN training.
Our experiments on both the toy data and real datasets show the effectiveness of MMCGAN in alleviating mode collapse, stabilizing training, and improving the quality of generated samples.
arXiv Detail & Related papers (2020-06-18T07:38:54Z)
- Convolutional Tensor-Train LSTM for Spatio-temporal Learning [116.24172387469994]
We propose a higher-order LSTM model that can efficiently learn long-term correlations in the video sequence.
This is accomplished through a novel tensor train module that performs prediction by combining convolutional features across time.
Our results achieve state-of-the-art performance in a wide range of applications and datasets.
arXiv Detail & Related papers (2020-02-21T05:00:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.