AdaRNN: Adaptive Learning and Forecasting of Time Series
- URL: http://arxiv.org/abs/2108.04443v2
- Date: Wed, 11 Aug 2021 01:30:35 GMT
- Title: AdaRNN: Adaptive Learning and Forecasting of Time Series
- Authors: Yuntao Du, Jindong Wang, Wenjie Feng, Sinno Pan, Tao Qin, Renjun Xu,
Chongjun Wang
- Abstract summary: Time series data are widely used in the real world and are known to be difficult to forecast.
This paper proposes Adaptive RNNs (AdaRNN) to tackle the problem by building an adaptive model that generalizes well on unseen test data.
Experiments on human activity recognition, air quality prediction, and financial analysis show that AdaRNN outperforms the latest methods by 2.6% in classification accuracy and significantly reduces RMSE by 9.0%.
- Score: 39.63457842611036
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Time series data are widely used in the real world and are known
to be difficult to forecast. Since their statistical properties change over
time, their distribution also changes temporally, which causes a severe
distribution shift problem for existing methods. However, modeling time series
from the distribution perspective remains unexplored. In this paper, we term
this problem Temporal Covariate Shift (TCS). This paper proposes Adaptive RNNs
(AdaRNN) to tackle the TCS problem by building an adaptive model that
generalizes well on unseen test data. AdaRNN is sequentially composed of two
novel algorithms. First, we propose Temporal Distribution Characterization to
better characterize the distribution information in the time series. Second,
we propose Temporal Distribution Matching to reduce the distribution mismatch
in the time series and learn an adaptive model. AdaRNN is a general framework
into which flexible distribution distances can be integrated. Experiments on
human activity recognition, air quality prediction, and financial analysis
show that AdaRNN outperforms the latest methods by 2.6% in classification
accuracy and significantly reduces RMSE by 9.0%. We also show that the
temporal distribution matching algorithm can be extended to the Transformer
architecture to boost its performance.
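To make the distribution-matching idea concrete, the sketch below measures the mismatch between contiguous periods of a series using a maximum mean discrepancy (MMD) distance, one of the flexible distribution distances such a framework could integrate. This is a minimal, hypothetical illustration, not the paper's implementation; the function names `rbf_mmd2` and `temporal_mismatch` are invented for this example.

```python
import numpy as np

def rbf_mmd2(x, y, sigma=1.0):
    """Biased estimate of squared MMD between samples x and y (RBF kernel)."""
    def gram(a, b):
        sq = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2.0 * sigma ** 2))
    return gram(x, x).mean() + gram(y, y).mean() - 2.0 * gram(x, y).mean()

def temporal_mismatch(series, n_periods=3, sigma=1.0):
    """Average pairwise squared MMD between contiguous periods of a (T, d)
    series: a rough proxy for the cross-period distribution mismatch that a
    temporal distribution matching objective would penalize during training."""
    periods = np.array_split(series, n_periods, axis=0)
    dists = [rbf_mmd2(p, q, sigma)
             for i, p in enumerate(periods)
             for q in periods[i + 1:]]
    return float(np.mean(dists))
```

A series whose mean drifts over time yields a larger mismatch than a stationary one; the paper's actual procedure additionally characterizes how to split the series into periods and applies the matching loss to the model's hidden representations rather than raw values.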
Related papers
- Distribution estimation and change-point detection for time series via DNN-based GANs [0.0]
Generative adversarial networks (GANs) have recently been applied to estimating the distribution of independent and identically distributed data.
In this paper, we use the blocking technique to demonstrate the effectiveness of GANs for estimating the distribution of stationary time series.
arXiv Detail & Related papers (2022-11-26T14:33:34Z)
- Distributional Drift Adaptation with Temporal Conditional Variational Autoencoder for Multivariate Time Series Forecasting [41.206310481507565]
We propose a novel framework, the temporal conditional variational autoencoder (TCVAE), to model the dynamic distributional dependencies over time.
The TCVAE infers the dependencies as a temporal conditional distribution to leverage latent variables.
We show the TCVAE's superior robustness and effectiveness over the state-of-the-art MTS forecasting baselines.
arXiv Detail & Related papers (2022-09-01T10:06:22Z)
- Conformal Inference for Online Prediction with Arbitrary Distribution Shifts [1.2277343096128712]
We consider the problem of forming prediction sets in an online setting where the distribution generating the data is allowed to vary over time.
We develop a novel procedure with provably small regret over all local time intervals of a given width.
We test our techniques on two real-world datasets aimed at predicting stock market volatility and COVID-19 case counts.
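For intuition only: this line of work builds on adaptive conformal inference, whose basic recursion adjusts the effective miscoverage level after each observation. The step below is the standard adaptive update, a sketch of the underlying idea rather than the paper's exact procedure (which adds regret guarantees over all local time intervals).

```python
def aci_step(alpha_t, miscovered, target=0.1, gamma=0.01):
    """One adaptive conformal inference update: after a covered step the
    effective miscoverage level rises (future prediction sets narrow), and
    after a miscovered step it falls (future sets widen)."""
    err = 1.0 if miscovered else 0.0
    return alpha_t + gamma * (target - err)
```

Iterating this update keeps long-run empirical coverage near the target even when the data distribution drifts, since sustained miscoverage steadily widens the sets.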
arXiv Detail & Related papers (2022-08-17T16:51:12Z)
- Distributionally Robust Models with Parametric Likelihood Ratios [123.05074253513935]
Three simple ideas allow us to train models with DRO using a broader class of parametric likelihood ratios.
We find that models trained with the resulting parametric adversaries are consistently more robust to subpopulation shifts when compared to other DRO approaches.
arXiv Detail & Related papers (2022-04-13T12:43:12Z)
- ES-dRNN: A Hybrid Exponential Smoothing and Dilated Recurrent Neural Network Model for Short-Term Load Forecasting [1.4502611532302039]
Short-term load forecasting (STLF) is challenging due to the complex nature of the time series (TS).
This paper proposes a novel hybrid hierarchical deep learning model that handles multiple seasonalities.
It combines exponential smoothing (ES) and a recurrent neural network (RNN).
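As a minimal sketch of the ES side of such a hybrid (not the ES-dRNN model itself, which is hierarchical and handles multiple seasonalities), simple exponential smoothing produces a level series that a downstream RNN could use for normalization or residual modeling:

```python
def exp_smooth(series, alpha=0.3):
    """Simple exponential smoothing: each output is a convex combination of
    the newest observation and the previous smoothed level."""
    level = series[0]
    smoothed = [level]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
        smoothed.append(level)
    return smoothed
```

A larger `alpha` tracks recent observations more closely; a smaller one yields a smoother, slower-moving level.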
arXiv Detail & Related papers (2021-12-05T19:38:42Z)
- Distribution Mismatch Correction for Improved Robustness in Deep Neural Networks [86.42889611784855]
Normalization methods can increase a network's vulnerability to noise and input corruptions.
We propose an unsupervised non-parametric distribution correction method that adapts the activation distribution of each layer.
In our experiments, we empirically show that the proposed method effectively reduces the impact of intense image corruptions.
arXiv Detail & Related papers (2021-10-05T11:36:25Z)
- Accuracy on the Line: On the Strong Correlation Between Out-of-Distribution and In-Distribution Generalization [89.73665256847858]
We show that out-of-distribution performance is strongly correlated with in-distribution performance for a wide range of models and distribution shifts.
Specifically, we demonstrate strong correlations between in-distribution and out-of-distribution performance on variants of CIFAR-10 & ImageNet.
We also investigate cases where the correlation is weaker, for instance some synthetic distribution shifts from CIFAR-10-C and the tissue classification dataset Camelyon17-WILDS.
arXiv Detail & Related papers (2021-07-09T19:48:23Z)
- Predicting with Confidence on Unseen Distributions [90.68414180153897]
We connect domain adaptation and predictive uncertainty literature to predict model accuracy on challenging unseen distributions.
We find that the difference of confidences (DoC) of a classifier's predictions successfully estimates the classifier's performance change over a variety of shifts.
We specifically investigate the distinction between synthetic and natural distribution shifts and observe that despite its simplicity DoC consistently outperforms other quantifications of distributional difference.
arXiv Detail & Related papers (2021-07-07T15:50:18Z)
- Learning to Match Distributions for Domain Adaptation [116.14838935146004]
This paper proposes Learning to Match (L2M) to automatically learn the cross-domain distribution matching.
L2M reduces the inductive bias by using a meta-network to learn the distribution matching loss in a data-driven way.
Experiments on public datasets substantiate the superiority of L2M over SOTA methods.
arXiv Detail & Related papers (2020-07-17T03:26:13Z)
- Spatiotemporal Adaptive Neural Network for Long-term Forecasting of Financial Time Series [0.2793095554369281]
We investigate whether deep neural networks (DNNs) can be used to forecast multiple time series (TS) conjointly.
We make use of the dynamic factor graph (DFG) to build a multivariate autoregressive model.
With ACTM, it is possible to vary the autoregressive order of a TS model over time and model a larger set of probability distributions.
arXiv Detail & Related papers (2020-03-27T00:53:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all content) and is not responsible for any consequences of its use.