Time Series Prediction under Distribution Shift using Differentiable
Forgetting
- URL: http://arxiv.org/abs/2207.11486v1
- Date: Sat, 23 Jul 2022 10:32:37 GMT
- Title: Time Series Prediction under Distribution Shift using Differentiable
Forgetting
- Authors: Stefanos Bennett, Jase Clarkson
- Abstract summary: We frame time series prediction under distribution shift as a weighted empirical risk minimisation problem.
The weighting of previous observations in the empirical risk is determined by a forgetting mechanism.
We propose a gradient-based learning method for the parameters of the forgetting mechanism.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Time series prediction is often complicated by distribution shift which
demands adaptive models to accommodate time-varying distributions. We frame
time series prediction under distribution shift as a weighted empirical risk
minimisation problem. The weighting of previous observations in the empirical
risk is determined by a forgetting mechanism which controls the trade-off
between the relevancy and effective sample size that is used for the estimation
of the predictive model. In contrast to previous work, we propose a
gradient-based learning method for the parameters of the forgetting mechanism.
This speeds up optimisation and therefore allows more expressive forgetting
mechanisms.
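The weighted empirical risk minimisation framing above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it uses exponential forgetting weights, a closed-form weighted least-squares fit, and a central finite difference on a one-step-ahead validation loss to tune the forgetting parameter (the paper differentiates through the forgetting mechanism directly, which is what makes richer mechanisms tractable; all names and constants here are illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic series with a distribution shift: the regression coefficient
# jumps from 1.0 to 3.0 halfway through.
n = 200
X = rng.normal(size=(n, 1))
theta_true = np.where(np.arange(n) < n // 2, 1.0, 3.0)
y = theta_true * X[:, 0] + 0.1 * rng.normal(size=n)

def exp_weights(t, alpha):
    """Exponential forgetting: an observation k steps in the past gets
    weight proportional to alpha**k, normalised to sum to one."""
    w = alpha ** np.arange(t - 1, -1, -1.0)
    return w / w.sum()

def fit_weighted_ls(X, y, w):
    """Closed-form minimiser of the weighted empirical risk
    sum_t w_t * (y_t - x_t . theta)**2."""
    Xw = X * w[:, None]
    return np.linalg.solve(X.T @ Xw, Xw.T @ y)

def one_step_val_loss(alpha, X, y, n_val=20):
    """Mean squared one-step-ahead error over the last n_val steps,
    refitting the weighted estimator at each step."""
    losses = []
    for t in range(len(y) - n_val, len(y)):
        theta = fit_weighted_ls(X[:t], y[:t], exp_weights(t, alpha))
        losses.append((y[t] - X[t] @ theta) ** 2)
    return float(np.mean(losses))

# Gradient-based tuning of the forgetting parameter. The gradient here is
# a central finite difference on the validation loss; the paper instead
# backpropagates through the forgetting mechanism itself.
alpha, lr, eps = 0.99, 0.5, 1e-4
for _ in range(30):
    g = (one_step_val_loss(alpha + eps, X, y)
         - one_step_val_loss(alpha - eps, X, y)) / (2 * eps)
    alpha = float(np.clip(alpha - lr * g, 0.5, 0.999))

theta_forget = fit_weighted_ls(X, y, exp_weights(n, alpha))
theta_plain = fit_weighted_ls(X, y, np.full(n, 1.0 / n))  # unweighted ERM
```

With forgetting, the fitted coefficient tracks the post-shift value (3.0), whereas the unweighted fit averages the two regimes; a smaller forgetting parameter trades effective sample size for relevance, which is exactly the trade-off the mechanism controls.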
Related papers
- Predictive Churn with the Set of Good Models [64.05949860750235]
We study the effect of conflicting predictions over the set of near-optimal machine learning models.
We present theoretical results on the expected churn between models within the Rashomon set.
We show how our approach can be used to better anticipate, reduce, and avoid churn in consumer-facing applications.
arXiv Detail & Related papers (2024-02-12T16:15:25Z) - Performative Time-Series Forecasting [71.18553214204978]
We formalize performative time-series forecasting (PeTS) from a machine-learning perspective.
We propose a novel approach, Feature Performative-Shifting (FPS), which leverages the concept of delayed response to anticipate distribution shifts.
We conduct comprehensive experiments using multiple time-series models on COVID-19 and traffic forecasting tasks.
arXiv Detail & Related papers (2023-10-09T18:34:29Z) - Causality-oriented robustness: exploiting general additive interventions [3.871660145364189]
In this paper, we focus on causality-oriented robustness and propose Distributional Robustness via Invariant Gradients (DRIG).
In a linear setting, we prove that DRIG yields predictions that are robust among a data-dependent class of distribution shifts.
We extend our approach to the semi-supervised domain adaptation setting to further improve prediction performance.
arXiv Detail & Related papers (2023-07-18T16:22:50Z) - Sinkhorn-Flow: Predicting Probability Mass Flow in Dynamical Systems
Using Optimal Transport [89.61692654941106]
We propose a new approach to predicting such mass flow over time using optimal transport.
We apply our approach to the task of predicting how communities will evolve over time in social network settings.
arXiv Detail & Related papers (2023-03-14T07:25:44Z) - Regularized Vector Quantization for Tokenized Image Synthesis [126.96880843754066]
Quantizing images into discrete representations has been a fundamental problem in unified generative modeling.
Deterministic quantization suffers from severe codebook collapse and misalignment with the inference stage, while stochastic quantization suffers from low codebook utilization and a perturbed reconstruction objective.
This paper presents a regularized vector quantization framework that mitigates the above issues effectively by applying regularization from two perspectives.
arXiv Detail & Related papers (2023-03-11T15:20:54Z) - Debiased Fine-Tuning for Vision-language Models by Prompt Regularization [50.41984119504716]
We present a new paradigm for fine-tuning large-scale vision pre-trained models on downstream tasks, dubbed Prompt Regularization (ProReg).
ProReg uses the predictions obtained by prompting the pretrained model to regularize the fine-tuning.
We show the consistently strong performance of ProReg compared with conventional fine-tuning, zero-shot prompt, prompt tuning, and other state-of-the-art methods.
arXiv Detail & Related papers (2023-01-29T11:53:55Z) - Scalable Dynamic Mixture Model with Full Covariance for Probabilistic
Traffic Forecasting [16.04029885574568]
We propose a dynamic mixture of zero-mean Gaussian distributions for the time-varying error process.
The proposed method can be seamlessly integrated into existing deep-learning frameworks with only a few additional parameters to be learned.
We evaluate the proposed method on a traffic speed forecasting task and find that our method not only improves model performance but also provides interpretable temporal correlation structures.
arXiv Detail & Related papers (2022-12-10T22:50:00Z) - Distributional Drift Adaptation with Temporal Conditional Variational Autoencoder for Multivariate Time Series Forecasting [41.206310481507565]
We propose a novel framework temporal conditional variational autoencoder (TCVAE) to model the dynamic distributional dependencies over time.
The TCVAE infers these dependencies as a temporal conditional distribution over latent variables.
We show the TCVAE's superior robustness and effectiveness over the state-of-the-art MTS forecasting baselines.
arXiv Detail & Related papers (2022-09-01T10:06:22Z) - Learning Interpretable Deep State Space Model for Probabilistic Time
Series Forecasting [98.57851612518758]
Probabilistic time series forecasting involves estimating the distribution of a series' future values based on its history.
We propose a deep state space model for probabilistic time series forecasting whereby the non-linear emission and transition models are parameterized by neural networks.
We show in experiments that our model produces accurate and sharp probabilistic forecasts.
arXiv Detail & Related papers (2021-01-31T06:49:33Z) - Short-term prediction of Time Series based on bounding techniques [0.0]
This paper reconsiders the prediction problem in the time series framework using a new non-parametric approach.
The innovation is to consider both deterministic and deterministic-stochastic assumptions in order to obtain an upper bound on the prediction error.
A benchmark is included to illustrate that the proposed predictor can obtain suitable results in a prediction scheme, and can be an interesting alternative method to the classical non-parametric methods.
arXiv Detail & Related papers (2021-01-26T11:27:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.