Adaptive Sampling for Probabilistic Forecasting under Distribution Shift
- URL: http://arxiv.org/abs/2302.11870v1
- Date: Thu, 23 Feb 2023 09:16:54 GMT
- Title: Adaptive Sampling for Probabilistic Forecasting under Distribution Shift
- Authors: Luca Masserano and Syama Sundar Rangapuram and Shubham Kapoor and
Rajbir Singh Nirwan and Youngsuk Park and Michael Bohlke-Schneider
- Abstract summary: We present an adaptive sampling strategy that selects the part of the time series history that is relevant for forecasting.
We show with synthetic and real-world experiments that this method adapts to distribution shift and significantly reduces the forecasting error of the base model for three out of five datasets.
- Score: 9.769524837609174
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The world is not static: This causes real-world time series to change over
time through external, and potentially disruptive, events such as macroeconomic
cycles or the COVID-19 pandemic. We present an adaptive sampling strategy that
selects the part of the time series history that is relevant for forecasting.
We achieve this by learning a discrete distribution over relevant time steps via
Bayesian optimization. We instantiate this idea with a two-step method that
first pre-trains the base model with uniform sampling and then trains a
lightweight adaptive architecture with adaptive sampling. We show with synthetic and real-world
experiments that this method adapts to distribution shift and significantly
reduces the forecasting error of the base model for three out of five datasets.
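A minimal sketch of the two-step idea, not the authors' implementation: the base model below is a trivial constant forecaster, and simple random search over candidate distributions stands in for the paper's Bayesian optimization; what remains is the core mechanism of sampling training windows from a learned discrete distribution over time steps rather than uniformly.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_windows(series, probs, n_windows, context, horizon):
    """Draw training windows whose end points follow `probs` over time steps."""
    valid = np.arange(context, len(series) - horizon)
    p = probs[valid] / probs[valid].sum()
    ends = rng.choice(valid, size=n_windows, p=p)
    X = np.stack([series[e - context:e] for e in ends])
    y = np.stack([series[e:e + horizon] for e in ends])
    return X, y

def objective(probs, series, context=24, horizon=8):
    """Fit a trivial constant forecaster on windows drawn from `probs`,
    then score it on the most recent observations (a validation stand-in)."""
    _, y = sample_windows(series[:-horizon], probs, 256, context, horizon)
    const = y.mean()                      # "training" the toy base model
    return float(np.mean((const - series[-horizon:]) ** 2))

# Synthetic series with an abrupt distribution shift half-way through.
T = 1000
series = np.concatenate([rng.normal(0.0, 1.0, T // 2),
                         rng.normal(3.0, 1.0, T // 2)])

# Random-search stand-in for the paper's Bayesian optimization: propose
# piecewise-constant discrete distributions over time steps, keep the best.
best_probs, best_loss = np.full(T, 1.0 / T), np.inf
for _ in range(100):
    probs = rng.dirichlet(np.ones(10)).repeat(T // 10)
    probs /= probs.sum()
    loss = objective(probs, series)
    if loss < best_loss:
        best_probs, best_loss = probs, loss

# The selected distribution should put most mass on the post-shift half.
print(best_loss, best_probs[T // 2:].sum())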
Related papers
- OPUS: Occupancy Prediction Using a Sparse Set [64.60854562502523]
We present a framework to simultaneously predict occupied locations and classes using a set of learnable queries.
OPUS incorporates a suite of non-trivial strategies to enhance model performance.
Our lightest model achieves superior RayIoU on the Occ3D-nuScenes dataset at near 2x FPS, while our heaviest model surpasses previous best results by 6.1 RayIoU.
arXiv Detail & Related papers (2024-09-14T07:44:22Z)
- Learning Augmentation Policies from A Model Zoo for Time Series Forecasting [58.66211334969299]
We introduce AutoTSAug, a learnable data augmentation method based on reinforcement learning.
By augmenting the marginal samples with a learnable policy, AutoTSAug substantially improves forecasting performance.
arXiv Detail & Related papers (2024-09-10T07:34:19Z)
- Source-Free Unsupervised Domain Adaptation with Hypothesis Consolidation of Prediction Rationale [53.152460508207184]
Source-Free Unsupervised Domain Adaptation (SFUDA) is a challenging task where a model needs to be adapted to a new domain without access to target domain labels or source domain data.
This paper proposes a novel approach that considers multiple prediction hypotheses for each sample and investigates the rationale behind each hypothesis.
To achieve optimal performance, we propose a three-step adaptation process: model pre-adaptation, hypothesis consolidation, and semi-supervised learning.
arXiv Detail & Related papers (2024-02-02T05:53:22Z)
- Infinite forecast combinations based on Dirichlet process [9.326879672480413]
This paper introduces a deep learning ensemble forecasting model based on the Dirichlet process.
It offers substantial improvements in prediction accuracy and stability compared to a single benchmark model.
arXiv Detail & Related papers (2023-11-21T06:41:41Z)
- Time-series Generation by Contrastive Imitation [87.51882102248395]
We study a generative framework that seeks to combine the strengths of both approaches: motivated by a moment-matching objective to mitigate compounding error, we optimize a local (but forward-looking) transition policy.
At inference, the learned policy serves as the generator for iterative sampling, and the learned energy serves as a trajectory-level measure for evaluating sample quality.
arXiv Detail & Related papers (2023-11-02T16:45:25Z)
- Unsupervised Sampling Promoting for Stochastic Human Trajectory Prediction [10.717921532244613]
We propose a novel method, called BOsampler, to adaptively mine potential paths with Bayesian optimization in an unsupervised manner.
Specifically, we model the trajectory sampling as a Gaussian process and construct an acquisition function to measure the potential sampling value.
This acquisition function applies the original distribution as prior and encourages exploring paths in the long-tail region.
arXiv Detail & Related papers (2023-04-09T19:15:14Z)
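A toy 1-D reading of the BOsampler entry above, our illustration rather than the authors' trajectory model: candidates from a pretrained generator are ranked by an acquisition that adds an exploration bonus to the generator's log-density (the prior), so plausible long-tail samples surface; the Gaussian-process machinery is replaced here by a simple nearest-neighbor bonus.

```python
import numpy as np

rng = np.random.default_rng(3)

def prior_logpdf(x):
    """Log-density of the pretrained generator (standard normal stand-in)."""
    return -0.5 * x ** 2

picked = []
for _ in range(20):
    cands = rng.normal(0.0, 1.0, 64)          # proposals from the generator
    if picked:
        # Exploration bonus: distance to the nearest already-picked sample.
        bonus = np.min(np.abs(cands[:, None] - np.array(picked)[None, :]), axis=1)
    else:
        bonus = np.ones_like(cands)
    acq = prior_logpdf(cands) + 2.0 * bonus   # prior + exploration trade-off
    picked.append(float(cands[np.argmax(acq)]))

print(np.round(np.sort(picked), 2))           # samples spread into the tails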
- Energy-Based Test Sample Adaptation for Domain Generalization [81.04943285281072]
We propose energy-based sample adaptation at test time for domain generalization.
To adapt target samples to source distributions, we iteratively update the samples by energy minimization.
Experiments on six benchmarks for classification of images and microblog threads demonstrate the effectiveness of our proposal.
arXiv Detail & Related papers (2023-02-22T08:55:09Z)
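The iterative update in the energy-based entry above reduces to gradient descent on an energy function. A minimal sketch with a hypothetical quadratic source energy standing in for the paper's learned model:

```python
import numpy as np

mu_src = np.array([0.0, 0.0])      # hypothetical source-distribution statistics

def energy(x):
    """Quadratic stand-in energy: low for samples near the source mean."""
    return 0.5 * float(np.sum((x - mu_src) ** 2))

x = np.array([4.0, -3.0])          # a shifted target-domain sample
for _ in range(100):
    x = x - 0.1 * (x - mu_src)     # gradient step on the energy
print(x, energy(x))                # x has been pulled toward the source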
- Adaptive Conformal Inference Under Distribution Shift [0.0]
We develop methods for forming prediction sets in an online setting where the data generating distribution is allowed to vary over time in an unknown fashion.
Our framework builds on ideas from conformal inference to provide a general wrapper that can be combined with any black box method.
We test our method, adaptive conformal inference, on two real world datasets and find that its predictions are robust to visible and significant distribution shifts.
arXiv Detail & Related papers (2021-06-01T01:37:32Z)
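The adaptive conformal inference wrapper above is compact enough to sketch: after each observation the working miscoverage level alpha_t is updated as alpha_{t+1} = alpha_t + gamma * (alpha - err_t), widening intervals after misses. The constant point forecast below is a stand-in for any black-box model.

```python
import numpy as np

rng = np.random.default_rng(1)

alpha_target, gamma = 0.1, 0.01   # desired miscoverage and step size
alpha_t = alpha_target
scores, coverage = [], []

for t in range(2000):
    y = rng.normal(3.0 if t > 1000 else 0.0)    # distribution shift half-way
    pred = 0.0                                  # stand-in black-box forecast
    if len(scores) > 50:
        level = min(max(1.0 - alpha_t, 0.0), 1.0)
        q = np.quantile(scores, level)          # interval half-width
        err = float(abs(y - pred) > q)          # 1 if the interval missed y
        coverage.append(1.0 - err)
        alpha_t += gamma * (alpha_target - err) # the ACI update rule
    scores.append(abs(y - pred))

print("empirical coverage:", np.mean(coverage))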
- Variance Reduction in Training Forecasting Models with Subgroup Sampling [34.941630385114216]
We show that a forecasting model trained with commonly used stochastic gradient methods (e.g. SGD) can suffer from large gradient variance and thus require long training times.
To alleviate this issue, we propose a sampling strategy named Subgroup Sampling.
We show that the resulting optimizer, SCott, converges faster in terms of both gradient steps and wall-clock time.
arXiv Detail & Related papers (2021-03-02T22:23:27Z)
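The variance-reduction argument in the Subgroup Sampling entry above can be seen in a toy experiment (our illustration, not the paper's SCott algorithm): with heterogeneous subgroups, a stratified mini-batch removes the between-group component of the gradient estimator's variance.

```python
import numpy as np

rng = np.random.default_rng(2)

# Three heterogeneous subgroups of toy per-sample "gradients".
groups = [rng.normal(m, 1.0, 1000) for m in (0.0, 5.0, 10.0)]
data = np.concatenate(groups)

def uniform_estimate(batch=30):
    """Mini-batch gradient estimate with uniform sampling."""
    return rng.choice(data, batch).mean()

def stratified_estimate(per_group=10):
    """Same batch size, but stratified: sample each subgroup equally."""
    return np.mean([rng.choice(g, per_group).mean() for g in groups])

u = [uniform_estimate() for _ in range(2000)]
s = [stratified_estimate() for _ in range(2000)]
print("uniform var:", np.var(u), "stratified var:", np.var(s))
```

With equal subgroup sizes both estimators are unbiased; only the variance differs.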
- Robust Sampling in Deep Learning [62.997667081978825]
Deep learning requires regularization mechanisms to reduce overfitting and improve generalization.
We address this problem with a new regularization method based on distributionally robust optimization.
During training, samples are selected according to their accuracy, so that the worst-performing samples contribute the most to the optimization.
arXiv Detail & Related papers (2020-06-04T09:46:52Z)
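For the Robust Sampling entry just above, one common way to make the worst-performing samples dominate the objective is an exponential tilting of per-sample losses, in the spirit of distributionally robust optimization (our sketch, not necessarily the paper's exact scheme):

```python
import numpy as np

# Hypothetical per-sample losses from one training step.
losses = np.array([0.1, 0.2, 1.5, 0.3, 2.0])
tau = 0.5                          # temperature: smaller = closer to worst-case
weights = np.exp(losses / tau)     # exponential tilting toward high loss
weights /= weights.sum()
robust_loss = float(weights @ losses)   # weighted objective to optimize
print(np.round(weights, 3), robust_loss)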
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.