Fine-grained Forecasting Models Via Gaussian Process Blurring Effect
- URL: http://arxiv.org/abs/2312.14280v1
- Date: Thu, 21 Dec 2023 20:25:16 GMT
- Title: Fine-grained Forecasting Models Via Gaussian Process Blurring Effect
- Authors: Sepideh Koohfar and Laura Dietz
- Abstract summary: Time series forecasting is a challenging task due to the existence of complex and dynamic temporal dependencies.
Using more training data is one way to improve the accuracy, but this source is often limited.
We are building on successful denoising approaches for image generation by advocating for an end-to-end forecasting and denoising paradigm.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Time series forecasting is a challenging task due to the existence of complex
and dynamic temporal dependencies. This can lead to incorrect predictions by
even the best forecasting models. Using more training data is one way to
improve the accuracy, but this source is often limited. In contrast, we are
building on successful denoising approaches for image generation by advocating
for an end-to-end forecasting and denoising paradigm.
We propose an end-to-end forecast-blur-denoise forecasting framework by
encouraging a division of labors between the forecasting and the denoising
models. The initial forecasting model is directed to focus on accurately
predicting the coarse-grained behavior, while the denoiser model focuses on
capturing the fine-grained behavior that is locally blurred by integrating a
Gaussian Process model. All three parts are interacting for the best end-to-end
performance. Our extensive experiments demonstrate that our proposed approach
is able to improve the forecasting accuracy of several state-of-the-art
forecasting models as well as several other denoising approaches.
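The abstract's forecast-blur-denoise pipeline can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the RBF kernel choice, the row-normalized blurring, and the function names (`gp_blur`, `forecast_blur_denoise`) are our assumptions.

```python
import numpy as np

def rbf_kernel(t, length_scale=2.0, variance=1.0):
    # Squared-exponential (RBF) Gaussian Process kernel over time indices.
    d = t[:, None] - t[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_blur(series, length_scale=2.0):
    # Smooth a forecast with GP-kernel weights: each output value is a
    # kernel-weighted mean of its neighbors, which suppresses
    # fine-grained detail (the "blurring effect").
    t = np.arange(len(series), dtype=float)
    K = rbf_kernel(t, length_scale)
    W = K / K.sum(axis=1, keepdims=True)  # row-normalize to preserve scale
    return W @ series

def forecast_blur_denoise(history, forecaster, denoiser):
    # Hypothetical wiring of the three parts: the forecaster targets the
    # coarse-grained signal, the GP blur sits in between, and the
    # denoiser adds back fine-grained detail.
    coarse = forecaster(history)
    blurred = gp_blur(coarse)
    return blurred + denoiser(blurred)
```

In an end-to-end setting the forecaster and denoiser would be trained jointly through this composition; here `forecaster` and `denoiser` are stand-ins for arbitrary models.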
Related papers
- On conditional diffusion models for PDE simulations [53.01911265639582]
We study score-based diffusion models for forecasting and assimilation of sparse observations.
We propose an autoregressive sampling approach that significantly improves performance in forecasting.
We also propose a new training strategy for conditional score-based models that achieves stable performance over a range of history lengths.
arXiv Detail & Related papers (2024-10-21T18:31:04Z)
- PostCast: Generalizable Postprocessing for Precipitation Nowcasting via Unsupervised Blurriness Modeling [85.56969895866243]
We propose an unsupervised postprocessing method to eliminate the blurriness without the requirement of training with the pairs of blurry predictions and corresponding ground truth.
A zero-shot blur kernel estimation mechanism and an auto-scale denoise guidance strategy are introduced to adapt the unconditional correlations to any blurriness modes.
arXiv Detail & Related papers (2024-10-08T08:38:23Z)
- Continuous Ensemble Weather Forecasting with Diffusion models [10.730406954385927]
Continuous Ensemble Forecasting is a novel and flexible method for sampling ensemble forecasts in diffusion models.
It can generate temporally consistent ensemble trajectories completely in parallel, with no autoregressive steps.
We demonstrate that the method achieves competitive results for global weather forecasting with good probabilistic properties.
arXiv Detail & Related papers (2024-10-07T18:51:23Z)
- Predictive Churn with the Set of Good Models [64.05949860750235]
We study the effect of conflicting predictions over the set of near-optimal machine learning models.
We present theoretical results on the expected churn between models within the Rashomon set.
We show how our approach can be used to better anticipate, reduce, and avoid churn in consumer-facing applications.
arXiv Detail & Related papers (2024-02-12T16:15:25Z)
- ExtremeCast: Boosting Extreme Value Prediction for Global Weather Forecast [57.6987191099507]
We introduce Exloss, a novel loss function that performs asymmetric optimization and highlights extreme values to obtain accurate extreme weather forecast.
We also introduce ExBooster, which captures the uncertainty in prediction outcomes by employing multiple random samples.
Our solution can achieve state-of-the-art performance in extreme weather prediction, while maintaining the overall forecast accuracy comparable to the top medium-range forecast models.
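The "multiple random samples" idea behind ExBooster can be illustrated with a simple sampling sketch. This is our own toy analogue, not the paper's ExBooster: perturb a point forecast with random noise and read off an upper quantile, so that extreme values are under-estimated less than by the point forecast alone.

```python
import numpy as np

def boost_extremes(point_forecast, noise_scale=0.5, n_samples=100, q=0.9, rng=None):
    # Illustrative sketch (not the paper's method): draw random
    # perturbations around a point forecast and take an upper quantile
    # across the samples, emphasizing extreme outcomes.
    rng = np.random.default_rng(rng)
    samples = point_forecast[None, :] + noise_scale * rng.standard_normal(
        (n_samples, point_forecast.size)
    )
    return np.quantile(samples, q, axis=0)
```

A lower quantile (e.g. `q=0.1`) would analogously emphasize extreme minima.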
arXiv Detail & Related papers (2024-02-02T10:34:13Z)
- Counterfactual Explanations for Time Series Forecasting [14.03870816983583]
We formulate the novel problem of counterfactual generation for time series forecasting, and propose an algorithm, called ForecastCF.
ForecastCF solves the problem by applying gradient-based perturbations to the original time series.
Our results show that ForecastCF outperforms the baseline in terms of counterfactual validity and data manifold closeness.
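The gradient-based perturbation idea can be sketched with a toy differentiable forecaster. This is a hypothetical illustration, not ForecastCF's algorithm: the linear forecaster, the target band, and the analytic gradient step are all our assumptions.

```python
import numpy as np

def linear_forecaster(x, w):
    # Toy differentiable forecaster: one-step-ahead linear prediction.
    return float(w @ x)

def counterfactual_perturb(x, w, target_lo, target_hi, lr=0.5, steps=100):
    # Gradient-based perturbation of the input series: nudge x until the
    # forecast falls inside the desired [target_lo, target_hi] band.
    x_cf = x.astype(float).copy()
    mid = 0.5 * (target_lo + target_hi)
    for _ in range(steps):
        pred = linear_forecaster(x_cf, w)
        if target_lo <= pred <= target_hi:
            break  # desired forecast reached
        # Analytic gradient of (pred - mid)^2 wrt x for the linear model.
        x_cf -= lr * 2.0 * (pred - mid) * w
    return x_cf
```

With an autodiff framework the same loop would work for any differentiable forecasting model; the analytic gradient here is just for the linear stand-in.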
arXiv Detail & Related papers (2023-10-12T08:51:59Z)
- A positive feedback method based on F-measure value for Salient Object Detection [1.9249287163937976]
This paper proposes a positive feedback method based on F-measure value for salient object detection (SOD).
Our proposed method takes an image to be detected and inputs it into several existing models to obtain their respective prediction maps.
Experimental results on five publicly available datasets show that our proposed positive feedback method outperforms the latest 12 methods in five evaluation metrics for saliency map prediction.
arXiv Detail & Related papers (2023-04-28T04:05:13Z)
- Pathologies of Pre-trained Language Models in Few-shot Fine-tuning [50.3686606679048]
We show that pre-trained language models with few examples show strong prediction bias across labels.
Although few-shot fine-tuning can mitigate the prediction bias, our analysis shows models gain performance improvement by capturing non-task-related features.
These observations alert that pursuing model performance with fewer examples may incur pathological prediction behavior.
arXiv Detail & Related papers (2022-04-17T15:55:18Z)
- Learning Prediction Intervals for Model Performance [1.433758865948252]
We propose a method to compute prediction intervals for model performance.
We evaluate our approach across a wide range of drift conditions and show substantial improvement over competitive baselines.
arXiv Detail & Related papers (2020-12-15T21:32:03Z)
- Efficient Ensemble Model Generation for Uncertainty Estimation with Bayesian Approximation in Segmentation [74.06904875527556]
We propose a generic and efficient segmentation framework to construct ensemble segmentation models.
In the proposed method, ensemble models can be efficiently generated by using the layer selection method.
We also devise a new pixel-wise uncertainty loss, which improves the predictive performance.
arXiv Detail & Related papers (2020-05-21T16:08:38Z)
- Deep Learning for Post-Processing Ensemble Weather Forecasts [14.622977874836298]
We propose a mixed model that uses only a subset of the original weather trajectories combined with a post-processing step using deep neural networks.
We show that our post-processing can use fewer trajectories to achieve comparable results to the full ensemble.
arXiv Detail & Related papers (2020-05-18T14:23:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.