Infinite forecast combinations based on Dirichlet process
- URL: http://arxiv.org/abs/2311.12379v2
- Date: Fri, 24 Nov 2023 06:59:57 GMT
- Title: Infinite forecast combinations based on Dirichlet process
- Authors: Yinuo Ren and Feng Li and Yanfei Kang and Jue Wang
- Abstract summary: This paper introduces a deep learning ensemble forecasting model based on the Dirichlet process.
It offers substantial improvements in prediction accuracy and stability compared to a single benchmark model.
- Score: 9.326879672480413
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Forecast combination integrates information from various sources by
consolidating multiple forecasts of the target time series. Rather than
requiring the selection of a single optimal forecasting model, this paper
introduces a deep learning ensemble forecasting model based on the Dirichlet
process.
First, the learning rate is sampled, with three base distributions serving as
hyperparameters, to convert the infinite mixture into a finite one. All
checkpoints are collected to establish a deep learning sub-model pool, and
weight adjustment and diversity strategies are developed during the combination
process. The main advantage of this method is its ability to generate all the
required base learners in a single training process, using a learning-rate
decay strategy to address the difficulty that the stochastic nature of
gradient descent poses for determining an optimal learning rate. To ensure the
method's generalizability and competitiveness, this paper conducts an empirical
analysis using the weekly dataset from the M4 competition and explores
sensitivity to the number of models to be combined. The results demonstrate
that the proposed ensemble model offers substantial improvements in prediction
accuracy and stability compared to a single benchmark model.
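To make the combination step concrete, here is a minimal sketch, assuming a truncated stick-breaking construction for the Dirichlet process weights and a pool of checkpoint forecasts collected along one training run. The function names, the truncation, and the forecast interface are illustrative assumptions, not the authors' implementation (which also governs learning-rate sampling across the three base distributions):

```python
# Minimal sketch of a Dirichlet-process-style forecast combination.
# The stick-breaking truncation and checkpoint interface are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def stick_breaking_weights(alpha: float, truncation: int) -> np.ndarray:
    """Truncate the infinite DP mixture to `truncation` components."""
    betas = rng.beta(1.0, alpha, size=truncation)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas[:-1])))
    weights = betas * remaining
    return weights / weights.sum()  # renormalize mass lost to truncation

def combine_forecasts(checkpoint_preds: np.ndarray,
                      weights: np.ndarray) -> np.ndarray:
    """Weighted average over a pool of checkpoint forecasts.

    checkpoint_preds: (n_checkpoints, horizon) forecasts saved along a
    single training run with a decaying learning rate.
    """
    return weights @ checkpoint_preds

preds = rng.normal(size=(10, 13))                 # 10 checkpoints, 13-week horizon
w = stick_breaking_weights(alpha=1.0, truncation=10)
print(combine_forecasts(preds, w).shape)          # -> (13,)
```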
Related papers
- Dynamic Post-Hoc Neural Ensemblers [55.15643209328513]
In this study, we explore employing neural networks as ensemble methods.
Motivated by the risk of learning low-diversity ensembles, we propose regularizing the model by randomly dropping base model predictions.
We demonstrate that this approach lower-bounds the diversity within the ensemble, reducing overfitting and improving generalization capabilities.
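A rough illustration of the prediction-dropping regularizer is sketched below; the inverted-dropout rescaling and the tensor layout are assumptions, not the paper's exact formulation.

```python
import torch

def drop_base_predictions(preds: torch.Tensor, p: float = 0.2,
                          training: bool = True) -> torch.Tensor:
    """preds: (batch, n_base_models) stacked base-model predictions.

    Each base prediction is zeroed independently with probability p and
    the survivors rescaled, dropout-style, before the ensembler sees them.
    """
    if not training or p == 0.0:
        return preds
    keep = (torch.rand_like(preds) > p).to(preds.dtype)
    return preds * keep / (1.0 - p)
```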
arXiv Detail & Related papers (2024-10-06T15:25:39Z)
- Aggregation Weighting of Federated Learning via Generalization Bound Estimation [65.8630966842025]
Federated Learning (FL) typically aggregates client model parameters using a weighting approach determined by sample proportions.
We replace the aforementioned weighting method with a new strategy that considers the generalization bounds of each local model.
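A hedged sketch of that strategy: client models are averaged with weights derived from estimated generalization bounds rather than sample counts. The bound estimates and the softmax-style mapping are placeholders.

```python
import numpy as np

def aggregate(client_params: list, bounds: np.ndarray) -> np.ndarray:
    """Average flattened client parameter vectors; a smaller estimated
    generalization bound earns a larger weight (softmax of -bound)."""
    w = np.exp(-bounds)
    w = w / w.sum()
    return sum(wi * p for wi, p in zip(w, client_params))
```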
arXiv Detail & Related papers (2023-11-10T08:50:28Z)
- Optimizing accuracy and diversity: a multi-task approach to forecast combinations [0.0]
We present a multi-task optimization paradigm that focuses on solving both problems simultaneously.
It incorporates an additional learning and optimization task into the standard feature-based forecasting approach.
The proposed approach elicits the essential role of diversity in feature-based forecasting.
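One plausible reading of such a joint objective, sketched below, trades off combination accuracy against the diversity of the pool (an ambiguity-style decomposition); the paper's exact loss may differ.

```python
import numpy as np

def combo_loss(weights: np.ndarray, preds: np.ndarray, y: np.ndarray,
               lam: float = 0.1) -> float:
    """preds: (n_models, horizon) base forecasts; y: (horizon,) target."""
    combo = weights @ preds                               # combined forecast
    accuracy = np.mean((combo - y) ** 2)                  # combination error
    diversity = np.mean(weights @ (preds - combo) ** 2)   # spread around combo
    return accuracy - lam * diversity                     # reward diverse pools
```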
arXiv Detail & Related papers (2023-10-31T15:26:33Z)
- Ensemble Modeling for Multimodal Visual Action Recognition [50.38638300332429]
We propose an ensemble modeling approach for multimodal action recognition.
We independently train individual modality models using a variant of focal loss tailored to handle the long-tailed distribution of the MECCANO [21] dataset.
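For reference, the standard focal loss is sketched below; the paper's tailored variant for MECCANO's long-tailed classes is not reproduced here.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, targets: torch.Tensor,
               gamma: float = 2.0) -> torch.Tensor:
    """Down-weights easy examples so rare (tail) classes dominate the loss."""
    log_p = F.log_softmax(logits, dim=-1)
    log_pt = log_p.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    pt = log_pt.exp()
    return (-(1.0 - pt) ** gamma * log_pt).mean()
```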
arXiv Detail & Related papers (2023-08-10T08:43:20Z)
- Model ensemble instead of prompt fusion: a sample-specific knowledge transfer method for few-shot prompt tuning [85.55727213502402]
We focus on improving the few-shot performance of prompt tuning by transferring knowledge from soft prompts of source tasks.
We propose Sample-specific Ensemble of Source Models (SESoM)
SESoM learns to adjust the contribution of each source model for each target sample separately when ensembling source model outputs.
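A minimal sketch in the spirit of SESoM: a small gating network scores each source model per input and mixes their outputs with softmax weights. The architecture details are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class SampleSpecificEnsemble(nn.Module):
    """Per-sample gating over frozen source-model outputs."""
    def __init__(self, d_input: int, n_sources: int):
        super().__init__()
        self.gate = nn.Linear(d_input, n_sources)

    def forward(self, x: torch.Tensor,
                source_outputs: torch.Tensor) -> torch.Tensor:
        # x: (batch, d_input); source_outputs: (batch, n_sources, d_out)
        w = torch.softmax(self.gate(x), dim=-1)               # (batch, n_sources)
        return (w.unsqueeze(-1) * source_outputs).sum(dim=1)  # (batch, d_out)
```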
arXiv Detail & Related papers (2022-10-23T01:33:16Z)
- Federated Learning Aggregation: New Robust Algorithms with Guarantees [63.96013144017572]
Federated learning has been recently proposed for distributed model training at the edge.
This paper presents a complete general mathematical convergence analysis to evaluate aggregation strategies in a federated learning framework.
We derive novel aggregation algorithms which are able to modify their model architecture by differentiating client contributions according to the value of their losses.
arXiv Detail & Related papers (2022-05-22T16:37:53Z)
- Evaluating State of the Art, Forecasting Ensembles- and Meta-learning Strategies for Model Fusion [0.0]
This paper focuses on the utility of the Exponential-Smoothing-Recurrent Neural Network (ES-RNN) in the pool of base models for different ensembles.
arXiv Detail & Related papers (2022-03-07T10:51:40Z)
- Ensembles of Randomized NNs for Pattern-based Time Series Forecasting [0.0]
We propose an ensemble forecasting approach based on randomized neural networks.
A pattern-based representation of time series makes the proposed approach suitable for forecasting time series with multiple seasonality.
Case studies conducted on four real-world forecasting problems verified the effectiveness and superior performance of the proposed ensemble forecasting approach.
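A minimal sketch of one common pattern-based representation: each seasonal cycle is normalized by its own mean so that cycles at different levels become comparable inputs; the paper's exact encoding may differ.

```python
import numpy as np

def to_patterns(series: np.ndarray, season: int) -> np.ndarray:
    """Slice a series into seasonal cycles and normalize each by its own
    mean (assumes a strictly positive series)."""
    n = len(series) // season * season
    cycles = series[:n].reshape(-1, season)
    means = cycles.mean(axis=1, keepdims=True)
    return (cycles - means) / means
```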
arXiv Detail & Related papers (2021-07-08T20:13:50Z)
- An Extended Multi-Model Regression Approach for Compressive Strength Prediction and Optimization of a Concrete Mixture [0.0]
A model-based evaluation of concrete compressive strength is of high value for both strength prediction and mixture optimization.
We take a further step towards improving the accuracy of the prediction model via the weighted combination of multiple regression methods.
A genetic algorithm (GA)-based mixture optimization is then proposed, building on the obtained multi-regression model.
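A toy sketch of GA-based mixture optimization over a fitted strength model follows; the surrogate objective and the GA operators are placeholders, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def predicted_strength(mix: np.ndarray) -> float:
    # Placeholder surrogate standing in for the fitted multi-regression model.
    return float(-np.sum((mix - 0.3) ** 2))

pop = rng.dirichlet(np.ones(4), size=50)        # candidate mixture proportions
for _ in range(200):
    fitness = np.array([predicted_strength(m) for m in pop])
    parents = pop[np.argsort(fitness)[-25:]]    # keep the fitter half
    mates = parents[rng.permutation(25)]
    children = 0.5 * (parents + mates)          # blend crossover stays on simplex
    children += 0.05 * (rng.dirichlet(np.ones(4), size=25) - children)  # mutation
    pop = np.vstack([parents, children])
best = pop[np.argmax([predicted_strength(m) for m in pop])]
print(best)
```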
arXiv Detail & Related papers (2021-06-13T16:10:32Z)
- Control as Hybrid Inference [62.997667081978825]
We present an implementation of CHI which naturally mediates the balance between iterative and amortised inference.
We verify the scalability of our algorithm on a continuous control benchmark, demonstrating that it outperforms strong model-free and model-based baselines.
arXiv Detail & Related papers (2020-07-11T19:44:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.