Evaluating State of the Art, Forecasting Ensembles- and Meta-learning
Strategies for Model Fusion
- URL: http://arxiv.org/abs/2203.03279v1
- Date: Mon, 7 Mar 2022 10:51:40 GMT
- Title: Evaluating State of the Art, Forecasting Ensembles- and Meta-learning
Strategies for Model Fusion
- Authors: Pieter Cawood, Terence van Zyl
- Abstract summary: This paper focuses on the utility of the Exponential-Smoothing-Recurrent Neural Network (ES-RNN) in the pool of base models for different ensembles.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Techniques of hybridisation and ensemble learning are popular model
fusion approaches for improving the predictive power of forecasting methods.
With limited research investigating the combination of these two promising
approaches, this paper focuses on the utility of the
Exponential-Smoothing-Recurrent Neural Network (ES-RNN) in the pool of base
models for different ensembles. We compare against several state-of-the-art
ensembling techniques, using arithmetic model averaging as a benchmark. We
experiment with the M4 forecasting data set of 100,000 time series, and the
results show that Feature-based Forecast Model Averaging (FFORMA) is, on
average, the best technique for late data fusion with the ES-RNN. However, on
the M4 Daily subset, stacking was the only ensemble that successfully handled
the case where all base models perform similarly. Our experimental results
indicate that we attain state-of-the-art forecasting results compared to
N-BEATS as a benchmark. We conclude that model averaging is a more robust
ensemble strategy than model selection and stacking, and that gradient boosting
is superior for implementing ensemble learning strategies.
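In late data fusion of this kind, the benchmark arithmetic average and a FFORMA-style combination differ only in their per-model weights. A minimal sketch (the forecasts and weights below are illustrative assumptions, not the paper's models; FFORMA actually learns its weights from series features via gradient boosting):

```python
import numpy as np

# Hypothetical base-model forecasts for one series over a 6-step horizon
# (e.g. an ES-RNN plus two other pool members); values are illustrative.
forecasts = np.array([
    [102.0, 104.1, 106.3, 108.0, 110.2, 112.5],  # base model 1
    [101.5, 103.8, 105.9, 108.4, 110.0, 112.1],  # base model 2
    [103.2, 105.0, 107.1, 109.2, 111.0, 113.4],  # base model 3
])

# Benchmark: arithmetic model averaging (equal weights).
avg_forecast = forecasts.mean(axis=0)

# FFORMA-style late fusion: a convex per-model weighting, here fixed
# for illustration rather than learned from series features.
weights = np.array([0.5, 0.3, 0.2])
weighted_forecast = weights @ forecasts

print(avg_forecast)
print(weighted_forecast)
```

Model selection, by contrast, would put all weight on a single pool member, which is why it tends to be less robust than averaging when base models err differently across series.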
Related papers
- Infinite forecast combinations based on Dirichlet process
This paper introduces a deep learning ensemble forecasting model based on the Dirichlet process.
It offers substantial improvements in prediction accuracy and stability compared to a single benchmark model.
arXiv Detail & Related papers (2023-11-21T06:41:41Z)
- Ensemble Modeling for Multimodal Visual Action Recognition
We propose an ensemble modeling approach for multimodal action recognition.
We independently train individual modality models using a variant of focal loss tailored to handle the long-tailed distribution of the MECCANO dataset.
arXiv Detail & Related papers (2023-08-10T08:43:20Z) - Universal Domain Adaptation from Foundation Models: A Baseline Study [58.51162198585434]
We make empirical studies of state-of-the-art UniDA methods using foundation models.
We introduce CLIP distillation, a parameter-free method specifically designed to distill target knowledge from CLIP models.
Although simple, our method outperforms previous approaches in most benchmark tasks.
arXiv Detail & Related papers (2023-05-18T16:28:29Z) - Late Meta-learning Fusion Using Representation Learning for Time Series
Forecasting [0.0]
This study presents a unified taxonomy encompassing these topic areas.
The study empirically evaluates several model fusion approaches and a novel combination of hybrid and feature stacking algorithms called Deep-learning FORecast Model Averaging (DeFORMA).
The proposed model, DeFORMA, can achieve state-of-the-art results in the M4 data set.
arXiv Detail & Related papers (2023-03-20T10:29:42Z)
- Batch-Ensemble Stochastic Neural Networks for Out-of-Distribution Detection
Out-of-distribution (OOD) detection has recently received much attention from the machine learning community due to its importance in deploying machine learning models in real-world applications.
In this paper we propose an uncertainty quantification approach by modelling the distribution of features.
We incorporate an efficient ensemble mechanism, namely batch-ensemble, to construct the batch-ensemble neural networks (BE-SNNs) and overcome the feature collapse problem.
We show that BE-SNNs yield superior performance on several OOD benchmarks, such as the Two-Moons dataset and the FashionMNIST vs MNIST dataset.
arXiv Detail & Related papers (2022-06-26T16:00:22Z)
- Federated Learning Aggregation: New Robust Algorithms with Guarantees
Federated learning has been recently proposed for distributed model training at the edge.
This paper presents a complete general mathematical convergence analysis to evaluate aggregation strategies in a federated learning framework.
We derive novel aggregation algorithms which are able to modify their model architecture by differentiating client contributions according to the value of their losses.
arXiv Detail & Related papers (2022-05-22T16:37:53Z)
- Conceptually Diverse Base Model Selection for Meta-Learners in Concept Drifting Data Streams
We present a novel approach for estimating the conceptual similarity of base models, which is calculated using the Principal Angles (PAs) between their underlying subspaces.
We evaluate these methods against thresholding using common ensemble pruning metrics, namely predictive performance and Mutual Information (MI), in the context of online Transfer Learning (TL).
Our results show that conceptual similarity thresholding has a reduced computational overhead, and yet yields comparable predictive performance to thresholding using predictive performance and MI.
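The principal-angles idea summarised above can be sketched as follows (the bases below are toy examples, not the paper's base-model subspaces): orthonormalise each basis, then the singular values of the product of the two orthonormal bases are the cosines of the principal angles.

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles between the subspaces spanned by the columns
    of A and B: orthonormalise each basis with QR, then take the
    arccos of the singular values of Qa.T @ Qb."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    cosines = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.arccos(np.clip(cosines, -1.0, 1.0))

# Two toy 2-D subspaces of R^3 sharing one direction (illustrative only).
A = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
B = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])
print(principal_angles(A, B))  # shared direction -> angles [0, pi/2]
```

Small angles indicate conceptually similar models, so thresholding on them prunes near-duplicates from the ensemble cheaply.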
arXiv Detail & Related papers (2021-11-29T13:18:53Z)
- Sparse MoEs meet Efficient Ensembles
We study the interplay of two popular classes of such models: ensembles of neural networks and sparse mixtures of experts (sparse MoEs).
We present Efficient Ensemble of Experts (E³), a scalable and simple ensemble of sparse MoEs that takes the best of both classes of models, while using up to 45% fewer FLOPs than a deep ensemble.
arXiv Detail & Related papers (2021-10-07T11:58:35Z)
- Optimal Ensemble Construction for Multi-Study Prediction with Applications to COVID-19 Excess Mortality Estimation
Multi-study ensembling uses a two-stage strategy which fits study-specific models and estimates ensemble weights separately.
This approach ignores the ensemble properties at the model-fitting stage, potentially resulting in a loss of efficiency.
We show that when little data is available for a country before the onset of the pandemic, leveraging data from other countries can substantially improve prediction accuracy.
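The two-stage strategy described above can be sketched on synthetic data (the studies, linear models, and target below are illustrative assumptions, not the paper's mortality setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1: fit a study-specific model per study (toy linear fits).
slopes = []
for k in range(3):
    x = rng.normal(size=50)
    y = (1.0 + 0.2 * k) * x + rng.normal(scale=0.1, size=50)
    slopes.append(np.polyfit(x, y, 1)[0])  # study-specific slope

# Stage 2: estimate ensemble weights separately, by stacking the study
# models' predictions on data from the target study (true slope 1.1).
x_t = rng.normal(size=40)
y_t = 1.1 * x_t + rng.normal(scale=0.1, size=40)
P = np.column_stack([s * x_t for s in slopes])  # each model's predictions
w, *_ = np.linalg.lstsq(P, y_t, rcond=None)     # stacked ensemble weights

# The weighted combination of study slopes approximates the target slope.
print(np.dot(w, slopes))
```

Because the weights are estimated only after the study models are frozen, stage 2 cannot compensate for stage-1 choices, which is the efficiency loss the paper's optimal-ensemble construction targets.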
arXiv Detail & Related papers (2021-09-19T16:52:41Z)
- An Accurate and Fully-Automated Ensemble Model for Weekly Time Series Forecasting
We propose a forecasting method in this domain, leveraging state-of-the-art forecasting techniques.
We consider different meta-learning architectures, algorithms, and base model pools.
Our proposed method consistently outperforms a set of benchmarks and state-of-the-art weekly forecasting models.
arXiv Detail & Related papers (2020-10-16T04:29:09Z)
- Robust Finite Mixture Regression for Heterogeneous Targets
We propose an FMR model that finds sample clusters and jointly models multiple incomplete mixed-type targets.
We provide non-asymptotic oracle performance bounds for our model under a high-dimensional learning framework.
The results show that our model can achieve state-of-the-art performance.
arXiv Detail & Related papers (2020-10-12T03:27:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.