A hybrid ensemble method with negative correlation learning for
regression
- URL: http://arxiv.org/abs/2104.02317v5
- Date: Mon, 15 May 2023 09:25:27 GMT
- Title: A hybrid ensemble method with negative correlation learning for
regression
- Authors: Yun Bai, Ganglin Tian, Yanfei Kang, Suling Jia
- Abstract summary: This study automatically selects and weights sub-models from a heterogeneous model pool.
It solves an optimization problem using an interior-point filter line-search algorithm.
The value of this study lies in its ease of use and effectiveness, allowing the hybrid ensemble to embrace diversity and accuracy.
- Score: 2.8484009470171943
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hybrid ensemble, an essential branch of ensembles, has flourished in the
regression field, with studies confirming diversity's importance. However,
previous ensembles consider diversity in the sub-model training stage, with
limited improvement compared to single models. In contrast, this study
automatically selects and weights sub-models from a heterogeneous model pool.
It solves an optimization problem using an interior-point filter line-search
algorithm. The objective function innovatively incorporates
negative correlation learning as a penalty term, with which a diverse model
subset can be selected. The best sub-models from each model class are selected
to build the NCL ensemble, whose performance is better than the simple average
and other state-of-the-art weighting methods. It is also possible to improve
the NCL ensemble with a regularization term in the objective function. In
practice, it is difficult to determine the optimal sub-model for a dataset a
priori, owing to model uncertainty. Regardless, our method achieves accuracy
comparable to that of the potentially optimal sub-models. In conclusion, the
value of this
study lies in its ease of use and effectiveness, allowing the hybrid ensemble
to embrace diversity and accuracy.
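As a rough illustration of the approach described above (a minimal sketch, not the authors' implementation): the sub-model weights can be found by minimizing validation error plus an NCL penalty under simplex constraints. SciPy's trust-constr solver stands in here for the interior-point filter line-search method; `preds`, `y`, and the penalty strength `lam` are illustrative names.
```python
# Minimal sketch, not the authors' code: choose sub-model weights by
# minimizing validation MSE plus an NCL diversity penalty under simplex
# constraints. `preds`, `y`, and `lam` are illustrative names.
import numpy as np
from scipy.optimize import LinearConstraint, minimize

def ncl_objective(w, preds, y, lam=0.5):
    ens = preds @ w                         # weighted ensemble prediction
    mse = np.mean((ens - y) ** 2)
    dev = preds - ens[:, None]              # each member's deviation
    # NCL-style penalty: the weighted mean of -(f_i - f_ens)^2 rewards
    # members that disagree with the ensemble, promoting diversity.
    penalty = np.sum(w * np.mean(-(dev ** 2), axis=0))
    return mse + lam * penalty

rng = np.random.default_rng(0)
y = rng.normal(size=200)                    # toy targets
preds = y[:, None] + rng.normal(scale=[0.3, 0.5, 0.8], size=(200, 3))

n = preds.shape[1]
res = minimize(
    ncl_objective, x0=np.full(n, 1.0 / n), args=(preds, y),
    method="trust-constr",                  # an interior-point-style solver
    bounds=[(0.0, 1.0)] * n,
    constraints=LinearConstraint(np.ones(n), 1.0, 1.0),  # weights sum to 1
)
print("selected weights:", res.x.round(3))
```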
Related papers
- EnsIR: An Ensemble Algorithm for Image Restoration via Gaussian Mixture Models [70.60381055741391]
Image restoration faces challenges related to ill-posed problems, resulting in deviations between single-model predictions and ground truths.
Ensemble learning aims to address these deviations by combining the predictions of multiple base models.
We employ an expectation-maximization (EM)-based algorithm to estimate ensemble weights for prediction candidates.
Our algorithm is model-agnostic and training-free, allowing seamless integration and enhancement of various pre-trained image restoration models.
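A hedged sketch of the general idea (EnsIR's actual Gaussian-mixture formulation is more involved): treat the reference signal as drawn from a mixture whose k-th component is a Gaussian centred on base model k's prediction, and estimate the mixture weights by EM. The fixed noise scale `sigma` is an assumption.
```python
# Hedged sketch of EM-estimated ensemble weights; not EnsIR's exact method.
import numpy as np

def em_ensemble_weights(preds, y, sigma=0.1, iters=50):
    n, k = preds.shape
    pi = np.full(k, 1.0 / k)                # initial mixture weights
    for _ in range(iters):
        # E-step: responsibility of component k for sample i
        log_lik = -0.5 * ((y[:, None] - preds) / sigma) ** 2
        r = pi * np.exp(log_lik - log_lik.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # M-step: mixture weights are the mean responsibilities
        pi = r.mean(axis=0)
    return pi

rng = np.random.default_rng(1)
y = rng.normal(size=500)
preds = y[:, None] + rng.normal(scale=[0.05, 0.1, 0.3], size=(500, 3))
print("EM weights:", em_ensemble_weights(preds, y).round(3))
# more accurate base models receive larger weight
```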
arXiv Detail & Related papers (2024-10-30T12:16:35Z)
- Stabilizing black-box model selection with the inflated argmax [8.52745154080651]
This paper presents a new approach to stabilizing model selection that leverages a combination of bagging and an "inflated" argmax operation.
Our method selects a small collection of models that all fit the data, and it is stable in that, with high probability, the removal of any training point will result in a collection of selected models that overlaps with the original collection.
In both settings, the proposed method yields stable and compact collections of selected models, outperforming a variety of benchmarks.
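A minimal reading of the mechanism (the paper defines the inflated argmax more carefully): keep every model whose bagged selection score falls within a slack `eps` of the best; `eps` is an illustrative parameter.
```python
# Minimal sketch of an "inflated" argmax over bagged model scores.
import numpy as np

def inflated_argmax(scores, eps):
    """scores: (n_bags, n_models) bagged scores, higher is better."""
    mean = scores.mean(axis=0)
    return np.flatnonzero(mean >= mean.max() - eps)

rng = np.random.default_rng(2)
scores = rng.normal(loc=[0.80, 0.79, 0.60], scale=0.02, size=(100, 3))
print("selected models:", inflated_argmax(scores, eps=0.02))
# near-tied models 0 and 1 are both kept, stabilizing the selection
```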
arXiv Detail & Related papers (2024-10-23T20:39:07Z)
- Lp-Norm Constrained One-Class Classifier Combination [18.27510863075184]
We consider the one-class classification problem by modelling the sparsity/uniformity of the ensemble.
We present an effective approach to solving the formulated convex constrained problem efficiently.
arXiv Detail & Related papers (2023-12-25T16:32:34Z)
- MILO: Model-Agnostic Subset Selection Framework for Efficient Model Training and Tuning [68.12870241637636]
We propose MILO, a model-agnostic subset selection framework that decouples the subset selection from model training.
Our empirical results indicate that MILO can train models $3\times$-$10\times$ faster and tune hyperparameters $20\times$-$75\times$ faster than full-dataset training or tuning, without compromising performance.
arXiv Detail & Related papers (2023-01-30T20:59:30Z)
- Dataless Knowledge Fusion by Merging Weights of Language Models [51.8162883997512]
Fine-tuning pre-trained language models has become the prevalent paradigm for building downstream NLP models.
This creates a barrier to fusing knowledge across individual models to yield a better single model.
We propose a dataless knowledge fusion method that merges models in their parameter space.
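For intuition, the simplest parameter-space baseline (the paper's dataless fusion method is more sophisticated) just averages matching parameter tensors across fine-tuned checkpoints of the same architecture:
```python
# Simplest parameter-space merging baseline: uniform averaging of
# matching parameter tensors. Not the paper's exact weighting scheme.
import numpy as np

def merge_state_dicts(state_dicts):
    keys = state_dicts[0].keys()
    return {k: np.mean([sd[k] for sd in state_dicts], axis=0) for k in keys}

# Two toy "checkpoints" with identical parameter names and shapes.
sd_a = {"linear.weight": np.ones((2, 2)), "linear.bias": np.zeros(2)}
sd_b = {"linear.weight": 3 * np.ones((2, 2)), "linear.bias": np.ones(2)}
merged = merge_state_dicts([sd_a, sd_b])
print(merged["linear.weight"])  # averaged weights: all 2.0
```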
arXiv Detail & Related papers (2022-12-19T20:46:43Z)
- Deep Negative Correlation Classification [82.45045814842595]
Existing deep ensemble methods naively train many different models and then aggregate their predictions.
We propose deep negative correlation classification (DNCC).
DNCC yields a deep classification ensemble where the individual estimator is both accurate and negatively correlated.
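A hedged sketch of an NCL-style loss for a classification ensemble in the spirit of DNCC (the paper's exact formulation may differ): each member pays its own cross-entropy plus a term that rewards deviation from the ensemble mean; `lam` is an assumed penalty strength.
```python
# Sketch of an NCL-style classification loss; not DNCC's exact objective.
import torch
import torch.nn.functional as F

def ncl_classification_loss(logits_list, targets, lam=0.5):
    probs = [F.softmax(l, dim=1) for l in logits_list]
    mean_prob = torch.stack(probs).mean(dim=0)      # ensemble prediction
    loss = 0.0
    for l, p in zip(logits_list, probs):
        ce = F.cross_entropy(l, targets)
        # Penalty -(p_i - p_bar)^2 pushes members away from the mean,
        # i.e. toward negative correlation with one another.
        penalty = -((p - mean_prob) ** 2).sum(dim=1).mean()
        loss = loss + ce + lam * penalty
    return loss / len(logits_list)

# Toy usage: three ensemble members, batch of 4, 5 classes.
logits = [torch.randn(4, 5, requires_grad=True) for _ in range(3)]
y = torch.tensor([0, 2, 1, 4])
ncl_classification_loss(logits, y).backward()
```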
arXiv Detail & Related papers (2022-12-14T07:35:20Z)
- Optimally Weighted Ensembles of Regression Models: Exact Weight Optimization and Applications [0.0]
We show that combining different regression models can yield better results than selecting a single ('best') regression model.
We outline an efficient method that obtains optimally weighted linear combination from a heterogeneous set of regression models.
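In its simplest unconstrained form (the paper treats the exact, possibly constrained, optimization in more generality), the optimal linear combination reduces to least squares on held-out predictions:
```python
# Unconstrained optimal combination weights via least squares; the paper's
# exact weight optimization handles the more general constrained case.
import numpy as np

rng = np.random.default_rng(3)
y = rng.normal(size=300)
preds = y[:, None] + rng.normal(scale=[0.2, 0.4, 0.6], size=(300, 3))

w, *_ = np.linalg.lstsq(preds, y, rcond=None)   # argmin_w ||P w - y||^2
ens = preds @ w
print("weights:", w.round(3))
print("ensemble MSE:", np.mean((ens - y) ** 2).round(4),
      "best single MSE:",
      ((preds - y[:, None]) ** 2).mean(axis=0).min().round(4))
```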
arXiv Detail & Related papers (2022-06-22T09:11:14Z)
- Split Modeling for High-Dimensional Logistic Regression [0.2676349883103404]
A novel method is proposed for building an ensemble logistic classification model.
Our method learns how to exploit the bias-variance trade-off, resulting in excellent prediction accuracy.
An open-source software library implementing the proposed method is discussed.
arXiv Detail & Related papers (2021-02-17T05:57:26Z)
- Robust Finite Mixture Regression for Heterogeneous Targets [70.19798470463378]
We propose an FMR model that finds sample clusters and jointly models multiple incomplete mixed-type targets.
We provide non-asymptotic oracle performance bounds for our model under a high-dimensional learning framework.
The results show that our model can achieve state-of-the-art performance.
arXiv Detail & Related papers (2020-10-12T03:27:07Z)
- Control as Hybrid Inference [62.997667081978825]
We present an implementation of CHI which naturally mediates the balance between iterative and amortised inference.
We verify the scalability of our algorithm on a continuous control benchmark, demonstrating that it outperforms strong model-free and model-based baselines.
arXiv Detail & Related papers (2020-07-11T19:44:09Z)