Model Agnostic Combination for Ensemble Learning
- URL: http://arxiv.org/abs/2006.09025v1
- Date: Tue, 16 Jun 2020 09:44:58 GMT
- Title: Model Agnostic Combination for Ensemble Learning
- Authors: Ohad Silbert, Yitzhak Peleg and Evi Kopelowitz
- Abstract summary: We present a novel ensembling technique coined MAC that is designed to find the optimal function for combining models.
Being agnostic to the number of sub-models enables addition and replacement of sub-models to the combination even after deployment.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ensembles of models are well known to improve on single-model
performance. We present a novel ensembling technique, coined MAC, that is
designed to find the optimal function for combining models while remaining
invariant to the number of sub-models involved in the combination. Being
agnostic to the number of sub-models enables the addition and replacement of
sub-models in the combination even after deployment, unlike many current
ensembling methods such as stacking, boosting, mixture of experts, and super
learners, which lock the models used for combination during training and
therefore require retraining whenever a new model is introduced into the
ensemble. We show that on the Kaggle RSNA Intracranial Hemorrhage Detection
challenge, MAC outperforms classical averaging methods, is competitive with
boosting via XGBoost for a fixed number of sub-models, and outperforms it when
sub-models are added to the combination without retraining.
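The paper's exact combining function is not given in the abstract, but the core idea of a learned combiner that is invariant to the number of sub-models can be sketched with symmetric aggregation: summarize the per-model predictions with order- and count-invariant statistics, then apply a small learned function to those statistics. The `combine` function and `weights` parameterization below are hypothetical illustrations of this idea, not the authors' architecture:

```python
import numpy as np

def combine(sub_model_probs, weights):
    """Combine per-sample probabilities from any number of sub-models.

    sub_model_probs: array-like of shape (n_models, n_samples), each row one
    sub-model's predicted probability for the positive class.
    weights: learned parameters of the combining function (hypothetical);
    its shape does not depend on n_models, so sub-models can be added or
    replaced after deployment without retraining the combiner's structure.
    """
    p = np.asarray(sub_model_probs, dtype=float)
    # Symmetric summary statistics per sample: invariant to the order and
    # number of sub-models.
    feats = np.stack([p.mean(axis=0), p.min(axis=0), p.max(axis=0), p.std(axis=0)])
    # A simple learned linear combination followed by a sigmoid.
    logits = weights @ feats
    return 1.0 / (1.0 + np.exp(-logits))

# Three sub-models today, four tomorrow -- the same combiner applies.
probs3 = [[0.9, 0.2], [0.8, 0.3], [0.7, 0.1]]
w = np.array([2.0, 0.5, 0.5, -1.0])  # illustrative; would be fit on held-out data
print(combine(probs3, w))
print(combine(probs3 + [[0.6, 0.4]], w))  # a sub-model added without retraining
```

This contrasts with stacking, where the meta-learner's input dimension is tied to the number of base models, so adding one forces retraining.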
Related papers
- EMR-Merging: Tuning-Free High-Performance Model Merging [55.03509900949149]
We show that Elect, Mask & Rescale-Merging (EMR-Merging) achieves outstanding performance compared to existing merging methods.
EMR-Merging is tuning-free, thus requiring no data availability or any additional training while showing impressive performance.
arXiv Detail & Related papers (2024-05-23T05:25:45Z)
- Training-Free Pretrained Model Merging [38.16269074353077]
We propose an innovative model merging framework, coined as merging under dual-space constraints (MuDSC)
In order to enhance usability, we have also incorporated adaptations for group structure, including Multi-Head Attention and Group Normalization.
arXiv Detail & Related papers (2024-03-04T06:19:27Z)
- Class-Incremental Mixture of Gaussians for Deep Continual Learning [15.49323098362628]
We propose end-to-end incorporation of the mixture of Gaussians model into the continual learning framework.
We show that our model can effectively learn in memory-free scenarios with fixed extractors.
arXiv Detail & Related papers (2023-07-09T04:33:19Z)
- Sequential Ensembling for Semantic Segmentation [4.030520171276982]
We benchmark the popular ensembling approach of combining predictions of multiple, independently-trained, state-of-the-art models.
We propose a novel method inspired by boosting to sequentially ensemble networks that significantly outperforms the naive ensemble baseline.
arXiv Detail & Related papers (2022-10-08T22:13:59Z)
- Switchable Representation Learning Framework with Self-compatibility [50.48336074436792]
We propose a Switchable representation learning Framework with Self-Compatibility (SFSC)
SFSC generates a series of compatible sub-models with different capacities through one training process.
SFSC achieves state-of-the-art performance on the evaluated datasets.
arXiv Detail & Related papers (2022-06-16T16:46:32Z)
- Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [55.28436972267793]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z)
- Robust Finite Mixture Regression for Heterogeneous Targets [70.19798470463378]
We propose an FMR model that finds sample clusters and jointly models multiple incomplete mixed-type targets.
We provide non-asymptotic oracle performance bounds for our model under a high-dimensional learning framework.
The results show that our model can achieve state-of-the-art performance.
arXiv Detail & Related papers (2020-10-12T03:27:07Z)
- XEM: An Explainable-by-Design Ensemble Method for Multivariate Time Series Classification [61.33695273474151]
We present XEM, an eXplainable-by-design Ensemble method for Multivariate time series classification.
XEM relies on a new hybrid ensemble method that combines an explicit boosting-bagging approach and an implicit divide-and-conquer approach.
Our evaluation shows that XEM outperforms the state-of-the-art MTS classifiers on the public UEA datasets.
arXiv Detail & Related papers (2020-05-07T17:50:18Z)
- Hybrid modeling: Applications in real-time diagnosis [64.5040763067757]
We outline a novel hybrid modeling approach that combines machine-learning-inspired models with physics-based models.
We use such models for real-time diagnosis applications.
arXiv Detail & Related papers (2020-03-04T00:44:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.