Developing parsimonious ensembles using predictor diversity within a
reinforcement learning framework
- URL: http://arxiv.org/abs/2102.07344v1
- Date: Mon, 15 Feb 2021 05:00:19 GMT
- Title: Developing parsimonious ensembles using predictor diversity within a
reinforcement learning framework
- Authors: Ana Stanescu and Gaurav Pandey
- Abstract summary: We present several algorithms that incorporate ensemble diversity into a reinforcement learning (RL)-based ensemble selection framework.
These algorithms can eventually aid the interpretation or reverse engineering of predictive models assimilated into effective ensembles.
- Score: 4.204145943086225
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Heterogeneous ensembles that can aggregate an unrestricted number and variety
of base predictors can effectively address challenging prediction problems. In
particular, accurate ensembles that are also parsimonious, i.e., consist of as
few base predictors as possible, can help reveal potentially useful knowledge
about the target problem domain. Although ensemble selection offers a potential
approach to achieving these goals, the currently available algorithms are
limited in their abilities. In this paper, we present several algorithms that
incorporate ensemble diversity into a reinforcement learning (RL)-based
ensemble selection framework to build accurate and parsimonious ensembles.
These algorithms, as well as several baselines, are rigorously evaluated on
datasets from diverse domains in terms of the predictive performance and
parsimony of their ensembles. This evaluation demonstrates that our
diversity-incorporated RL-based algorithms perform better than the others for
constructing simultaneously accurate and parsimonious ensembles. These
algorithms can eventually aid the interpretation or reverse engineering of
predictive models assimilated into effective ensembles. To enable such a
translation, an implementation of these algorithms, as well as the experimental
setup in which they are evaluated, has been made available at
https://github.com/GauravPandeyLab/lens-learning-ensembles-using-reinforcement-learning.
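The abstract does not reproduce the algorithms themselves; as an illustration of the core idea, a greedy diversity-aware ensemble selection loop (a deliberate simplification of the paper's RL formulation, with all function names and the accuracy/diversity trade-off hypothetical) might look like:

```python
def disagreement(p, q):
    """Fraction of instances on which two predictors disagree."""
    return sum(a != b for a, b in zip(p, q)) / len(p)

def accuracy(preds, labels):
    """Fraction of correct predictions."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def majority_vote(members):
    """Combine binary (0/1) member predictions by majority vote."""
    return [round(sum(m[i] for m in members) / len(members))
            for i in range(len(members[0]))]

def select_ensemble(base_preds, labels, alpha=0.5):
    """Greedy stand-in for RL-based selection: repeatedly add the base
    predictor that maximizes ensemble accuracy plus alpha times its mean
    disagreement with the current members; stop when the score no longer
    improves, yielding a small (parsimonious) ensemble."""
    selected = []
    remaining = list(range(len(base_preds)))
    best_score = float("-inf")
    while remaining:
        scored = []
        for i in remaining:
            members = [base_preds[j] for j in selected] + [base_preds[i]]
            acc = accuracy(majority_vote(members), labels)
            div = (sum(disagreement(base_preds[i], base_preds[j])
                       for j in selected) / len(selected)) if selected else 0.0
            scored.append((acc + alpha * div, i))
        score, best = max(scored)
        if score <= best_score:
            break
        best_score = score
        selected.append(best)
        remaining.remove(best)
    return selected
```

Unlike this greedy sketch, the paper's framework explores candidate ensembles via reinforcement learning, but the objective shape (accuracy plus a diversity bonus, with early stopping for parsimony) is the same idea in miniature.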
Related papers
- Dynamic Post-Hoc Neural Ensemblers [55.15643209328513]
In this study, we explore employing neural networks as ensemble methods.
Motivated by the risk of learning low-diversity ensembles, we propose regularizing the model by randomly dropping base model predictions.
We demonstrate that this approach lower-bounds the diversity within the ensemble, reducing overfitting and improving generalization.
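The regularization idea in this summary, randomly dropping base model predictions before aggregation, can be sketched as a dropout-style averaging combiner; this is an illustrative simplification, and the function name and default rate are hypothetical:

```python
import random

def dropout_ensemble(base_preds, drop_prob=0.3, rng=None):
    """Average base model predictions, dropping each base model with
    probability drop_prob so the combiner cannot rely on any single
    (or low-diversity) subset of models; at least one model is kept."""
    rng = rng or random.Random()
    kept = [p for p in base_preds if rng.random() >= drop_prob]
    if not kept:  # never drop every model
        kept = [rng.choice(base_preds)]
    return [sum(p[i] for p in kept) / len(kept)
            for i in range(len(kept[0]))]
```

In the paper's setting the dropped predictions feed a learned neural combiner rather than a plain average; the sketch only shows the random-dropping mechanism itself.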
arXiv Detail & Related papers (2024-10-06T15:25:39Z) - Evaluating Ensemble Methods for News Recommender Systems [50.90330146667386]
This paper demonstrates how ensemble methods can be used to combine many diverse state-of-the-art algorithms to achieve superior results on the Microsoft News dataset (MIND)
Our findings demonstrate that a combination of NRS algorithms can outperform individual algorithms, provided that the base learners are sufficiently diverse.
arXiv Detail & Related papers (2024-06-23T13:40:50Z) - Quantized Hierarchical Federated Learning: A Robust Approach to
Statistical Heterogeneity [3.8798345704175534]
We present a novel hierarchical federated learning algorithm that incorporates quantization for communication-efficiency.
We offer a comprehensive analytical framework to evaluate its optimality gap and convergence rate.
Our findings reveal that our algorithm consistently achieves high learning accuracy over a range of parameters.
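One ingredient of the communication efficiency this summary mentions is quantizing model updates before transmission. A minimal uniform-quantization sketch (not the paper's exact scheme; names, levels, and clipping range are hypothetical) could be:

```python
def quantize(values, levels=16, lo=-1.0, hi=1.0):
    """Uniformly quantize each value into `levels` buckets over [lo, hi],
    returning small integer codes that are cheap to transmit."""
    step = (hi - lo) / (levels - 1)
    return [round((min(max(v, lo), hi) - lo) / step) for v in values]

def dequantize(codes, levels=16, lo=-1.0, hi=1.0):
    """Reconstruct approximate values from quantization codes."""
    step = (hi - lo) / (levels - 1)
    return [lo + c * step for c in codes]
```

In a hierarchical federated setup, clients would upload `quantize(update)` to their edge server, which dequantizes, aggregates, and forwards a (re-quantized) aggregate to the cloud; the quantization error is what the paper's optimality-gap analysis accounts for.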
arXiv Detail & Related papers (2024-03-03T15:40:24Z) - Towards a Systematic Approach to Design New Ensemble Learning Algorithms [0.0]
This study revisits the foundational work on ensemble error decomposition.
Recent advancements introduced a "unified theory of diversity".
Our research systematically explores the application of this decomposition to guide the creation of new ensemble learning algorithms.
arXiv Detail & Related papers (2024-02-09T22:59:20Z) - Structurally Diverse Sampling Reduces Spurious Correlations in Semantic
Parsing Datasets [51.095144091781734]
We propose a novel algorithm for sampling a structurally diverse set of instances from a labeled instance pool with structured outputs.
We show that our algorithm performs competitively with or better than prior algorithms in not only compositional template splits but also traditional IID splits.
In general, we find that diverse train sets lead to better generalization than random training sets of the same size in 9 out of 10 dataset-split pairs.
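The structurally diverse sampling this summary describes can be illustrated with a greedy novelty-based selector; this is a generic sketch of the idea, not the paper's algorithm, and the `structure` function (mapping an instance to its structural templates) is hypothetical:

```python
def greedy_diverse_sample(pool, k, structure=lambda x: x):
    """Greedily select k instances from the pool, each time taking the
    instance that contributes the most structural templates not yet
    seen in the selected set."""
    selected, seen = [], set()
    remaining = list(pool)
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda x: len(set(structure(x)) - seen))
        selected.append(best)
        seen |= set(structure(best))
        remaining.remove(best)
    return selected
```

For semantic parsing, `structure` would extract output templates (e.g., anonymized logical forms); here any iterable of hashable features works.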
arXiv Detail & Related papers (2022-03-16T07:41:27Z) - Harnessing Heterogeneity: Learning from Decomposed Feedback in Bayesian
Modeling [68.69431580852535]
We introduce a novel GP regression to incorporate the subgroup feedback.
Our modified regression has provably lower variance -- and thus a more accurate posterior -- compared to previous approaches.
We execute our algorithm on two disparate social problems.
arXiv Detail & Related papers (2021-07-07T03:57:22Z) - HAWKS: Evolving Challenging Benchmark Sets for Cluster Analysis [2.5329716878122404]
Comprehensive benchmarking of clustering algorithms is difficult.
There is no consensus regarding the best practice for rigorous benchmarking.
We demonstrate the important role evolutionary algorithms play in supporting the flexible generation of such benchmarks.
arXiv Detail & Related papers (2021-02-13T15:01:34Z) - A Comparative Analysis of the Ensemble Methods for Drug Design [0.0]
Ensemble-based machine learning approaches have been used to overcome limitations and generate reliable predictions.
In this article, 57 algorithms were developed and compared on 4 different datasets.
The proposed individual models did not show impressive results on their own, but proved to be the most important predictors when combined into a unified model.
arXiv Detail & Related papers (2020-12-11T05:27:20Z) - Combining Task Predictors via Enhancing Joint Predictability [53.46348489300652]
We present a new predictor combination algorithm that improves the target by i) measuring the relevance of references based on their capabilities in predicting the target, and ii) strengthening such estimated relevance.
Our algorithm jointly assesses the relevance of all references by adopting a Bayesian framework.
Based on experiments on seven real-world datasets from visual attribute ranking and multi-class classification scenarios, we demonstrate that our algorithm offers a significant performance gain and broadens the application range of existing predictor combination approaches.
arXiv Detail & Related papers (2020-07-15T21:58:39Z) - Neural Ensemble Search for Uncertainty Estimation and Dataset Shift [67.57720300323928]
Ensembles of neural networks achieve superior performance compared to stand-alone networks in terms of accuracy, uncertainty calibration and robustness to dataset shift.
We propose two methods for automatically constructing ensembles with varying architectures.
We show that the resulting ensembles outperform deep ensembles not only in terms of accuracy but also uncertainty calibration and robustness to dataset shift.
arXiv Detail & Related papers (2020-06-15T17:38:15Z) - Active Learning in Video Tracking [8.782204980889079]
We propose an adversarial approach for active learning with structured prediction domains that is tractable for matching.
We evaluate this approach algorithmically on an important structured prediction problem: object tracking in videos.
arXiv Detail & Related papers (2019-12-29T00:42:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.