A post-selection algorithm for improving dynamic ensemble selection methods
- URL: http://arxiv.org/abs/2309.14307v2
- Date: Tue, 26 Sep 2023 12:18:39 GMT
- Title: A post-selection algorithm for improving dynamic ensemble selection methods
- Authors: Paulo R.G. Cordeiro, George D.C. Cavalcanti and Rafael M.O. Cruz
- Abstract summary: Post-Selection Dynamic Ensemble Selection (PS-DES) is a post-selection scheme that evaluates ensembles selected by several DES techniques using different metrics.
Using accuracy as a metric to select the ensembles, PS-DES performs better than individual DES techniques.
- Score: 6.59003008107689
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Dynamic Ensemble Selection (DES) is a Multiple Classifier Systems (MCS)
approach that aims to select an ensemble for each query sample during the
selection phase. Although several DES approaches have been proposed, no single
DES technique is the best choice across different problems. Thus, we
hypothesize that selecting the best DES approach per query instance can lead to
better accuracy. To evaluate this idea, we introduce the Post-Selection Dynamic
Ensemble Selection (PS-DES) approach, a post-selection scheme that evaluates
ensembles selected by several DES techniques using different metrics.
Experimental results show that using accuracy as a metric to select the
ensembles, PS-DES performs better than individual DES techniques. PS-DES source
code is available in a GitHub repository.
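To make the post-selection idea concrete, the sketch below proceeds one query at a time: each competing DES technique selects an ensemble from the pool, every selected ensemble is scored with accuracy on the query's local region of the DSEL (dynamic selection) set, and the prediction of the best-scoring ensemble is kept. It is a minimal sketch, not the authors' released implementation; the bagging pool, the simplified KNORA-U-style rule standing in for distinct DES techniques, and the neighbourhood-accuracy scoring are assumptions made only for illustration.

```python
# Minimal, self-contained sketch of the post-selection idea behind PS-DES,
# written against scikit-learn only. NOT the authors' released implementation:
# the pool, the simplified KNORA-U-style selection rule used as a stand-in for
# the competing DES techniques, and the local-accuracy post-selection metric
# are assumptions made for illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier


def majority_vote(members, x):
    """Hard majority vote of an ensemble on a single sample."""
    votes = [int(clf.predict(x.reshape(1, -1))[0]) for clf in members]
    return int(np.bincount(votes).argmax())


def select_ensemble(pool, X_dsel, y_dsel, neigh_idx):
    """Simplified KNORA-U-style rule: keep classifiers that hit >= 1 neighbour."""
    hits = [clf for clf in pool
            if (clf.predict(X_dsel[neigh_idx]) == y_dsel[neigh_idx]).any()]
    return hits if hits else list(pool)


# Train / DSEL / test split and a bagging pool of weak learners.
X, y = make_classification(n_samples=600, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_dsel, X_test, y_dsel, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)
pool = BaggingClassifier(DecisionTreeClassifier(max_depth=3),
                         n_estimators=20, random_state=0).fit(X_train, y_train)

# Each neighbourhood size plays the role of one competing DES technique.
ks = [3, 5, 7]
nn = {k: NearestNeighbors(n_neighbors=k).fit(X_dsel) for k in ks}

correct = 0
for x, target in zip(X_test, y_test):
    best_score, best_pred = -1.0, None
    for k in ks:  # post-selection: score every technique's ensemble on the local region
        idx = nn[k].kneighbors(x.reshape(1, -1), return_distance=False)[0]
        ens = select_ensemble(pool.estimators_, X_dsel, y_dsel, idx)
        local_acc = np.mean([majority_vote(ens, xi) == yi
                             for xi, yi in zip(X_dsel[idx], y_dsel[idx])])
        if local_acc > best_score:
            best_score, best_pred = local_acc, majority_vote(ens, x)
    correct += int(best_pred == target)

print("PS-DES-style post-selection accuracy:", correct / len(y_test))
```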
Related papers
- An incremental preference elicitation-based approach to learning potentially non-monotonic preferences in multi-criteria sorting [53.36437745983783]
We first construct a max-margin optimization-based model to capture potentially non-monotonic preferences.
We devise information amount measurement methods and question selection strategies to pinpoint the most informative alternative in each iteration.
Two incremental preference elicitation-based algorithms are developed to learn potentially non-monotonic preferences.
arXiv Detail & Related papers (2024-09-04T14:36:20Z)
- BWS: Best Window Selection Based on Sample Scores for Data Pruning across Broad Ranges [12.248397169100784]
Data subset selection aims to find a smaller yet informative subset of a large dataset that can approximate training on the full dataset.
We introduce Best Window Selection (BWS), a universal and efficient data subset selection method that chooses the best window subset from samples ordered by their difficulty scores (a minimal sketch of this window-selection idea appears after this list).
arXiv Detail & Related papers (2024-06-05T08:33:09Z)
- Towards Global Optimal Visual In-Context Learning Prompt Selection [50.174301123013045]
We propose a novel in-context example selection framework to identify the global optimal prompt.
Our method, dubbed Partial2Global, adopts a transformer-based list-wise ranker to provide a more comprehensive comparison.
The effectiveness of Partial2Global is validated through experiments on foreground segmentation, single object detection and image colorization.
arXiv Detail & Related papers (2024-05-24T07:07:24Z)
- Large Language Models Are Not Robust Multiple Choice Selectors [117.72712117510953]
Multiple choice questions (MCQs) serve as a common yet important task format in the evaluation of large language models (LLMs).
This work shows that modern LLMs are vulnerable to option position changes due to their inherent "selection bias".
We propose a label-free, inference-time debiasing method, called PriDe, which separates the model's prior bias for option IDs from the overall prediction distribution.
arXiv Detail & Related papers (2023-09-07T17:44:56Z)
- Finding Optimal Diverse Feature Sets with Alternative Feature Selection [0.0]
We introduce alternative feature selection and formalize it as an optimization problem.
In particular, we define alternatives via constraints and enable users to control the number and dissimilarity of alternatives.
We show that a constant-factor approximation exists under certain conditions and propose corresponding search methods.
arXiv Detail & Related papers (2023-07-21T14:23:41Z)
- Improving Probability-based Prompt Selection Through Unified Evaluation and Analysis [52.04932081106623]
We propose a unified framework to interpret and evaluate the existing probability-based prompt selection methods.
We find that each of the existing methods can be interpreted as a variant of the method that maximizes mutual information between the input and the predicted output (MI).
We propose a novel calibration method, Calibration By Marginalization (CBM), which is orthogonal to the existing methods and helps increase the prompt selection effectiveness of the best method to 96.85%, achieving 99.44% of the oracle prompt F1 without calibration.
arXiv Detail & Related papers (2023-05-24T08:29:50Z)
- Meta-Learning Approaches for a One-Shot Collective-Decision Aggregation: Correctly Choosing how to Choose Correctly [0.7874708385247353]
We present two one-shot machine-learning-based aggregation approaches.
The first predicts, given multiple features about the collective's choices, which aggregation method will be best for a given case.
The second directly predicts which decision is optimal, given, among other things, the selection made by each method.
arXiv Detail & Related papers (2022-04-03T15:06:59Z)
- Max-Utility Based Arm Selection Strategy For Sequential Query Recommendations [16.986870945319293]
We consider the query recommendation problem in closed loop interactive learning settings like online information gathering and exploratory analytics.
The problem can be naturally modelled using the Multi-Armed Bandits (MAB) framework with countably many arms.
We show that such a selection strategy often results in higher cumulative regret; to address this, we propose a selection strategy based on the maximum utility of the arms.
arXiv Detail & Related papers (2021-08-31T13:03:30Z)
- Adversarial Option-Aware Hierarchical Imitation Learning [89.92994158193237]
We propose Option-GAIL, a novel method for learning skills over long horizons.
The key idea of Option-GAIL is to model the task hierarchy with options and to train the policy via generative adversarial optimization.
Experiments show that Option-GAIL outperforms other counterparts consistently across a variety of tasks.
arXiv Detail & Related papers (2021-06-10T06:42:05Z)
- Lookahead and Hybrid Sample Allocation Procedures for Multiple Attribute Selection Decisions [0.9137554315375922]
This paper considers settings in which each measurement yields one sample of one attribute for one alternative.
When given a fixed number of samples to collect, the decision-maker must determine which samples to obtain, make the measurements, update prior beliefs about the attribute magnitudes, and then select an alternative.
arXiv Detail & Related papers (2020-07-31T15:04:49Z)
- Multi-Task Multicriteria Hyperparameter Optimization [77.34726150561087]
The article begins with a mathematical formulation of the problem of choosing optimal hyperparameters.
The steps of the MTMC method that solves this problem are described.
The proposed method is evaluated on the image classification problem using a convolutional neural network.
arXiv Detail & Related papers (2020-02-15T12:47:53Z)
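The window-selection idea referenced in the BWS entry above can be illustrated with a short, self-contained sketch: order the training samples by a difficulty score, slide a fixed-size window over that ordering, score each window with a cheap proxy model on held-out data, and keep the best window as the subset. The difficulty score (distance to the class centroid), the window size, and the proxy evaluation are placeholder assumptions for this illustration, not the procedure from the BWS paper.

```python
# Illustrative window-based subset selection, inspired by the BWS summary above
# (not the paper's algorithm). Difficulty score and proxy evaluation are placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# Placeholder difficulty score: distance of each sample to its own class centroid.
centroids = {c: X_train[y_train == c].mean(axis=0) for c in np.unique(y_train)}
difficulty = np.array([np.linalg.norm(xi - centroids[yi]) for xi, yi in zip(X_train, y_train)])
order = np.argsort(difficulty)                      # easiest -> hardest

window, stride = 200, 50
best_score, best_idx = -1.0, None
for start in range(0, len(order) - window + 1, stride):
    idx = order[start:start + window]               # one contiguous window of the ordering
    proxy = LogisticRegression(max_iter=1000).fit(X_train[idx], y_train[idx])
    score = proxy.score(X_val, y_val)               # cheap proxy evaluation of the window
    if score > best_score:
        best_score, best_idx = score, idx

print(f"best window size: {len(best_idx)}, proxy validation accuracy: {best_score:.3f}")
```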