E2E-FS: An End-to-End Feature Selection Method for Neural Networks
- URL: http://arxiv.org/abs/2012.07671v1
- Date: Mon, 14 Dec 2020 16:19:25 GMT
- Title: E2E-FS: An End-to-End Feature Selection Method for Neural Networks
- Authors: Brais Cancela, Verónica Bolón-Canedo and Amparo Alonso-Betanzos
- Abstract summary: We present a novel embedded feature selection algorithm, called End-to-End Feature Selection (E2E-FS).
Our algorithm, similar to the lasso approach, is solved with gradient descent techniques.
Although it imposes hard restrictions, experimental results show that this algorithm can be used with any learning model trained by gradient descent.
- Score: 0.3222802562733786
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Classic embedded feature selection algorithms are often divided into
two large groups: tree-based algorithms and lasso variants. The two approaches
focus on different aspects: while tree-based algorithms provide a clear
explanation of which variables are used to trigger a certain output,
lasso-like approaches sacrifice a detailed explanation in favor of increased
accuracy. In this paper, we present a novel embedded feature selection
algorithm, called End-to-End Feature Selection (E2E-FS), that aims to provide
both accuracy and explainability. Despite having non-convex regularization
terms, our algorithm, like the lasso approach, is solved with gradient descent
techniques, introducing restrictions that force the model to select at most a
fixed maximum number of features, which are subsequently used by the
classifier. Although these are hard restrictions, the experimental results
show that the algorithm can be used with any learning model that is trained
with a gradient descent algorithm.
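The abstract describes the general recipe shared by lasso-style embedded selectors: a differentiable sparsity penalty drives most feature weights toward zero during ordinary gradient-descent training, and the surviving features are the "selected" ones. The sketch below illustrates only that generic recipe, not the authors' E2E-FS implementation; the gate vector, learning rate, and L1 strength are illustrative assumptions.

```python
# Generic embedded feature selection by gradient descent -- NOT E2E-FS itself.
# A trainable gate vector g scales each input feature; an L1 penalty on g
# pushes the gates of uninformative features toward zero.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: only the first 2 of 10 features carry signal.
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(10)        # classifier weights
g = np.ones(10)         # feature gates (selection scores)
lr, lam = 0.1, 0.05     # learning rate and L1 strength (assumed values)

for _ in range(500):
    z = (X * g) @ w                       # gated linear model
    p = 1.0 / (1.0 + np.exp(-z))          # sigmoid
    err = p - y                           # gradient of the logistic loss w.r.t. z
    grad_w = (X * g).T @ err / len(y)
    grad_g = (X * w).T @ err / len(y) + lam * np.sign(g)  # loss + L1 term
    w -= lr * grad_w
    g -= lr * grad_g

# Keep the 2 features with the strongest gates.
selected = sorted(np.argsort(-np.abs(g))[:2].tolist())
print(selected)
```

On this toy problem the two informative features end up with the largest gate magnitudes, which is the behavior an embedded selector is after; E2E-FS additionally enforces a hard cap on the number of selected features, which this sketch only mimics by truncating the ranking.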
Related papers
- Fair Feature Subset Selection using Multiobjective Genetic Algorithm [0.0]
We present a feature subset selection approach that improves both fairness and accuracy objectives.
We use statistical disparity as a fairness metric and F1-Score as a metric for model performance.
Our experiments on the most commonly used fairness benchmark datasets show that using the evolutionary algorithm we can effectively explore the trade-off between fairness and accuracy.
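The summary names statistical disparity as the fairness objective. A minimal sketch of that metric, under the common reading of statistical disparity as the statistical parity difference (the function and variable names are illustrative, not from the paper):

```python
# Statistical parity difference: the gap in positive-prediction rates
# between two protected groups. 0 is the fairest possible value.
import numpy as np

def statistical_parity_difference(y_pred, group):
    """|P(y_pred=1 | group=0) - P(y_pred=1 | group=1)|."""
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_0 = y_pred[group == 0].mean()
    rate_1 = y_pred[group == 1].mean()
    return abs(rate_0 - rate_1)

# Balanced positive rates across groups -> disparity 0.0
print(statistical_parity_difference([1, 0, 1, 0], [0, 0, 1, 1]))  # → 0.0
```

A multiobjective search such as the genetic algorithm above would then minimize this quantity while maximizing F1-Score over candidate feature subsets.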
arXiv Detail & Related papers (2022-04-30T22:51:19Z)
- Per-run Algorithm Selection with Warm-starting using Trajectory-based Features [5.073358743426584]
Per-instance algorithm selection seeks to recommend, for a given problem instance, one or several suitable algorithms.
We propose an online algorithm selection scheme which we coin per-run algorithm selection.
We show that our approach outperforms static per-instance algorithm selection.
arXiv Detail & Related papers (2022-04-20T14:30:42Z)
- Algorithm Selection on a Meta Level [58.720142291102135]
We introduce the problem of meta algorithm selection, which essentially asks for the best way to combine a given set of algorithm selectors.
We present a general methodological framework for meta algorithm selection as well as several concrete learning methods as instantiations of this framework.
arXiv Detail & Related papers (2021-07-20T11:23:21Z)
- Polygonal Unadjusted Langevin Algorithms: Creating stable and efficient adaptive algorithms for neural networks [0.0]
We present a new class of Langevin-based algorithms, which overcomes many of the known shortcomings of popular adaptive vanishing algorithms.
In particular, we provide a nonasymptotic analysis and full theoretical guarantees for the convergence properties of an algorithm of this novel class, which we named TH$\varepsilon$O POULA (or, simply, TheoPouLa).
arXiv Detail & Related papers (2021-05-28T15:58:48Z)
- Two novel features selection algorithms based on crowding distance [0.0]
The proposed algorithms use the crowding distance from multiobjective optimization as a metric to sort the features.
The experimental results show the effectiveness and robustness of the proposed algorithms.
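Crowding distance is the diversity measure from NSGA-II: for each point it sums the normalized spread of its two nearest neighbors along every objective, with boundary points scored infinite. A minimal sketch of that standard computation, which the summary says is repurposed to rank features (this is the textbook NSGA-II formula, not code from the paper):

```python
# Standard NSGA-II crowding distance over a set of objective vectors.
import numpy as np

def crowding_distance(objectives):
    """objectives: (n_points, n_objectives) array-like. Returns (n_points,)."""
    obj = np.asarray(objectives, dtype=float)
    n, m = obj.shape
    dist = np.zeros(n)
    for j in range(m):
        order = np.argsort(obj[:, j])
        span = obj[order[-1], j] - obj[order[0], j]
        dist[order[0]] = dist[order[-1]] = np.inf   # boundary points
        if span == 0:
            continue                                 # degenerate objective
        for k in range(1, n - 1):
            # normalized gap between the two neighbors along objective j
            dist[order[k]] += (obj[order[k + 1], j] - obj[order[k - 1], j]) / span
    return dist

scores = crowding_distance([[1.0, 5.0], [2.0, 4.0], [3.0, 3.0], [4.0, 2.0]])
```

Features (or points) with a larger crowding distance sit in less crowded regions of the objective space, so sorting by this score favors a diverse selection.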
arXiv Detail & Related papers (2021-05-11T17:27:56Z)
- Towards Meta-Algorithm Selection [78.13985819417974]
Instance-specific algorithm selection (AS) deals with the automatic selection of an algorithm from a fixed set of candidates.
We show that meta-algorithm-selection can indeed prove beneficial in some cases.
arXiv Detail & Related papers (2020-11-17T17:27:33Z)
- Modeling Text with Decision Forests using Categorical-Set Splits [2.434796198711328]
In axis-aligned decision forests, the "decision" to route an input example is the result of the evaluation of a condition on a single dimension in the feature space.
We define a condition that is specific to categorical-set features and present an algorithm to learn it.
Our algorithm is efficient during training and the resulting conditions are fast to evaluate with our extension of the QuickScorer inference algorithm.
arXiv Detail & Related papers (2020-09-21T16:16:35Z)
- Run2Survive: A Decision-theoretic Approach to Algorithm Selection based on Survival Analysis [75.64261155172856]
Survival analysis (SA) naturally supports censored data and offers appropriate ways to use such data for learning distributional models of algorithm runtime.
We leverage such models as a basis of a sophisticated decision-theoretic approach to algorithm selection, which we dub Run2Survive.
In an extensive experimental study with the standard benchmark ASlib, our approach is shown to be highly competitive and in many cases even superior to state-of-the-art AS approaches.
arXiv Detail & Related papers (2020-07-06T15:20:17Z)
- Learning to Accelerate Heuristic Searching for Large-Scale Maximum Weighted b-Matching Problems in Online Advertising [51.97494906131859]
Bipartite b-matching is fundamental in algorithm design, and has been widely applied in economic markets, labor markets, etc.
Existing exact and approximate algorithms usually fail in such settings, requiring either intolerable running time or excessive computational resources.
We propose NeuSearcher, which leverages the knowledge learned from previous instances to solve new problem instances.
arXiv Detail & Related papers (2020-05-09T02:48:23Z)
- Extreme Algorithm Selection With Dyadic Feature Representation [78.13985819417974]
We propose the setting of extreme algorithm selection (XAS) where we consider fixed sets of thousands of candidate algorithms.
We assess the applicability of state-of-the-art AS techniques to the XAS setting and propose approaches leveraging a dyadic feature representation.
arXiv Detail & Related papers (2020-01-29T09:40:58Z)
- Stepwise Model Selection for Sequence Prediction via Deep Kernel Learning [100.83444258562263]
We propose a novel Bayesian optimization (BO) algorithm to tackle the challenge of model selection in this setting.
In order to solve the resulting multiple black-box function optimization problem jointly and efficiently, we exploit potential correlations among black-box functions.
We are the first to formulate the problem of stepwise model selection (SMS) for sequence prediction, and to design and demonstrate an efficient joint-learning algorithm for this purpose.
arXiv Detail & Related papers (2020-01-12T09:42:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.