A Performance-Driven Benchmark for Feature Selection in Tabular Deep
Learning
- URL: http://arxiv.org/abs/2311.05877v1
- Date: Fri, 10 Nov 2023 05:26:10 GMT
- Authors: Valeriia Cherepanova, Roman Levin, Gowthami Somepalli, Jonas Geiping,
C. Bayan Bruss, Andrew Gordon Wilson, Tom Goldstein, Micah Goldblum
- Abstract summary: Data scientists typically collect as many features as possible into their datasets, and even engineer new features from existing ones.
Existing benchmarks for tabular feature selection consider classical downstream models, toy synthetic datasets, or do not evaluate feature selectors on the basis of downstream performance.
We construct a challenging feature selection benchmark evaluated on downstream neural networks including transformers.
We also propose an input-gradient-based analogue of Lasso for neural networks that outperforms classical feature selection methods on challenging problems.
- Score: 131.2910403490434
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Academic tabular benchmarks often contain small sets of curated features. In
contrast, data scientists typically collect as many features as possible into
their datasets, and even engineer new features from existing ones. To prevent
overfitting in subsequent downstream modeling, practitioners commonly use
automated feature selection methods that identify a reduced subset of
informative features. Existing benchmarks for tabular feature selection
consider classical downstream models, toy synthetic datasets, or do not
evaluate feature selectors on the basis of downstream performance. Motivated by
the increasing popularity of tabular deep learning, we construct a challenging
feature selection benchmark evaluated on downstream neural networks including
transformers, using real datasets and multiple methods for generating
extraneous features. We also propose an input-gradient-based analogue of Lasso
for neural networks that outperforms classical feature selection methods on
challenging problems such as selecting from corrupted or second-order features.
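As a rough illustration, the following is a minimal PyTorch sketch of what an input-gradient-based Lasso analogue can look like; the per-feature grouping and penalty form are assumptions inferred from the abstract, not necessarily the paper's exact regularizer.

```python
import torch

def input_gradient_lasso_loss(model, loss_fn, X, y, lam=1e-2):
    """Group-Lasso penalty on input gradients for tabular X of shape
    (batch, features): compute dLoss/dX, collapse the batch dimension
    into one L2 norm per feature, then sum the norms (an L1 over
    features). Features whose gradients shrink to zero stop
    influencing the loss and can be dropped."""
    X = X.clone().requires_grad_(True)
    task_loss = loss_fn(model(X), y)
    # create_graph=True makes the penalty itself differentiable
    (grads,) = torch.autograd.grad(task_loss, X, create_graph=True)
    penalty = grads.norm(p=2, dim=0).sum()  # (batch, features) -> scalar
    return task_loss + lam * penalty
```

After training with this combined objective, ranking features by their per-feature gradient norms and discarding the weakest mirrors how Lasso zeroes out coefficients.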
Related papers
- Unveiling the Power of Sparse Neural Networks for Feature Selection [60.50319755984697]
Sparse Neural Networks (SNNs) have emerged as powerful tools for efficient feature selection.
We show that feature selection with SNNs trained with dynamic sparse training (DST) algorithms can achieve, on average, more than 50% memory and 55% FLOPs reduction.
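A hedged sketch of the underlying selection idea (the DST specifics are beyond this summary): after sparse training, score each input feature by the weight mass of its surviving first-layer connections.

```python
import torch

def sparse_input_feature_scores(weight, mask):
    """weight, mask: (hidden_units, num_features). The binary mask
    records which connections survived dynamic sparse training; a
    feature's score is the total magnitude of its remaining
    input-layer connections."""
    return (weight.abs() * mask).sum(dim=0)

# Hypothetical usage: keep the k highest-scoring features.
# selected = torch.topk(sparse_input_feature_scores(W1, M1), k=20).indices
```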
arXiv Detail & Related papers (2024-08-08T16:48:33Z)
- Neuro-Symbolic Embedding for Short and Effective Feature Selection via Autoregressive Generation [22.87577374767465]
We reformulate feature selection through a neuro-symbolic lens and introduce a novel generative framework aimed at identifying short and effective feature subsets.
In this framework, we first create a data collector to automatically collect numerous feature selection samples consisting of feature ID tokens, model performance, and the measurement of feature subset redundancy.
Building on the collected data, an encoder-decoder-evaluator learning paradigm is developed to preserve the intelligence of feature selection into a continuous embedding space for efficient search.
arXiv Detail & Related papers (2024-04-26T05:01:08Z)
- Feature Selection as Deep Sequential Generative Learning [50.00973409680637]
We develop a deep variational transformer model trained on a joint objective of sequential reconstruction, variational, and performance evaluator losses.
Our model can distill feature selection knowledge and learn a continuous embedding space to map feature selection decision sequences into embedding vectors associated with utility scores.
arXiv Detail & Related papers (2024-03-06T16:31:56Z)
- Causal Feature Selection via Transfer Entropy [59.999594949050596]
Causal discovery aims to identify causal relationships between features with observational data.
We introduce a new causal feature selection approach that relies on forward and backward feature selection procedures.
We provide theoretical guarantees on the regression and classification errors for both the exact and the finite-sample cases.
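As an illustration of the forward half of such a procedure, here is a greedy selection sketch with the transfer-entropy criterion abstracted into a generic score callable (its estimation is paper-specific and assumed away here):

```python
import numpy as np

def forward_select(X, y, score, k):
    """Greedy forward selection: repeatedly add the feature whose
    inclusion maximizes score(X[:, subset], y). In the paper's setting
    the score would be transfer-entropy-based; any callable works for
    this sketch."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        best = max(remaining, key=lambda j: score(X[:, selected + [j]], y))
        selected.append(best)
        remaining.remove(best)
    return selected
```

The backward procedure would mirror this loop by greedily removing the feature whose deletion hurts the score least.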
arXiv Detail & Related papers (2023-10-17T08:04:45Z)
- Supervised Feature Selection with Neuron Evolution in Sparse Neural Networks [17.12834153477201]
We propose NeuroFS, a novel resource-efficient supervised feature selection method using sparse neural networks.
By gradually pruning the uninformative features from the input layer of a sparse neural network trained from scratch, NeuroFS derives an informative subset of features efficiently.
NeuroFS achieves the highest ranking-based score among the considered state-of-the-art supervised feature selection models.
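A loose sketch of the gradual input pruning described above (the schedule and scoring criterion are assumptions, not NeuroFS itself):

```python
import torch

@torch.no_grad()
def prune_weakest_inputs(weight, active, n_drop):
    """weight: (hidden_units, num_features) first-layer matrix;
    active: boolean mask over features. Zero out the n_drop weakest
    still-active inputs, mimicking the gradual removal of
    uninformative features during training."""
    mass = weight.abs().sum(dim=0)
    mass[~active] = float("inf")  # never re-rank already-pruned inputs
    drop = torch.topk(mass, n_drop, largest=False).indices
    active[drop] = False
    weight[:, drop] = 0.0
    return active
```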
arXiv Detail & Related papers (2023-03-10T17:09:55Z)
- Transfer Learning with Deep Tabular Models [66.67017691983182]
We show that upstream data gives tabular neural networks a decisive advantage over GBDT models.
We propose a realistic medical diagnosis benchmark for tabular transfer learning.
We propose a pseudo-feature method for cases where the upstream and downstream feature sets differ.
arXiv Detail & Related papers (2022-06-30T14:24:32Z)
- Learning Debiased and Disentangled Representations for Semantic Segmentation [52.35766945827972]
We propose a model-agnostic training scheme for semantic segmentation.
By randomly eliminating certain class information in each training iteration, we effectively reduce feature dependencies among classes.
Models trained with our approach demonstrate strong results on multiple semantic segmentation benchmarks.
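A hedged guess at what "randomly eliminating certain class information" might look like in code (the paper's actual mechanism may differ): ignore the pixels of a random subset of classes in each iteration's loss.

```python
import torch
import torch.nn.functional as F

def class_dropped_loss(logits, target, drop_prob=0.2, ignore_index=255):
    """logits: (B, C, H, W); target: (B, H, W) with class indices.
    Pixels of randomly chosen classes are ignored this iteration, so
    the model cannot rely on classes always co-occurring."""
    target = target.clone()
    for c in range(logits.shape[1]):
        if torch.rand(()) < drop_prob:
            target[target == c] = ignore_index
    return F.cross_entropy(logits, target, ignore_index=ignore_index)
```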
arXiv Detail & Related papers (2021-10-31T16:15:09Z)
- Feature Selection Using Reinforcement Learning [0.0]
The space of variables or features that can be used to characterize a particular predictor of interest continues to grow exponentially.
Identifying the most informative features, those that minimize variance without jeopardizing the bias of our models, is critical to successfully training a machine learning model.
arXiv Detail & Related papers (2021-01-23T09:24:37Z)
- Feature Selection Using Batch-Wise Attenuation and Feature Mask Normalization [6.6357750579293935]
This paper proposes a feature mask module (FM-module) for feature selection based on a novel batch-wise attenuation and feature mask normalization.
Experiments on popular image, text and speech datasets have shown that our approach is easy to use and has superior performance in comparison with other state-of-the-art deep-learning-based feature selection methods.
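A minimal sketch of a learnable feature mask in this spirit (the softmax normalization below is an assumption standing in for the paper's feature mask normalization):

```python
import torch
import torch.nn as nn

class FeatureMask(nn.Module):
    """Learns one logit per feature; a softmax over features yields a
    normalized mask that rescales every input. Features whose mask
    weights stay near zero across batches are candidates to drop."""
    def __init__(self, num_features):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_features))

    def forward(self, x):  # x: (batch, num_features)
        mask = torch.softmax(self.logits, dim=0)
        return x * mask  # broadcasts the mask over the batch
```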
arXiv Detail & Related papers (2020-10-26T14:46:38Z)
- On Feature Selection Using Anisotropic General Regression Neural Network [3.880707330499936]
The presence of irrelevant features in the input dataset tends to reduce the interpretability and predictive quality of machine learning models.
Here we show how the General Regression Neural Network used with an anisotropic Gaussian Kernel can be used to perform feature selection.
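For concreteness, a compact sketch of GRNN (Nadaraya-Watson) regression with an anisotropic kernel; bandwidth fitting is omitted, and the selection rule (a large bandwidth marks an irrelevant feature) is an assumption based on this summary.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma):
    """Anisotropic Gaussian kernel regression: sigma holds one
    bandwidth per feature, so a very large sigma[j] effectively
    removes feature j from the distance, flagging it as irrelevant."""
    d = (X_query[:, None, :] - X_train[None, :, :]) / sigma  # (Q, N, F)
    w = np.exp(-0.5 * (d ** 2).sum(axis=-1))                 # (Q, N)
    return (w @ y_train) / w.sum(axis=1)
```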
arXiv Detail & Related papers (2020-10-12T14:35:40Z)