Feature Selection Using Batch-Wise Attenuation and Feature Mask Normalization
- URL: http://arxiv.org/abs/2010.13631v3
- Date: Fri, 23 Apr 2021 14:28:38 GMT
- Title: Feature Selection Using Batch-Wise Attenuation and Feature Mask Normalization
- Authors: Yiwen Liao, Raphaël Latty, Bin Yang
- Abstract summary: This paper proposes a feature mask module (FM-module) for feature selection based on a novel batch-wise attenuation and feature mask normalization.
Experiments on popular image, text and speech datasets have shown that our approach is easy to use and has superior performance in comparison with other state-of-the-art deep-learning-based feature selection methods.
- Score: 6.6357750579293935
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Feature selection is generally used as one of the most important
preprocessing techniques in machine learning, as it helps to reduce the
dimensionality of data and assists researchers and practitioners in
understanding data. Feature selection can therefore be expected to deliver
better performance along with reduced computational cost, memory footprint,
and even a smaller amount of required data. Although there exist approaches
that leverage deep neural networks for feature selection, many of them suffer from
sensitive hyperparameters. This paper proposes a feature mask module
(FM-module) for feature selection based on a novel batch-wise attenuation and
feature mask normalization. The proposed method is almost free from
hyperparameters and can be easily integrated into common neural networks as an
embedded feature selection method. Experiments on popular image, text and
speech datasets have shown that our approach is easy to use and has superior
performance in comparison with other state-of-the-art deep-learning-based
feature selection methods.
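As a rough illustration of how such an embedded feature-mask layer could be wired into a network, here is a minimal PyTorch sketch. The class name, the softmax-based mask normalization, and the top-k read-out are assumptions inferred from the abstract; the batch-wise attenuation mechanism itself is omitted.

```python
import torch
import torch.nn as nn

class FeatureMaskModule(nn.Module):
    """Hypothetical embedded feature-mask layer (sketch, not the paper's code)."""

    def __init__(self, num_features: int):
        super().__init__()
        # One learnable score per input feature, shared across the batch.
        self.scores = nn.Parameter(torch.zeros(num_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Softmax normalization turns the scores into a mask summing to one
        # (our assumed reading of "feature mask normalization").
        mask = torch.softmax(self.scores, dim=0)
        return x * mask  # broadcasts over the batch dimension

    def selected_features(self, k: int) -> torch.Tensor:
        # After training, keep the k features with the largest scores.
        return torch.topk(self.scores, k).indices
```

In use, the module would sit directly in front of the backbone, e.g. `nn.Sequential(FeatureMaskModule(d), backbone)`, and the mask is trained jointly with the task loss.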
Related papers
- Feature Selection as Deep Sequential Generative Learning [50.00973409680637]
We develop a deep variational transformer model trained with a joint objective of sequential reconstruction, variational, and performance-evaluator losses.
Our model can distill feature selection knowledge and learn a continuous embedding space to map feature selection decision sequences into embedding vectors associated with utility scores.
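Hedged sketch of how the three loss terms named above might be combined; the KL form assumes a diagonal Gaussian posterior, and the MSE utility head plus the beta/gamma weights are illustrative, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def joint_loss(recon_loss, mu, logvar, pred_utility, true_utility,
               beta=1.0, gamma=1.0):
    # Variational term: KL between N(mu, exp(logvar)) and a standard normal.
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    # Performance-evaluator term: regress the utility score of a
    # feature-selection decision sequence (assumed MSE head).
    utility = F.mse_loss(pred_utility, true_utility)
    return recon_loss + beta * kl + gamma * utility
```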
arXiv Detail & Related papers (2024-03-06T16:31:56Z)
- A Performance-Driven Benchmark for Feature Selection in Tabular Deep Learning [131.2910403490434]
Data scientists typically collect as many features as possible into their datasets, and even engineer new features from existing ones.
Existing benchmarks for tabular feature selection consider classical downstream models, toy synthetic datasets, or do not evaluate feature selectors on the basis of downstream performance.
We construct a challenging feature selection benchmark evaluated on downstream neural networks including transformers.
We also propose an input-gradient-based analogue of Lasso for neural networks that outperforms classical feature selection methods on challenging problems.
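A minimal sketch of one plausible input-gradient importance score; the paper's actual Lasso analogue (its penalty and aggregation) may differ.

```python
import torch

def input_gradient_importance(model, loss_fn, x, y):
    # Score each input feature by the mean absolute gradient of the loss
    # with respect to that feature, averaged over the batch.
    x = x.detach().clone().requires_grad_(True)
    loss = loss_fn(model(x), y)
    (grad,) = torch.autograd.grad(loss, x)
    return grad.abs().mean(dim=0)  # one importance score per feature
```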
arXiv Detail & Related papers (2023-11-10T05:26:10Z)
- Heterogenous Memory Augmented Neural Networks [84.29338268789684]
We introduce a novel heterogeneous memory augmentation approach for neural networks.
By introducing learnable memory tokens with an attention mechanism, we can effectively boost performance without large computational overhead.
We demonstrate our approach on various image and graph-based tasks under both in-distribution (ID) and out-of-distribution (OOD) conditions.
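A compact sketch of the assumed mechanism, where learnable memory tokens are exposed to attention as extra keys and values; the number of tokens, heads, and the exact read-out are illustrative.

```python
import torch
import torch.nn as nn

class MemoryAugmentedAttention(nn.Module):
    def __init__(self, dim: int, num_memory: int = 8, num_heads: int = 4):
        super().__init__()
        # Learnable memory tokens, shared across all inputs.
        self.memory = nn.Parameter(torch.randn(1, num_memory, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Every input token attends over the memory tokens plus the inputs.
        mem = self.memory.expand(x.size(0), -1, -1)
        kv = torch.cat([mem, x], dim=1)
        out, _ = self.attn(x, kv, kv)
        return out
```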
arXiv Detail & Related papers (2023-10-17T01:05:28Z)
- Self-Supervised Neuron Segmentation with Multi-Agent Reinforcement Learning [53.00683059396803]
Masked image modeling (MIM) has been widely used due to its simplicity and effectiveness in recovering original information from masked images.
We propose a decision-based MIM that uses reinforcement learning (RL) to automatically search for the optimal image masking ratio and masking strategy.
Our approach has a significant advantage over alternative self-supervised methods on the task of neuron segmentation.
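The paper uses multi-agent reinforcement learning; as a toy stand-in, the epsilon-greedy bandit below conveys the decision-based idea of searching over masking ratios (the candidate ratios and the reward definition are assumptions).

```python
import random

class MaskRatioBandit:
    """Toy epsilon-greedy search over candidate masking ratios."""

    def __init__(self, ratios=(0.4, 0.6, 0.75, 0.9), eps=0.1):
        self.ratios, self.eps = list(ratios), eps
        self.value = {r: 0.0 for r in self.ratios}  # running reward means
        self.count = {r: 0 for r in self.ratios}

    def choose(self):
        # Explore a random ratio with probability eps, else exploit the best.
        if random.random() < self.eps:
            return random.choice(self.ratios)
        return max(self.ratios, key=lambda r: self.value[r])

    def update(self, ratio, reward):
        # Incremental mean update after observing a pretraining reward.
        self.count[ratio] += 1
        self.value[ratio] += (reward - self.value[ratio]) / self.count[ratio]
```

A reward such as the downstream validation score after a short pretraining run would drive `update`.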
arXiv Detail & Related papers (2023-10-06T10:40:46Z)
- Towards Free Data Selection with General-Purpose Models [71.92151210413374]
A desirable data selection algorithm can efficiently choose the most informative samples to maximize the utility of limited annotation budgets.
Current approaches, represented by active learning methods, typically follow a cumbersome pipeline that repeatedly alternates between time-consuming model training and batch data selection.
The proposed FreeSel bypasses this heavy batch selection process, achieving a significant improvement in efficiency and running 530x faster than existing active learning methods.
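FreeSel's actual criterion builds on intermediate features of a general-purpose model; the sketch below only illustrates the single-pass, training-free flavor, using farthest-point sampling over precomputed embeddings.

```python
import torch

def select_diverse(features: torch.Tensor, budget: int) -> list:
    # Greedy farthest-point sampling: repeatedly pick the sample farthest
    # from everything chosen so far, with no model training in the loop.
    chosen = [0]
    min_dist = torch.cdist(features, features[0:1]).squeeze(1)
    for _ in range(budget - 1):
        nxt = int(torch.argmax(min_dist))
        chosen.append(nxt)
        d_new = torch.cdist(features, features[nxt:nxt + 1]).squeeze(1)
        min_dist = torch.minimum(min_dist, d_new)
    return chosen
```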
arXiv Detail & Related papers (2023-09-29T15:50:14Z)
- Supervised Feature Selection with Neuron Evolution in Sparse Neural Networks [17.12834153477201]
We propose NeuroFS, a novel resource-efficient supervised feature selection method based on sparse neural networks.
By gradually pruning the uninformative features from the input layer of a sparse neural network trained from scratch, NeuroFS efficiently derives an informative subset of features.
NeuroFS achieves the highest ranking-based score among the considered state-of-the-art supervised feature selection models.
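A magnitude-based sketch of one pruning step on the input layer; NeuroFS's actual neuron-evolution schedule, which also regrows connections during sparse training, is more involved.

```python
import torch

def prune_weakest_features(weight: torch.Tensor, keep: int) -> torch.Tensor:
    # weight: (hidden, num_features) first-layer weight matrix.
    # Score each input feature by the L1 norm of its outgoing connections,
    # then zero out everything except the 'keep' strongest features.
    scores = weight.abs().sum(dim=0)
    mask = torch.zeros_like(scores)
    mask[torch.topk(scores, keep).indices] = 1.0
    return weight * mask
```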
arXiv Detail & Related papers (2023-03-10T17:09:55Z)
- Deep Feature Selection Using a Novel Complementary Feature Mask [5.904240881373805]
We approach feature selection by exploiting the features with lower importance scores.
We propose a feature selection framework based on a novel complementary feature mask.
Our method is generic and can be easily integrated into existing deep-learning-based feature selection approaches.
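A loose reading of the idea in code: split the importance scores into a selected mask and its complement, so a complementary branch can explicitly exploit the low-importance features (the top-k split and the names are assumptions).

```python
import torch

def split_masks(scores: torch.Tensor, k: int):
    # Build a selection mask from the top-k importance scores; the
    # complementary mask covers the remaining, less important features.
    selected = torch.zeros_like(scores)
    selected[torch.topk(scores, k).indices] = 1.0
    return selected, 1.0 - selected  # (selected mask, complementary mask)
```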
arXiv Detail & Related papers (2022-09-25T18:03:30Z)
- Variable Bitrate Neural Fields [75.24672452527795]
We present a dictionary method for compressing feature grids, reducing their memory consumption by up to 100x.
We formulate the dictionary optimization as a vector-quantized auto-decoder problem which lets us learn end-to-end discrete neural representations in a space where no direct supervision is available.
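A minimal sketch of a vector-quantized feature grid with a straight-through estimator; the paper's exact indexing and training scheme differ.

```python
import torch
import torch.nn as nn

class VQFeatureGrid(nn.Module):
    def __init__(self, num_cells: int, codebook_size: int, dim: int):
        super().__init__()
        # Each grid cell stores logits over a small shared codebook instead
        # of a full feature vector, which is where the compression comes from.
        self.logits = nn.Parameter(torch.zeros(num_cells, codebook_size))
        self.codebook = nn.Parameter(torch.randn(codebook_size, dim))

    def forward(self) -> torch.Tensor:
        soft = torch.softmax(self.logits, dim=-1)
        hard = torch.zeros_like(soft).scatter_(
            -1, soft.argmax(-1, keepdim=True), 1.0)
        onehot = hard + soft - soft.detach()  # straight-through gradients
        return onehot @ self.codebook         # (num_cells, dim) features
```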
arXiv Detail & Related papers (2022-06-15T17:58:34Z)
- A concise method for feature selection via normalized frequencies [0.0]
In this paper, a concise method is proposed for universal feature selection.
The proposed method fuses the filter and wrapper approaches rather than merely combining them.
The evaluation results show that the proposed method outperformed several state-of-the-art related works in terms of accuracy, precision, recall, F-score and AUC.
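The summary gives no mechanism, so the following is only an illustrative guess at what ranking features by normalized selection frequencies could look like, using a cheap correlation filter over bootstrap resamples.

```python
import numpy as np

def selection_frequencies(X, y, k, rounds=20, seed=0):
    # Count how often each feature lands in the top-k of a simple filter
    # ranking (absolute Pearson correlation with the label) across
    # bootstrap resamples, then normalize the counts into frequencies.
    rng = np.random.default_rng(seed)
    counts = np.zeros(X.shape[1])
    for _ in range(rounds):
        idx = rng.integers(0, len(X), len(X))
        Xb, yb = X[idx], y[idx]
        corr = np.abs([np.corrcoef(Xb[:, j], yb)[0, 1]
                       for j in range(X.shape[1])])
        counts[np.argsort(corr)[-k:]] += 1
    return counts / rounds
```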
arXiv Detail & Related papers (2021-06-10T15:29:54Z)
- Quick and Robust Feature Selection: the Strength of Energy-efficient Sparse Training for Autoencoders [4.561081324313315]
Feature selection, which identifies the most relevant and informative attributes of a dataset, has been introduced as a solution to the high dimensionality of data.
Most of the existing feature selection methods are computationally inefficient.
In this paper, a novel and flexible method for unsupervised feature selection is proposed.
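A simplified read-out in the spirit of sparse-training feature selection: score each input feature by the total absolute weight of its surviving sparse connections (our simplification; in the paper the sparse topology evolves dynamically during training).

```python
import numpy as np

def input_neuron_strength(w_enc: np.ndarray, w_dec: np.ndarray) -> np.ndarray:
    # w_enc: (hidden, features) sparse encoder weights
    # w_dec: (features, hidden) sparse decoder weights
    # Features whose neurons keep strong connections after sparse training
    # are treated as informative and selected.
    return np.abs(w_enc).sum(axis=0) + np.abs(w_dec).sum(axis=1)
```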
arXiv Detail & Related papers (2020-12-01T15:05:15Z)
- Binary Stochastic Filtering: feature selection and beyond [0.0]
This work aims to extend neural networks with the ability to automatically select features by rethinking how sparsity regularization can be used.
The proposed method demonstrates superior efficiency compared with several classical methods, achieved with minimal or no computational overhead.
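A hedged sketch of per-feature stochastic binary gates with an explicit sparsity penalty; the straight-through estimator and the penalty form are assumptions, not necessarily the paper's exact filtering scheme.

```python
import torch
import torch.nn as nn

class BinaryStochasticGate(nn.Module):
    def __init__(self, num_features: int):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Sample a binary keep/drop decision per feature; the
        # straight-through trick passes gradients through the sampling.
        p = torch.sigmoid(self.logits)
        gate = torch.bernoulli(p.expand_as(x))
        gate = gate + p - p.detach()
        return x * gate

    def sparsity_penalty(self) -> torch.Tensor:
        # Penalize the expected number of kept features (assumed L1 form).
        return torch.sigmoid(self.logits).sum()
```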
arXiv Detail & Related papers (2020-07-08T06:57:10Z)