Supervised Feature Selection with Neuron Evolution in Sparse Neural Networks
- URL: http://arxiv.org/abs/2303.07200v2
- Date: Tue, 14 Mar 2023 08:17:19 GMT
- Title: Supervised Feature Selection with Neuron Evolution in Sparse Neural Networks
- Authors: Zahra Atashgahi, Xuhao Zhang, Neil Kichler, Shiwei Liu, Lu Yin, Mykola Pechenizkiy, Raymond Veldhuis, Decebal Constantin Mocanu
- Abstract summary: We propose a novel resource-efficient supervised feature selection method using sparse neural networks.
By gradually pruning the uninformative features from the input layer of a sparse neural network trained from scratch, NeuroFS derives an informative subset of features efficiently.
NeuroFS achieves the highest ranking-based score among the considered state-of-the-art supervised feature selection models.
- Score: 17.12834153477201
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Feature selection, which selects an informative subset of variables
from data, not only enhances model interpretability and performance but also
reduces resource demands. Recently, there has been growing interest in
feature selection using neural networks. However, existing methods usually
suffer from high computational costs when applied to high-dimensional datasets.
In this paper, inspired by evolution processes, we propose a novel
resource-efficient supervised feature selection method using sparse neural
networks, named "NeuroFS". By gradually pruning the uninformative
features from the input layer of a sparse neural network trained from scratch,
NeuroFS derives an informative subset of features efficiently. By performing
several experiments on 11 low- and high-dimensional real-world benchmarks of
different types, we demonstrate that NeuroFS achieves the highest ranking-based
score among the considered state-of-the-art supervised feature selection
models. The code is available on GitHub.
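The official code is the GitHub release mentioned above; as a rough illustration of the core loop only, the hedged NumPy sketch below trains a small network and periodically drops the input features with the smallest outgoing weight magnitude, shrinking the input layer toward a target subset size. The toy dataset, scoring rule, and pruning schedule are assumptions made for this sketch, not NeuroFS's actual algorithm (which also evolves the sparse hidden connectivity).

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, keep = 512, 100, 10               # samples, features, target subset size
X = rng.normal(size=(n, d))
y = (X[:, :keep].sum(axis=1) > 0).astype(float)[:, None]  # only first `keep` features matter

h, lr = 32, 0.5
W1 = rng.normal(scale=0.1, size=(d, h))
W2 = rng.normal(scale=0.1, size=(h, 1))
active = np.ones(d, dtype=bool)         # input features still in the model

for epoch in range(60):
    Xa = X * active                     # pruned features contribute nothing
    Z = np.tanh(Xa @ W1)
    p = 1.0 / (1.0 + np.exp(-(Z @ W2)))
    g = (p - y) / n                     # gradient of binary cross-entropy
    dZ = g @ W2.T * (1 - Z**2)
    W2 -= lr * (Z.T @ g)
    W1 -= lr * (Xa.T @ dZ)
    if epoch % 5 == 4 and active.sum() > keep:
        score = np.abs(W1).sum(axis=1)  # outgoing weight magnitude per feature
        n_drop = max(1, (active.sum() - keep) // 2)
        weakest = np.argsort(np.where(active, score, np.inf))[:n_drop]
        active[weakest] = False         # gradually prune uninformative inputs

print("selected features:", np.flatnonzero(active))  # ideally close to 0..9
```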
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
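As a toy illustration of the "parameters as a computational graph" encoding (a simplification; the paper's encoding also covers biases and richer symmetry-aware node and edge features), each neuron can be mapped to a node and each weight to a directed edge:

```python
import numpy as np

def mlp_to_graph(weights):
    """Encode an MLP as (edges, edge_feats): one node per neuron,
    one directed edge per weight, with the weight as edge feature."""
    edges, feats, offset = [], [], 0
    for W in weights:                    # W has shape (fan_in, fan_out)
        fi, fo = W.shape
        for i in range(fi):
            for j in range(fo):
                edges.append((offset + i, offset + fi + j))
                feats.append(W[i, j])
        offset += fi
    return np.array(edges), np.array(feats)

rng = np.random.default_rng(0)
edges, feats = mlp_to_graph([rng.normal(size=(4, 3)), rng.normal(size=(3, 2))])
print(edges.shape, feats.shape)          # (18, 2) edges, one scalar feature each
```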
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- A Hierarchical Fused Quantum Fuzzy Neural Network for Image Classification [8.7057403071943]
We propose a novel hierarchical fused quantum fuzzy neural network (HQFNN).
HQFNN uses quantum neural networks to learn the fuzzy membership functions in the fuzzy neural network.
Results show that the proposed model can outperform several existing methods.
arXiv Detail & Related papers (2024-03-14T12:09:36Z)
- A Performance-Driven Benchmark for Feature Selection in Tabular Deep Learning [131.2910403490434]
Data scientists typically collect as many features as possible into their datasets, and even engineer new features from existing ones.
Existing benchmarks for tabular feature selection consider classical downstream models, toy synthetic datasets, or do not evaluate feature selectors on the basis of downstream performance.
We construct a challenging feature selection benchmark evaluated on downstream neural networks including transformers.
We also propose an input-gradient-based analogue of Lasso for neural networks that outperforms classical feature selection methods on challenging problems.
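As a hedged sketch of the input-gradient idea (my reading of it, not the benchmark's exact regularizer): after fitting a small network, features can be scored by how strongly the network's output responds to them, i.e. by the magnitude of the input gradient grouped per feature. The toy data and architecture below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 256, 20
X = rng.normal(size=(n, d))
y = (2 * X[:, 0] - X[:, 1] > 0).astype(float)[:, None]  # only features 0 and 1 matter

W1 = rng.normal(scale=0.3, size=(d, 16))
W2 = rng.normal(scale=0.3, size=(16, 1))
for _ in range(300):                     # quick full-batch fit of a tiny MLP
    Z = np.tanh(X @ W1)
    p = 1.0 / (1.0 + np.exp(-(Z @ W2)))
    g = (p - y) / n
    dZ = g @ W2.T * (1 - Z**2)
    W2 -= 1.0 * (Z.T @ g)
    W1 -= 1.0 * (X.T @ dZ)

Z = np.tanh(X @ W1)
G = ((1 - Z**2) * W2[:, 0]) @ W1.T       # d(logit)/dX, shape (n, d)
score = np.abs(G).mean(axis=0)           # per-feature score, Lasso-style grouping
print("top-2 features:", np.argsort(score)[-2:])  # expect features 0 and 1
```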
arXiv Detail & Related papers (2023-11-10T05:26:10Z)
- Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order.
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks.
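The symmetry in question is easy to verify concretely: reordering the hidden neurons of an MLP (permuting the columns of the first weight matrix and the hidden bias, together with the rows of the second weight matrix) changes the weights but not the function. A minimal check:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(5, 8)), rng.normal(size=8)   # in_dim=5, hidden=8
W2, b2 = rng.normal(size=(8, 3)), rng.normal(size=3)   # out_dim=3

def f(x, W1, b1, W2, b2):
    return np.maximum(x @ W1 + b1, 0) @ W2 + b2        # one-hidden-layer ReLU MLP

x = rng.normal(size=(4, 5))
perm = rng.permutation(8)                              # reorder hidden neurons
out_permuted = f(x, W1[:, perm], b1[perm], W2[perm], b2)
print(np.allclose(f(x, W1, b1, W2, b2), out_permuted)) # True: weights differ, function doesn't
```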
arXiv Detail & Related papers (2023-02-27T18:52:38Z)
- Learning to Learn with Generative Models of Neural Network Checkpoints [71.06722933442956]
We construct a dataset of neural network checkpoints and train a generative model on the parameters.
We find that our approach successfully generates parameters for a wide range of loss prompts.
We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
arXiv Detail & Related papers (2022-09-26T17:59:58Z)
- Simple and complex spiking neurons: perspectives and analysis in a simple STDP scenario [0.7829352305480283]
Spiking neural networks (SNNs) are inspired by biology and neuroscience to create fast and efficient learning systems.
This work considers various neuron models in the literature and then selects computational neuron models that are single-variable, efficient, and display different types of complexities.
We make a comparative study of three simple I&F neuron models, namely the LIF, the Quadratic I&F (QIF) and the Exponential I&F (EIF), to understand whether the use of more complex models increases the performance of the system.
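For reference, the three models are single-variable membrane equations that differ only in their spike-initiation term. A minimal Euler simulation looks like the following; the constants are illustrative textbook values, not the paper's settings.

```python
import numpy as np

def simulate(dvdt, v_rest=-65.0, v_thresh=-50.0, dt=0.1, steps=2000):
    """Euler-integrate one neuron for steps*dt ms; reset to rest on threshold."""
    v, spikes = v_rest, 0
    for _ in range(steps):
        v += dt * dvdt(v)
        if v >= v_thresh:
            v, spikes = v_rest, spikes + 1
    return spikes

tau, R, I = 10.0, 10.0, 2.0              # membrane time constant, resistance, input current
lif = lambda v: (-(v + 65.0) + R * I) / tau                                   # leaky I&F
qif = lambda v: (0.1 * (v + 65.0) * (v + 55.0) + R * I) / tau                 # quadratic I&F
eif = lambda v: (-(v + 65.0) + 2.0 * np.exp((v + 55.0) / 2.0) + R * I) / tau  # exponential I&F

for name, model in [("LIF", lif), ("QIF", qif), ("EIF", eif)]:
    print(name, simulate(model), "spikes in 200 ms")
```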
arXiv Detail & Related papers (2022-06-28T10:01:51Z)
- Neural Capacitance: A New Perspective of Neural Network Selection via Edge Dynamics [85.31710759801705]
Current practice incurs high computational costs, because performance prediction requires model training.
We propose a novel framework for neural network selection by analyzing the governing dynamics over synaptic connections (edges) during training.
Our framework is built on the fact that back-propagation during neural network training is equivalent to the dynamical evolution of synaptic connections.
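The equivalence being invoked is the standard continuous-time view of training: gradient descent is the Euler discretization of the gradient-flow ODE dw/dt = -∇L(w), so each weight (edge) is a state variable of a dynamical system. A toy check on an assumed quadratic loss:

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])      # L(w) = 0.5 * w^T A w
grad = lambda w: A @ w

w_gd = np.array([1.0, -1.0])                 # gradient descent with step eta
w_ode = w_gd.copy()                          # Euler step of dw/dt = -grad(w), same step
eta = 0.05
for _ in range(100):
    w_gd = w_gd - eta * grad(w_gd)
    w_ode = w_ode + eta * (-grad(w_ode))
print(np.allclose(w_gd, w_ode))              # True: the two update rules coincide
```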
arXiv Detail & Related papers (2022-01-11T20:53:15Z)
- Contextual HyperNetworks for Novel Feature Adaptation [43.49619456740745]
A Contextual HyperNetwork (CHN) generates parameters for extending the base model to a new feature.
At prediction time, the CHN requires only a single forward pass through a neural network, yielding a significant speed-up.
We show that this system obtains improved few-shot learning performance for novel features over existing imputation and meta-learning baselines.
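A minimal sketch of the hypernetwork idea (shapes and names are assumptions, not the paper's architecture): a small network maps a context summary for a new feature to the weight vector that plugs that feature into the base model, with no gradient steps at prediction time.

```python
import numpy as np

rng = np.random.default_rng(0)
h = 16                                   # base model's hidden width
ctx_dim = 8                              # summary of the new feature's few observations

# hypernetwork: context -> weights connecting the new input feature to hidden units
H1 = rng.normal(scale=0.1, size=(ctx_dim, 32))
H2 = rng.normal(scale=0.1, size=(32, h))

def chn(context):
    """Single forward pass producing the new feature's input weights."""
    return np.tanh(context @ H1) @ H2    # shape (h,)

context = rng.normal(size=ctx_dim)       # e.g. pooled embedding of observed values
w_new = chn(context)
print(w_new.shape)                       # (16,): ready to append to the input layer
```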
arXiv Detail & Related papers (2021-04-12T23:19:49Z)
- Feature Selection Based on Sparse Neural Network Layer with Normalizing Constraints [0.0]
We propose a new neural-network-based feature selection approach that introduces two constraints; satisfying them leads to a sparse FS layer.
The results confirm that proposed Feature Selection Based on Sparse Neural Network Layer with Normalizing Constraints (SNEL-FS) is able to select the important features and yields superior performance compared to other conventional FS methods.
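One common way to realize such a layer (a sketch of the general construction; SNEL-FS's two specific constraints are not reproduced here and these hyperparameters are assumptions) is an elementwise gate trained with a sparsity penalty and renormalized after every step:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 400, 30
X = rng.normal(size=(n, d))
y = X[:, 0] - X[:, 3]                    # two informative features

g = np.ones(d) / d                       # FS layer: nonnegative gates summing to 1
w = rng.normal(scale=0.1, size=d)        # weights of a gated linear model
lr, lam = 0.1, 0.01
for _ in range(500):
    r = X @ (g * w) - y                  # prediction is (g * x) . w
    shared = X.T @ r / n
    w -= lr * g * shared
    g -= lr * (w * shared + lam * np.sign(g))  # L1 pushes gates toward zero
    g = np.maximum(g, 0.0)               # constraint 1: nonnegative gates
    g /= g.sum() + 1e-12                 # constraint 2: normalized gates
print("surviving gates:", np.flatnonzero(g > 0.05))  # likely features 0 and 3
```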
arXiv Detail & Related papers (2020-12-11T14:14:33Z)
- Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
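The key property behind the XOR result is non-monotonicity: a single unit whose activation peaks in the middle of its input range can separate XOR, which no monotonic single neuron can. The Gaussian bump below is an illustrative stand-in, not the paper's exact ADA formula.

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])

z = X @ np.array([1.0, 1.0])             # one linear unit: z = x1 + x2
out = np.exp(-(z - 1.0) ** 2)            # non-monotonic activation, peak at z = 1
pred = (out > 0.5).astype(int)
print(np.array_equal(pred, y))           # True: one neuron, 100% accuracy on XOR
```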
arXiv Detail & Related papers (2020-02-02T21:09:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.