FsNet: Feature Selection Network on High-dimensional Biological Data
- URL: http://arxiv.org/abs/2001.08322v3
- Date: Fri, 18 Dec 2020 00:48:37 GMT
- Title: FsNet: Feature Selection Network on High-dimensional Biological Data
- Authors: Dinesh Singh, Héctor Climente-González, Mathis Petrovich, Eiryo Kawakami, and Makoto Yamada
- Abstract summary: We propose a deep neural network (DNN)-based, nonlinear feature selection method, called the feature selection network (FsNet), for high-dimensional data with a small number of samples.
FsNet comprises a selection layer that selects features and a reconstruction layer that stabilizes the training.
Because a large number of parameters in the selection and reconstruction layers can easily result in overfitting under a limited number of samples, we use two tiny networks to predict the large, virtual weight matrices of the selection and reconstruction layers.
- Score: 16.212816276636087
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Biological data including gene expression data are generally high-dimensional
and require efficient, generalizable, and scalable machine-learning methods to
discover their complex nonlinear patterns. The recent advances in machine
learning can be attributed to deep neural networks (DNNs), which excel at a wide
range of computer vision and natural language processing tasks.
However, standard DNNs are not appropriate for high-dimensional datasets
generated in biology because they have many parameters, which in turn require
many samples. In this paper, we propose a DNN-based, nonlinear feature
selection method, called the feature selection network (FsNet), for
high-dimensional data with a small number of samples. Specifically, FsNet comprises
a selection layer that selects features and a reconstruction layer that
stabilizes the training. Because a large number of parameters in the selection
and reconstruction layers can easily result in overfitting under a limited
number of samples, we use two tiny networks to predict the large, virtual
weight matrices of the selection and reconstruction layers. Experimental
results on several real-world, high-dimensional biological datasets demonstrate
the efficacy of the proposed method.
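To make the architecture concrete, here is a minimal PyTorch sketch of the idea, not the authors' released implementation: two tiny networks map fixed per-feature embeddings to the large, virtual weight matrices of the selection and reconstruction layers, so the number of trainable parameters stays small even when the input dimension d is huge. Names such as TinyWeightPredictor and feature_embeddings, the hidden sizes, and the softmax-over-features selection are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TinyWeightPredictor(nn.Module):
    """Small network that predicts a large 'virtual' (d x k) weight matrix
    from fixed per-feature embeddings, keeping trainable parameters few."""
    def __init__(self, embed_dim, out_dim, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(embed_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, out_dim))

    def forward(self, feature_embeddings):           # (d, embed_dim)
        return self.net(feature_embeddings)          # (d, out_dim)

class FsNetSketch(nn.Module):
    """Selection layer + classifier + reconstruction head, with both large
    weight matrices produced by tiny predictor networks."""
    def __init__(self, d, k, embed_dim=16, n_classes=2):
        super().__init__()
        # Fixed per-feature embeddings (random here purely for illustration).
        self.register_buffer("feature_embeddings", torch.randn(d, embed_dim))
        self.select_predictor = TinyWeightPredictor(embed_dim, k)
        self.recon_predictor = TinyWeightPredictor(embed_dim, k)
        self.classifier = nn.Sequential(nn.Linear(k, 32), nn.ReLU(),
                                        nn.Linear(32, n_classes))

    def forward(self, x):                             # x: (batch, d)
        # Virtual selection weights: a softmax over features gives each of
        # the k slots a soft choice of one input feature.
        W_sel = torch.softmax(self.select_predictor(self.feature_embeddings), dim=0)
        z = x @ W_sel                                 # (batch, k) selected features
        logits = self.classifier(z)
        # Virtual reconstruction weights: rebuild x from z to stabilize training.
        W_rec = self.recon_predictor(self.feature_embeddings)
        x_hat = z @ W_rec.t()                         # (batch, d)
        return logits, x_hat

# Usage sketch: combine classification and reconstruction losses.
# model = FsNetSketch(d=20000, k=50)
# logits, x_hat = model(x)
# loss = cross_entropy(logits, y) + lam * ((x_hat - x) ** 2).mean()
```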
Related papers
- Unveiling the Power of Sparse Neural Networks for Feature Selection [60.50319755984697]
Sparse Neural Networks (SNNs) have emerged as powerful tools for efficient feature selection.
We show that feature selection with SNNs trained with dynamic sparse training (DST) algorithms can achieve, on average, more than 50% memory and 55% FLOPs reduction.
arXiv Detail & Related papers (2024-08-08T16:48:33Z) - Sparse-Input Neural Network using Group Concave Regularization [10.103025766129006]
Simultaneous feature selection and non-linear function estimation are challenging in neural networks.
We propose a framework of sparse-input neural networks using group concave regularization for feature selection in both low-dimensional and high-dimensional settings.
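As a rough illustration of the general idea, and not the paper's exact formulation, a group concave penalty such as the minimax concave penalty (MCP) can be applied to the group of first-layer weights leaving each input feature; when a whole group is driven to zero, that feature is effectively deselected. The function below is a hedged sketch under that assumption.

```python
import torch

def group_mcp_penalty(first_layer_weight, lam=0.1, gamma=3.0):
    """Group minimax concave penalty (MCP) on the outgoing weights of each
    input feature. first_layer_weight has shape (hidden_dim, n_features),
    as in nn.Linear, so column j holds the weight group of feature j.
    For a group norm t: lam*t - t**2/(2*gamma) if t <= gamma*lam,
    and the constant gamma*lam**2/2 otherwise."""
    group_norms = first_layer_weight.norm(dim=0)
    inner = lam * group_norms - group_norms.pow(2) / (2 * gamma)
    flat = torch.full_like(group_norms, gamma * lam ** 2 / 2)
    return torch.where(group_norms <= gamma * lam, inner, flat).sum()

# Usage sketch during training (model.fc1 is a hypothetical input layer):
# loss = criterion(model(x), y) + group_mcp_penalty(model.fc1.weight)
```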
arXiv Detail & Related papers (2023-07-01T13:47:09Z) - Supervised Feature Selection with Neuron Evolution in Sparse Neural Networks [17.12834153477201]
We propose NeuroFS, a novel resource-efficient supervised feature selection method using sparse neural networks.
By gradually pruning the uninformative features from the input layer of a sparse neural network trained from scratch, NeuroFS derives an informative subset of features efficiently.
NeuroFS achieves the highest ranking-based score among the considered state-of-the-art supervised feature selection models.
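A hedged sketch of the gradual input-pruning loop is given below; the importance measure, the schedule, and the omission of the connection-regrowth step used by dynamic sparse training are simplifying assumptions, not NeuroFS itself.

```python
import numpy as np

def prune_input_features(importance, keep_fraction):
    """Keep the top fraction of input features, ranked by importance
    (e.g., the summed magnitude of a feature's surviving input-layer
    connections in a sparse network)."""
    n_keep = max(1, int(len(importance) * keep_fraction))
    return np.argsort(importance)[::-1][:n_keep]

def pruning_schedule(n_features, k_target, n_steps):
    """Fractions that shrink the kept set from all features down to k_target."""
    return np.linspace(1.0, k_target / n_features, n_steps + 1)[1:]

# Usage sketch: 5000 features pruned down to 50 over 10 rounds.
# for frac in pruning_schedule(5000, 50, 10):
#     kept = prune_input_features(importance, frac)
#     # ... continue sparse training on the kept features, recompute importance ...
```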
arXiv Detail & Related papers (2023-03-10T17:09:55Z) - Weight Predictor Network with Feature Selection for Small Sample Tabular Biomedical Data [7.923088041693465]
We propose the Weight Predictor Network with Feature Selection (WPFS) for learning neural networks from high-dimensional and small-sample data.
We evaluate on nine real-world biomedical datasets and demonstrate that WPFS outperforms other standard as well as more recent methods.
arXiv Detail & Related papers (2022-11-28T18:17:10Z) - Learning to Learn with Generative Models of Neural Network Checkpoints [71.06722933442956]
We construct a dataset of neural network checkpoints and train a generative model on the parameters.
We find that our approach successfully generates parameters for a wide range of loss prompts.
We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
arXiv Detail & Related papers (2022-09-26T17:59:58Z) - Dive into Layers: Neural Network Capacity Bounding using Algebraic Geometry [55.57953219617467]
We show that the learnability of a neural network is directly related to its size.
We use Betti numbers to measure the topological geometric complexity of input data and the neural network.
We perform experiments on the real-world MNIST dataset, and the results verify our analysis and conclusions.
arXiv Detail & Related papers (2021-09-03T11:45:51Z) - Rank-R FNN: A Tensor-Based Learning Model for High-Order Data Classification [69.26747803963907]
Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes Canonical/Polyadic decomposition on its parameters.
First, it handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets.
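For matrix-shaped inputs, the core computation can be sketched as follows (a simplified, assumed implementation: each hidden neuron scores an input X of size I x J as sum_r a_r^T X b_r, i.e., an inner product with the rank-R matrix sum_r a_r b_r^T, so the full weight matrix is never materialized; the paper handles higher-order tensor inputs analogously).

```python
import torch
import torch.nn as nn

class RankRNeuronLayer(nn.Module):
    """Layer of neurons whose (I x J) weight matrices have CP (rank-R) structure:
    neuron h scores an input X as sum_r a_{h,r}^T X b_{h,r}."""
    def __init__(self, I, J, rank, n_neurons):
        super().__init__()
        self.A = nn.Parameter(0.01 * torch.randn(n_neurons, rank, I))
        self.B = nn.Parameter(0.01 * torch.randn(n_neurons, rank, J))
        self.bias = nn.Parameter(torch.zeros(n_neurons))

    def forward(self, X):                                  # X: (batch, I, J)
        aX = torch.einsum('bij,hri->bhrj', X, self.A)      # contract over I
        scores = torch.einsum('bhrj,hrj->bh', aX, self.B)  # contract over R, J
        return torch.sigmoid(scores + self.bias)           # (batch, n_neurons)

# Usage sketch: 8x8 matrix inputs scored by 16 rank-3 neurons.
# layer = RankRNeuronLayer(I=8, J=8, rank=3, n_neurons=16)
# h = layer(torch.randn(4, 8, 8))
```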
arXiv Detail & Related papers (2021-04-11T16:37:32Z) - Feature Selection Based on Sparse Neural Network Layer with Normalizing Constraints [0.0]
We propose a new neural-network-based feature selection approach that introduces two constraints whose satisfaction leads to a sparse feature selection (FS) layer.
The results confirm that the proposed Feature Selection Based on Sparse Neural Network Layer with Normalizing Constraints (SNEL-FS) method is able to select the important features and yields superior performance compared to conventional FS methods.
arXiv Detail & Related papers (2020-12-11T14:14:33Z) - Exploiting Heterogeneity in Operational Neural Networks by Synaptic Plasticity [87.32169414230822]
The recently proposed Operational Neural Networks (ONNs) generalize conventional Convolutional Neural Networks (CNNs).
This study focuses on searching for the best possible operator set(s) for the hidden neurons of the network based on the Synaptic Plasticity paradigm, which constitutes the essential learning theory in biological neurons.
Experimental results over highly challenging problems demonstrate that elite ONNs, even with few neurons and layers, can achieve superior learning performance compared to GIS-based ONNs.
arXiv Detail & Related papers (2020-08-21T19:03:23Z) - OSLNet: Deep Small-Sample Classification with an Orthogonal Softmax Layer [77.90012156266324]
This paper aims to find a subspace of neural networks that can facilitate a large decision margin.
We propose the Orthogonal Softmax Layer (OSL), which keeps the weight vectors in the classification layer orthogonal during both the training and test processes.
Experimental results demonstrate that the proposed OSL has better performance than the methods used for comparison on four small-sample benchmark datasets.
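One simple way to keep the classification-layer weight vectors orthogonal throughout training is to give each class a disjoint block of input features via a fixed mask; the sketch below illustrates that idea as an assumption about the construction, not a reproduction of the paper's exact layer.

```python
import torch
import torch.nn as nn

class OrthogonalSoftmaxLayer(nn.Module):
    """Classification layer whose class weight vectors stay orthogonal by
    construction: a fixed block-diagonal mask gives each class a disjoint
    block of input features, so the masked weight vectors always have
    non-overlapping supports."""
    def __init__(self, in_features, n_classes):
        super().__init__()
        assert in_features % n_classes == 0, "illustrative constraint: equal-size blocks"
        block = in_features // n_classes
        mask = torch.zeros(n_classes, in_features)
        for c in range(n_classes):
            mask[c, c * block:(c + 1) * block] = 1.0
        self.register_buffer("mask", mask)
        self.weight = nn.Parameter(0.01 * torch.randn(n_classes, in_features))
        self.bias = nn.Parameter(torch.zeros(n_classes))

    def forward(self, x):                          # x: (batch, in_features)
        return x @ (self.weight * self.mask).t() + self.bias  # logits for softmax
```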
arXiv Detail & Related papers (2020-04-20T02:41:01Z) - A study of local optima for learning feature interactions using neural networks [4.94950858749529]
We study the data-starved regime where an NN is trained on a relatively small amount of training data.
We experimentally observed that the cross-entropy loss function on XOR-like data has many non-equivalent local optima.
We show that the performance of a NN on real datasets can be improved using pruning.
arXiv Detail & Related papers (2020-02-11T11:38:45Z)