Weight Predictor Network with Feature Selection for Small Sample Tabular
Biomedical Data
- URL: http://arxiv.org/abs/2211.15616v1
- Date: Mon, 28 Nov 2022 18:17:10 GMT
- Title: Weight Predictor Network with Feature Selection for Small Sample Tabular
Biomedical Data
- Authors: Andrei Margeloiu, Nikola Simidjievski, Pietro Lio, Mateja Jamnik
- Abstract summary: We propose the Weight Predictor Network with Feature Selection (WPFS) for learning neural networks from high-dimensional, small-sample data.
We evaluate WPFS on nine real-world biomedical datasets and demonstrate that it outperforms both standard and more recent methods.
- Score: 7.923088041693465
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tabular biomedical data is often high-dimensional but with a very small
number of samples. Although recent work showed that well-regularised simple
neural networks could outperform more sophisticated architectures on tabular
data, they are still prone to overfitting on tiny datasets with many
potentially irrelevant features. To combat these issues, we propose Weight
Predictor Network with Feature Selection (WPFS) for learning neural networks
from high-dimensional and small sample data by reducing the number of learnable
parameters and simultaneously performing feature selection. In addition to the
classification network, WPFS uses two small auxiliary networks that together
output the weights of the first layer of the classification model. We evaluate
WPFS on nine real-world biomedical datasets and demonstrate that it outperforms
other standard as well as more recent methods typically applied to tabular
data. Furthermore, we investigate the proposed feature selection mechanism and
show that it improves performance while providing useful insights into the
learning task.
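For intuition, here is a minimal PyTorch-style sketch of the weight-predictor idea: the classifier's first-layer weights are not learned directly but are produced, one row per feature, by two tiny auxiliary networks (one predicting a weight row, one predicting a soft feature-selection gate). Module names, layer sizes, and the fixed per-feature embeddings are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class WPFSSketch(nn.Module):
    """Illustrative sketch: two tiny auxiliary networks jointly produce the
    classifier's first-layer weights, so those weights are never learned
    directly (hypothetical sizes; not the authors' code)."""
    def __init__(self, n_features, feat_embed_dim=50, hidden=100, n_classes=2):
        super().__init__()
        # Fixed per-feature embeddings (e.g. a summary of each feature's
        # column); random here purely for illustration.
        self.feature_embed = nn.Parameter(
            torch.randn(n_features, feat_embed_dim), requires_grad=False)
        # Auxiliary net 1: predicts one first-layer weight row per feature.
        self.weight_predictor = nn.Sequential(
            nn.Linear(feat_embed_dim, 32), nn.ReLU(), nn.Linear(32, hidden))
        # Auxiliary net 2: predicts a [0, 1] gate per feature (feature selection).
        self.gate = nn.Sequential(
            nn.Linear(feat_embed_dim, 32), nn.ReLU(),
            nn.Linear(32, 1), nn.Sigmoid())
        # The rest of the classifier is an ordinary small MLP head.
        self.head = nn.Sequential(nn.ReLU(), nn.Linear(hidden, n_classes))

    def forward(self, x):                                # x: (batch, n_features)
        W = self.weight_predictor(self.feature_embed)    # (n_features, hidden)
        s = self.gate(self.feature_embed)                # (n_features, 1)
        first_layer = x @ (W * s)                        # gated virtual weights
        return self.head(first_layer)
```

Because gradients flow only into the two tiny auxiliary networks and the head, the number of learnable parameters stays small even when the feature count is in the tens of thousands.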
Related papers
- Unveiling the Power of Sparse Neural Networks for Feature Selection [60.50319755984697]
Sparse Neural Networks (SNNs) have emerged as powerful tools for efficient feature selection.
We show that feature selection with SNNs trained with dynamic sparse training (DST) algorithms can achieve, on average, more than 50% memory and 55% FLOPs reduction.
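As a rough illustration of the DST idea, the sketch below trains a network whose input layer is sparse, periodically pruning the weakest connections and regrowing random ones; the surviving connection mass per input then scores each feature. This is a generic prune-and-regrow loop with assumed hyperparameters, not the paper's algorithm.

```python
import torch

def dst_feature_scores(X, y, hidden=32, steps=500, sparsity=0.9,
                       update_every=50, drop_frac=0.3):
    """Generic prune-and-regrow (DST-style) loop; illustrative only."""
    n, d = X.shape
    W1 = torch.randn(d, hidden, requires_grad=True)
    W2 = torch.randn(hidden, 1, requires_grad=True)
    mask = (torch.rand(d, hidden) > sparsity).float()  # sparse input layer
    opt = torch.optim.Adam([W1, W2], lr=1e-2)
    for step in range(steps):
        opt.zero_grad()
        logits = torch.relu(X @ (W1 * mask)) @ W2
        loss = torch.nn.functional.binary_cross_entropy_with_logits(
            logits.squeeze(1), y.float())
        loss.backward()
        opt.step()
        if (step + 1) % update_every == 0:
            with torch.no_grad():
                flat = mask.view(-1)
                k = int(drop_frac * flat.sum())
                # Prune the k weakest active connections...
                w = (W1 * mask).abs().view(-1)
                w[flat == 0] = float("inf")
                flat[torch.topk(w, k, largest=False).indices] = 0.0
                # ...and regrow k connections at random inactive positions.
                inactive = (flat == 0).nonzero().squeeze(1)
                flat[inactive[torch.randperm(len(inactive))[:k]]] = 1.0
    # Score each feature by its surviving first-layer connection strength.
    return (W1 * mask).abs().sum(dim=1).detach()
```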
arXiv Detail & Related papers (2024-08-08T16:48:33Z)
- Just How Flexible are Neural Networks in Practice? [89.80474583606242]
It is widely believed that a neural network can fit a training set containing at least as many samples as it has parameters.
In practice, however, we only find solutions reachable through the training procedure, including gradient-based optimization and regularizers, which limits flexibility.
arXiv Detail & Related papers (2024-06-17T12:24:45Z)
- Data Augmentations in Deep Weight Spaces [89.45272760013928]
We introduce a novel augmentation scheme based on the Mixup method.
We evaluate the performance of these techniques on existing benchmarks as well as new benchmarks we generate.
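For reference, Mixup forms convex combinations of input pairs and their labels; applied in weight space, the inputs being mixed are flattened parameter vectors. The sketch below shows the generic Mixup step under assumed shapes, not necessarily the paper's exact scheme.

```python
import torch

def mixup_weights(w_a, w_b, y_a, y_b, alpha=0.2):
    """One Mixup step on flattened weight vectors (illustrative).

    w_a, w_b: (batch, n_params) flattened network weights
    y_a, y_b: (batch, n_classes) one-hot or soft labels
    """
    lam = torch.distributions.Beta(alpha, alpha).sample()
    w_mix = lam * w_a + (1 - lam) * w_b  # mix the "inputs" (here: weights)
    y_mix = lam * y_a + (1 - lam) * y_b  # mix the labels identically
    return w_mix, y_mix
```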
arXiv Detail & Related papers (2023-11-15T10:43:13Z)
- Supervised Feature Selection with Neuron Evolution in Sparse Neural Networks [17.12834153477201]
We propose NeuroFS, a novel resource-efficient supervised feature selection method using sparse neural networks.
By gradually pruning the uninformative features from the input layer of a sparse neural network trained from scratch, NeuroFS derives an informative subset of features efficiently.
NeuroFS achieves the highest ranking-based score among the considered state-of-the-art supervised feature selection models.
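As a hedged sketch of the gradual-pruning idea: rank input features by the surviving weight mass of their first-layer connections and repeatedly drop the weakest until the target subset size remains. The schedule and scoring rule below are assumptions, not NeuroFS's exact procedure; in the full method, pruning happens while the sparse network is being trained from scratch.

```python
import torch

def prune_input_features(W1, mask, n_keep, frac_per_round=0.2):
    """Gradually drop the weakest input features (illustrative schedule).

    W1, mask: (n_features, hidden) first-layer weights and 0/1 sparsity mask
              (plain tensors, already detached from any graph)
    n_keep:   number of features to retain
    """
    alive = torch.ones(W1.shape[0], dtype=torch.bool)
    while alive.sum() > n_keep:
        # A feature's strength = surviving weight mass of its connections.
        strength = (W1 * mask).abs().sum(dim=1)
        strength[~alive] = float("inf")          # never re-rank dead features
        n_drop = min(int(frac_per_round * alive.sum()) + 1,
                     int(alive.sum()) - n_keep)
        weakest = torch.topk(strength, n_drop, largest=False).indices
        alive[weakest] = False
        mask[weakest] = 0.0                      # connections removed for good
    return alive.nonzero().squeeze(1)            # indices of selected features
```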
arXiv Detail & Related papers (2023-03-10T17:09:55Z)
- Transfer Learning with Deep Tabular Models [66.67017691983182]
We show that upstream data gives tabular neural networks a decisive advantage over GBDT models.
We propose a realistic medical diagnosis benchmark for tabular transfer learning.
We propose a pseudo-feature method for cases where the upstream and downstream feature sets differ.
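One plausible reading of the pseudo-feature idea, sketched below as an assumption rather than the paper's recipe: when the downstream table lacks a column the upstream model expects, fit a small regressor on the shared columns upstream, predict the missing column downstream, and append the prediction as a pseudo-feature.

```python
import numpy as np
from sklearn.linear_model import Ridge

def add_pseudo_feature(upstream_X, missing_col, downstream_X):
    """Fill in a column absent downstream as a pseudo-feature (illustrative).

    upstream_X:   (n_up, d) upstream table that contains the missing column
    downstream_X: (n_down, d - 1) downstream table without it
    """
    target = upstream_X[:, missing_col]
    shared = np.delete(upstream_X, missing_col, axis=1)
    model = Ridge().fit(shared, target)     # learn the column from shared ones
    pseudo = model.predict(downstream_X)    # predict it for downstream rows
    return np.column_stack([downstream_X, pseudo])
```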
arXiv Detail & Related papers (2022-06-30T14:24:32Z)
- Recurrent neural networks that generalize from examples and optimize by dreaming [0.0]
We introduce a generalized Hopfield network where pairwise couplings between neurons are built according to Hebb's prescription for on-line learning.
We expose the network only to a dataset consisting of noisy examples of each pattern.
Remarkably, the sleeping mechanisms always significantly reduce the dataset size required to correctly generalize.
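For concreteness, Hebb's prescription builds couplings $J_{ij} = \frac{1}{N}\sum_\mu \xi_i^\mu \xi_j^\mu$ from stored patterns; with only noisy examples available, each pattern can be estimated by averaging its examples. A minimal sketch under those standard assumptions:

```python
import numpy as np

def hebbian_couplings(examples):
    """Hebb's rule from noisy examples (standard construction; illustrative).

    examples: (n_patterns, n_examples, n_neurons) array of +/-1 spins
    """
    # Estimate each pattern as the sign of the mean over its noisy examples.
    estimates = np.sign(examples.mean(axis=1))   # (n_patterns, n_neurons)
    n_neurons = examples.shape[2]
    J = estimates.T @ estimates / n_neurons      # pairwise Hebbian couplings
    np.fill_diagonal(J, 0.0)                     # no self-coupling
    return J
```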
arXiv Detail & Related papers (2022-04-17T08:40:54Z)
- CvS: Classification via Segmentation For Small Datasets [52.821178654631254]
This paper presents CvS, a cost-effective classifier for small datasets that derives the classification labels from predicting the segmentation maps.
We evaluate the effectiveness of our framework on diverse problems, showing that CvS achieves much higher classification accuracy than previous methods when given only a handful of examples.
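A hedged sketch of deriving a class label from a predicted segmentation map: pool the per-pixel predictions over the image, ignoring background. The majority-vote pooling rule is an illustrative assumption, not necessarily CvS's exact rule.

```python
import torch

def label_from_segmentation(seg_logits, background_idx=0):
    """Turn per-pixel class logits into one image label (illustrative rule).

    seg_logits: (n_classes, H, W) segmentation output for a single image
    """
    pixel_classes = seg_logits.argmax(dim=0)     # (H, W) hard segmentation
    counts = torch.bincount(pixel_classes.flatten(),
                            minlength=seg_logits.shape[0])
    counts[background_idx] = 0                   # label must be a foreground class
    return int(counts.argmax())                  # most frequent class wins
```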
arXiv Detail & Related papers (2021-10-29T18:41:15Z)
- Feature Selection Based on Sparse Neural Network Layer with Normalizing Constraints [0.0]
We propose a new neural-network-based feature selection approach that introduces two constraints; satisfying them yields a sparse FS layer.
The results confirm that the proposed Feature Selection Based on Sparse Neural Network Layer with Normalizing Constraints (SNEL-FS) selects the important features and yields superior performance compared to conventional FS methods.
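As a generic illustration of a sparse FS layer (the paper's two specific constraints are not reproduced here), a common construction is an elementwise gating layer whose gates are driven toward zero by an L1 penalty:

```python
import torch
import torch.nn as nn

class SparseFSLayer(nn.Module):
    """Elementwise feature gate; an L1 penalty encourages most gates to
    shrink to zero (generic construction, not the paper's exact layer)."""
    def __init__(self, n_features):
        super().__init__()
        self.gates = nn.Parameter(torch.ones(n_features))

    def forward(self, x):               # x: (batch, n_features)
        return x * self.gates           # near-zero gates suppress features

    def l1_penalty(self):
        return self.gates.abs().sum()   # add lam * l1_penalty() to the loss
```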
arXiv Detail & Related papers (2020-12-11T14:14:33Z)
- Convolution Neural Networks for Semantic Segmentation: Application to Small Datasets of Biomedical Images [0.0]
This thesis studies how segmentation results produced by convolutional neural networks (CNNs) differ from one another when applied to small biomedical datasets.
The two working datasets come from the biomedical research domain.
arXiv Detail & Related papers (2020-11-01T19:09:12Z)
- Select-ProtoNet: Learning to Select for Few-Shot Disease Subtype Prediction [55.94378672172967]
We focus on the few-shot disease subtype prediction problem: identifying subgroups of similar patients.
We introduce meta-learning techniques to develop a new model that can extract common experience or knowledge from interrelated clinical tasks.
Our model builds on a carefully designed meta-learner, the Prototypical Network, a simple yet effective meta-learning method for few-shot image classification.
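For reference, a Prototypical Network classifies a query by its distance to class prototypes, each prototype being the mean embedding of that class's support examples. A minimal sketch of the standard formulation (the embedding function is assumed given):

```python
import torch

def protonet_predict(embed, support_x, support_y, query_x, n_classes):
    """Prototypical Network classification step (standard formulation).

    embed: any function mapping a batch of inputs to (batch, dim) embeddings;
           every class is assumed to have at least one support example.
    """
    z_support = embed(support_x)                  # (n_support, dim)
    z_query = embed(query_x)                      # (n_query, dim)
    # Prototype = mean embedding of each class's support set.
    prototypes = torch.stack([
        z_support[support_y == c].mean(dim=0) for c in range(n_classes)])
    # Assign each query to the nearest prototype (squared Euclidean).
    dists = torch.cdist(z_query, prototypes) ** 2  # (n_query, n_classes)
    return dists.argmin(dim=1)
```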
arXiv Detail & Related papers (2020-09-02T02:50:30Z)
- FsNet: Feature Selection Network on High-dimensional Biological Data [16.212816276636087]
We propose a deep neural network (DNN)-based, nonlinear feature selection method, called the feature selection network (FsNet), for high-dimensional data with a small number of samples.
FsNet comprises a selection layer that selects features and a reconstruction layer that stabilizes the training.
Because a large number of parameters in the selection and reconstruction layers can easily result in overfitting under a limited number of samples, we use two tiny networks to predict the large, virtual weight matrices of the selection and reconstruction layers.
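The weight-prediction trick parallels WPFS above: instead of learning a large n_features x k matrix directly, a tiny network maps fixed per-feature descriptors to the matrix's rows. A minimal sketch in which the descriptor choice and all sizes are assumptions:

```python
import torch
import torch.nn as nn

class VirtualWeightMatrix(nn.Module):
    """Predict a large (n_features x k) weight matrix from per-feature
    descriptors with a tiny net instead of learning it directly
    (illustrative; descriptors and sizes are assumptions)."""
    def __init__(self, feature_descriptors, k, tiny=16):
        super().__init__()
        self.register_buffer("desc", feature_descriptors)  # (n_features, d)
        self.predictor = nn.Sequential(
            nn.Linear(feature_descriptors.shape[1], tiny),
            nn.Tanh(), nn.Linear(tiny, k))

    def forward(self):
        # Learnable parameters scale with d and k, not with n_features * k.
        return self.predictor(self.desc)                   # (n_features, k)
```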
arXiv Detail & Related papers (2020-01-23T00:49:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.