Deep Feature Screening: Feature Selection for Ultra High-Dimensional Data via Deep Neural Networks
- URL: http://arxiv.org/abs/2204.01682v3
- Date: Sat, 16 Dec 2023 11:17:51 GMT
- Title: Deep Feature Screening: Feature Selection for Ultra High-Dimensional Data via Deep Neural Networks
- Authors: Kexuan Li, Fangfang Wang, Lingli Yang, Ruiqi Liu
- Abstract summary: We propose a novel two-step nonparametric approach called Deep Feature Screening (DeepFS).
DeepFS can identify significant features with high precision for ultra high-dimensional, low-sample-size data.
The superiority of DeepFS is demonstrated via extensive simulation studies and real data analyses.
- Score: 4.212520096619388
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional statistical feature selection methods often struggle when
applied to high-dimensional, low-sample-size data, encountering problems such as
overfitting, the curse of dimensionality, computational infeasibility, and
strong model assumptions. In this paper, we propose a novel two-step
nonparametric approach called Deep Feature Screening (DeepFS) that can overcome
these problems and identify significant features with high precision for ultra
high-dimensional, low-sample-size data. This approach first extracts a
low-dimensional representation of the input data and then applies feature
screening based on the multivariate rank distance correlation recently developed
by Deb and Sen (2021). By combining the strengths of both deep neural networks
and feature screening, DeepFS has the following appealing features in addition
to its ability to handle ultra high-dimensional data with a small number of
samples: (1) it is model free and distribution free; (2) it can be used for both
supervised and unsupervised feature selection; and (3) it is capable of
recovering the original input data. The superiority of DeepFS is demonstrated
via extensive simulation studies and real data analyses.
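To make the two-step procedure concrete, here is a minimal sketch in Python. It is not the authors' implementation: a small PyTorch autoencoder stands in for the paper's dimension-reduction network, and the classical distance correlation of Székely et al. is substituted for the multivariate rank distance correlation of Deb and Sen (2021); the architecture and hyperparameters are illustrative.

```python
import numpy as np
import torch
import torch.nn as nn

def distance_correlation(x, y):
    """Biased sample distance correlation between x (n,) or (n, p) and y (n, q)."""
    x = np.asarray(x, dtype=float).reshape(len(x), -1)
    y = np.asarray(y, dtype=float).reshape(len(y), -1)
    def centered(z):
        d = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
        return d - d.mean(0, keepdims=True) - d.mean(1, keepdims=True) + d.mean()
    A, B = centered(x), centered(y)
    dcov2 = (A * B).mean()                        # squared distance covariance
    dvar = np.sqrt((A * A).mean() * (B * B).mean())
    return np.sqrt(dcov2 / dvar) if dvar > 0 else 0.0

def deepfs_screen(X, latent_dim=5, epochs=300, top_k=20):
    """Step 1: fit an autoencoder. Step 2: screen each input feature by its
    distance correlation with the learned latent representation."""
    n, p = X.shape
    Xt = torch.as_tensor(X, dtype=torch.float32)
    encoder = nn.Sequential(nn.Linear(p, 64), nn.ReLU(), nn.Linear(64, latent_dim))
    decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, p))
    opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)
    for _ in range(epochs):                       # Step 1: reconstruction training
        opt.zero_grad()
        loss = nn.functional.mse_loss(decoder(encoder(Xt)), Xt)
        loss.backward()
        opt.step()
    Z = encoder(Xt).detach().numpy()              # low-dimensional representation
    scores = [distance_correlation(X[:, j], Z) for j in range(p)]
    return np.argsort(scores)[::-1][:top_k]       # indices of top-screened features
```

For the supervised variant described in the abstract, the representation would additionally be trained against the response; the screening step is unchanged.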
Related papers
- Open-Set Deepfake Detection: A Parameter-Efficient Adaptation Method with Forgery Style Mixture [58.60915132222421]
We introduce an approach that is both general and parameter-efficient for face forgery detection.
We design a forgery-style mixture formulation that augments the diversity of forgery source domains.
We show that the designed model achieves state-of-the-art generalizability with significantly reduced trainable parameters.
arXiv Detail & Related papers (2024-08-23T01:53:36Z)
- Diffusion posterior sampling for simulation-based inference in tall data settings [53.17563688225137]
Simulation-based inference (SBI) is capable of approximating the posterior distribution that relates input parameters to a given observation.
In this work, we consider a tall data extension in which multiple observations are available to better infer the parameters of the model.
We compare our method to recently proposed competing approaches on various numerical experiments and demonstrate its superiority in terms of numerical stability and computational cost.
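A standard identity underlies tall-data posterior sampling: for conditionally independent observations, the posterior score for the full dataset decomposes into single-observation posterior scores minus redundant prior scores. The sketch below shows that identity only, with hypothetical `posterior_score` and `prior_score` functions; the paper's contribution concerns carrying this composition along the diffusion path.

```python
import numpy as np

def tall_posterior_score(theta, observations, posterior_score, prior_score):
    """Score of log p(theta | x_1, ..., x_n) for i.i.d. observations:
    sum_i grad log p(theta | x_i) - (n - 1) * grad log p(theta)."""
    n = len(observations)
    per_obs = sum(posterior_score(theta, x) for x in observations)
    return per_obs - (n - 1) * prior_score(theta)
```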
arXiv Detail & Related papers (2024-04-11T09:23:36Z)
- Deep Learning for Efficient GWAS Feature Selection [0.0]
This paper introduces an extension to the feature selection methodology proposed by Mirzaei et al.
Our extended approach enhances the original method by introducing a Frobenius norm penalty into the student network.
Operating seamlessly in both supervised and unsupervised settings, our method employs two key neural networks.
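As a rough illustration of the described extension, a Frobenius-norm penalty can be added to the student network's objective. The architecture, layer choice, and penalty weight below are assumptions, not the paper's specification.

```python
import torch
import torch.nn as nn

# Hypothetical student network for a high-dimensional (e.g., GWAS) input.
student = nn.Sequential(nn.Linear(10_000, 64), nn.ReLU(), nn.Linear(64, 1))
lam = 1e-3  # penalty strength (illustrative hyperparameter)

def penalized_loss(pred, target):
    task = nn.functional.mse_loss(pred, target)
    # Frobenius-norm penalty on the first-layer weights.
    frob = torch.linalg.matrix_norm(student[0].weight, ord="fro") ** 2
    return task + lam * frob
```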
arXiv Detail & Related papers (2023-12-22T20:35:47Z)
- Learning in latent spaces improves the predictive accuracy of deep neural operators [0.0]
L-DeepONet is an extension of standard DeepONet, which leverages latent representations of high-dimensional PDE input and output functions identified with suitable autoencoders.
We show that L-DeepONet outperforms the standard approach in terms of both accuracy and computational efficiency across diverse time-dependent PDEs.
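A schematic of the latent-space pipeline, under stated assumptions: pre-trained encoder/decoder pairs compress the discretized input and output functions, and a plain MLP stands in for the DeepONet branch/trunk operator network; the dimensions are illustrative.

```python
import torch
import torch.nn as nn

d_grid, d_latent = 4096, 16  # discretization size and latent size (assumed)

enc_in  = nn.Sequential(nn.Linear(d_grid, 256), nn.ReLU(), nn.Linear(256, d_latent))
dec_out = nn.Sequential(nn.Linear(d_latent, 256), nn.ReLU(), nn.Linear(256, d_grid))
op_net  = nn.Sequential(nn.Linear(d_latent, 64), nn.ReLU(), nn.Linear(64, d_latent))

def predict_solution(u):
    """Map a discretized PDE input function u (batch, d_grid) to the predicted
    output function, regressing the operator entirely in latent space."""
    return dec_out(op_net(enc_in(u)))
```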
arXiv Detail & Related papers (2023-04-15T17:13:09Z)
- Bayesian Interpolation with Deep Linear Networks [92.1721532941863]
Characterizing how neural network depth, width, and dataset size jointly impact model quality is a central problem in deep learning theory.
We show that linear networks make provably optimal predictions at infinite depth.
We also show that with data-agnostic priors, Bayesian model evidence in wide linear networks is maximized at infinite depth.
arXiv Detail & Related papers (2022-12-29T20:57:46Z)
- DeepDC: Deep Distance Correlation as a Perceptual Image Quality Evaluator [53.57431705309919]
ImageNet pre-trained deep neural networks (DNNs) show notable transferability for building effective image quality assessment (IQA) models.
We develop a novel full-reference IQA (FR-IQA) model based exclusively on pre-trained DNN features.
We conduct comprehensive experiments to demonstrate the superiority of the proposed quality model on five standard IQA datasets.
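A simplified sketch of the idea: extract pre-trained features for the reference and distorted images and score their similarity with distance correlation, treating each spatial location of the feature map as one sample. The VGG-16 stage choice and scoring details are assumptions; the published model aggregates over multiple stages.

```python
import torch
import torchvision.models as models  # requires a recent torchvision

backbone = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features[:16].eval()

def dcor(a, b):
    """Biased sample distance correlation between row-sample matrices (n, d)."""
    def centered(z):
        d = torch.cdist(z, z)
        return d - d.mean(0, keepdim=True) - d.mean(1, keepdim=True) + d.mean()
    A, B = centered(a), centered(b)
    return torch.sqrt((A * B).mean() / torch.sqrt((A * A).mean() * (B * B).mean()))

@torch.no_grad()
def quality_score(ref, dist):
    """ref, dist: (1, 3, H, W) normalized images; higher score = more similar."""
    fr = backbone(ref).flatten(2).squeeze(0).T  # (spatial positions, channels)
    fd = backbone(dist).flatten(2).squeeze(0).T
    return dcor(fr, fd).item()
```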
arXiv Detail & Related papers (2022-11-09T14:57:27Z)
- The Imaginative Generative Adversarial Network: Automatic Data Augmentation for Dynamic Skeleton-Based Hand Gesture and Human Action Recognition [27.795763107984286]
We present a novel automatic data augmentation model, which approximates the distribution of the input data and samples new data from this distribution.
Our results show that the augmentation strategy is fast to train and can improve classification accuracy for both neural networks and state-of-the-art methods.
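Once the generative model is trained, the augmentation loop itself is simple; `G` below is a hypothetical class-conditional generator, and the latent dimension is illustrative.

```python
import torch

def augment(X_real, y_real, G, n_fake, latent_dim=128):
    """Append generator samples to the real training set."""
    z = torch.randn(n_fake, latent_dim)                     # latent prior
    y_fake = y_real[torch.randint(len(y_real), (n_fake,))]  # reuse real labels
    X_fake = G(z, y_fake).detach()                          # conditional samples
    return torch.cat([X_real, X_fake]), torch.cat([y_real, y_fake])
```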
arXiv Detail & Related papers (2021-05-27T11:07:09Z)
- Feature Selection Based on Sparse Neural Network Layer with Normalizing Constraints [0.0]
We propose a new neural-network-based feature selection approach that introduces two constraints, the satisfaction of which leads to a sparse FS layer.
The results confirm that the proposed Feature Selection Based on Sparse Neural Network Layer with Normalizing Constraints (SNEL-FS) method is able to select the important features and yields superior performance compared to other conventional FS methods.
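One plausible reading of such a layer, sketched below: an elementwise gate in front of the network, trained with an L1 sparsity penalty and periodically renormalized so the gate weights stay nonnegative and sum to one. The exact constraints in the paper may differ.

```python
import torch
import torch.nn as nn

class SparseFSLayer(nn.Module):
    """Elementwise feature gate with a sparsity penalty and normalization step."""
    def __init__(self, n_features):
        super().__init__()
        self.gate = nn.Parameter(torch.ones(n_features))

    def forward(self, x):
        return x * self.gate               # gate each input feature

    def sparsity_penalty(self):
        return self.gate.abs().sum()       # L1 term added to the training loss

    @torch.no_grad()
    def normalize(self):
        self.gate.clamp_(min=0.0)                        # nonnegativity (assumed)
        self.gate.div_(self.gate.sum().clamp(min=1e-8))  # weights sum to one
```

Features with the largest surviving gate weights are the ones selected.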
arXiv Detail & Related papers (2020-12-11T14:14:33Z)
- Image-based Automated Species Identification: Can Virtual Data Augmentation Overcome Problems of Insufficient Sampling? [0.0]
We present a two-level data augmentation approach to automated visual species identification.
The first level applies classic data augmentation approaches and generates fake images.
The second level employs additional synthetic sampling in feature space via an oversampling algorithm.
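The second level can be sketched as SMOTE-style interpolation between nearest neighbours in feature space; this simplified version (brute-force neighbour search, uniform interpolation) is an assumption about the oversampling algorithm, not the paper's exact procedure.

```python
import numpy as np

def oversample_features(F, n_new, k=5, seed=0):
    """F: (n, d) feature vectors of one class; returns n_new synthetic vectors."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(F))
        dist = np.linalg.norm(F - F[i], axis=1)
        nbrs = np.argsort(dist)[1:k + 1]                 # k nearest neighbours
        j = rng.choice(nbrs)
        out.append(F[i] + rng.random() * (F[j] - F[i]))  # interpolate
    return np.array(out)
```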
arXiv Detail & Related papers (2020-10-18T15:44:45Z)
- Fader Networks for domain adaptation on fMRI: ABIDE-II study [68.5481471934606]
We use 3D convolutional autoencoders to build a domain-irrelevant latent-space image representation and demonstrate that this method outperforms existing approaches on ABIDE data.
arXiv Detail & Related papers (2020-10-14T16:50:50Z)
- Beyond Dropout: Feature Map Distortion to Regularize Deep Neural Networks [107.77595511218429]
In this paper, we investigate the empirical Rademacher complexity related to intermediate layers of deep neural networks.
We propose a feature distortion method (Disout) for addressing the aforementioned problem.
The superiority of the proposed feature map distortion for producing deep neural networks with higher test performance is analyzed and demonstrated.
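In spirit, the regularizer replaces dropout's zeroing with a perturbation of randomly chosen feature-map entries during training. The sketch below is a simplification with assumed hyperparameters, not the published Disout algorithm (which ties the distortion to the Rademacher-complexity analysis).

```python
import torch
import torch.nn as nn

class FeatureDistortion(nn.Module):
    """Perturb a random subset of activations instead of zeroing them."""
    def __init__(self, rate=0.1, strength=1.0):
        super().__init__()
        self.rate, self.strength = rate, strength  # assumed hyperparameters

    def forward(self, x):
        if not self.training:
            return x                               # identity at test time
        mask = (torch.rand_like(x) < self.rate).float()
        return x + mask * self.strength * torch.randn_like(x)
```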
arXiv Detail & Related papers (2020-02-23T13:59:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.