Flexible variable selection in the presence of missing data
- URL: http://arxiv.org/abs/2202.12989v4
- Date: Tue, 21 Nov 2023 16:59:22 GMT
- Title: Flexible variable selection in the presence of missing data
- Authors: B. D. Williamson and Y. Huang
- Abstract summary: We propose a nonparametric variable selection algorithm combined with multiple imputation to develop flexible panels in the presence of missing-at-random data.
We show that our proposal has good operating characteristics and results in panels with higher classification and variable selection performance.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In many applications, it is of interest to identify a parsimonious set of
features, or panel, from multiple candidates that achieves a desired level of
performance in predicting a response. This task is often complicated in
practice by missing data arising from the sampling design or other random
mechanisms. Most recent work on variable selection in missing data contexts
relies in some part on a finite-dimensional statistical model, e.g., a
generalized or penalized linear model. In cases where this model is
misspecified, the selected variables may not all be truly scientifically
relevant and can result in panels with suboptimal classification performance.
To address this limitation, we propose a nonparametric variable selection
algorithm combined with multiple imputation to develop flexible panels in the
presence of missing-at-random data. We outline strategies based on the proposed
algorithm that achieve control of commonly used error rates. Through
simulations, we show that our proposal has good operating characteristics and
results in panels with higher classification and variable selection performance
compared to several existing penalized regression approaches in cases where a
generalized linear model is misspecified. Finally, we use the proposed method
to develop biomarker panels for separating pancreatic cysts with differing
malignancy potential in a setting where complicated missingness in the
biomarkers arose due to limited specimen volumes.
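For a rough sense of how such a procedure can be organized, the sketch below combines multiple imputation with a flexible, model-agnostic importance measure and pools selections across the imputed data sets. It is a minimal illustration under stated assumptions: the scikit-learn components (IterativeImputer, RandomForestClassifier, permutation importance) and the majority-vote pooling rule are stand-ins chosen for brevity, not the authors' implementation, which relies on nonparametric variable importance estimation with formal error rate control.

```python
# Minimal sketch (not the paper's implementation): multiple imputation followed by
# nonparametric variable selection, pooled across imputed data sets.
# Estimators, importance measure, and the majority-vote pooling rule are illustrative assumptions.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (activates IterativeImputer)
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split


def select_panel(X, y, n_imputations=10, vote_threshold=0.5, seed=0):
    """Return indices of candidate features selected in a majority of imputed data sets."""
    votes = np.zeros(X.shape[1])
    for m in range(n_imputations):
        # 1) Impute missing covariates (assumed missing at random), sampling from the
        #    posterior predictive so the M imputed data sets differ.
        X_imp = IterativeImputer(sample_posterior=True,
                                 random_state=seed + m).fit_transform(X)
        # 2) Fit a flexible (nonparametric) classifier on a training split.
        X_tr, X_te, y_tr, y_te = train_test_split(X_imp, y, stratify=y,
                                                  random_state=seed + m)
        clf = RandomForestClassifier(random_state=seed + m).fit(X_tr, y_tr)
        # 3) Rank variables with a model-agnostic importance measure on held-out data.
        imp = permutation_importance(clf, X_te, y_te, n_repeats=20,
                                     random_state=seed + m)
        # 4) Record which variables appear useful in this imputed data set.
        votes += (imp.importances_mean > 0).astype(float)
    # 5) Pool across imputations: keep variables flagged in at least `vote_threshold`
    #    of the imputed data sets.
    return np.where(votes / n_imputations >= vote_threshold)[0]
```

Note that the paper's strategies additionally control commonly used error rates (e.g., the family-wise error rate or false discovery rate) when deciding which variables to retain; the simple "importance greater than zero" vote above is only a placeholder for that step.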
Related papers
- Plug-and-Play Controllable Generation for Discrete Masked Models [27.416952690340903]
This article makes discrete masked models for the generative modeling of discrete data controllable.
We propose a novel plug-and-play framework based on importance sampling that bypasses the need for training a conditional score.
Our framework is agnostic to the choice of control criteria, requires no gradient information, and is well-suited for tasks such as posterior sampling, Bayesian inverse problems, and constrained generation.
arXiv Detail & Related papers (2024-10-03T02:00:40Z)
- Embedded Multi-label Feature Selection via Orthogonal Regression [45.55795914923279]
State-of-the-art embedded multi-label feature selection algorithms based on least squares regression cannot preserve sufficient discriminative information in multi-label data.
A novel embedded multi-label feature selection method is proposed to facilitate multi-label feature selection.
Extensive experimental results on ten multi-label data sets demonstrate the effectiveness of GRROOR.
arXiv Detail & Related papers (2024-03-01T06:18:40Z)
- Tackling Diverse Minorities in Imbalanced Classification [80.78227787608714]
Imbalanced datasets are commonly observed in various real-world applications, presenting significant challenges in training classifiers.
We propose generating synthetic samples iteratively by mixing data samples from both minority and majority classes.
We demonstrate the effectiveness of our proposed framework through extensive experiments conducted on seven publicly available benchmark datasets.
arXiv Detail & Related papers (2023-08-28T18:48:34Z)
- A model-free feature selection technique of feature screening and random forest based recursive feature elimination [0.0]
We propose a model-free feature selection method for ultra-high dimensional data with a massive number of features.
We show that the proposed method is selection consistent and $L$ consistent under weak regularity conditions.
arXiv Detail & Related papers (2023-02-15T03:39:16Z)
- Posterior Collapse and Latent Variable Non-identifiability [54.842098835445]
We propose a class of latent-identifiable variational autoencoders, deep generative models which enforce identifiability without sacrificing flexibility.
Across synthetic and real datasets, latent-identifiable variational autoencoders outperform existing methods in mitigating posterior collapse and providing meaningful representations of the data.
arXiv Detail & Related papers (2023-01-02T06:16:56Z)
- Composite Feature Selection using Deep Ensembles [130.72015919510605]
We investigate the problem of discovering groups of predictive features without predefined grouping.
We introduce a novel deep learning architecture that uses an ensemble of feature selection models to find predictive groups.
We propose a new metric to measure similarity between discovered groups and the ground truth.
arXiv Detail & Related papers (2022-11-01T17:49:40Z)
- Bayesian Variable Selection in a Million Dimensions [7.366246663367533]
We introduce an efficient MCMC scheme whose cost per iteration is sublinear in P.
We show how this scheme can be extended to generalized linear models for count data.
In experiments we demonstrate the effectiveness of our methods, including on cancer and maize genomic data.
arXiv Detail & Related papers (2022-08-02T00:11:15Z)
- Equivariance Allows Handling Multiple Nuisance Variables When Analyzing Pooled Neuroimaging Datasets [53.34152466646884]
In this paper, we show how bringing recent results on equivariant representation learning instantiated on structured spaces together with simple use of classical results on causal inference provides an effective practical solution.
We demonstrate how our model allows dealing with more than one nuisance variable under some assumptions and can enable analysis of pooled scientific datasets in scenarios that would otherwise entail removing a large portion of the samples.
arXiv Detail & Related papers (2022-03-29T04:54:06Z)
- Selecting the suitable resampling strategy for imbalanced data classification regarding dataset properties [62.997667081978825]
In many application domains such as medicine, information retrieval, cybersecurity, social media, etc., datasets used for inducing classification models often have an unequal distribution of the instances of each class.
This situation, known as imbalanced data classification, causes low predictive performance for the minority class examples.
Oversampling and undersampling techniques are well-known strategies to deal with this problem by balancing the number of examples of each class.
arXiv Detail & Related papers (2021-12-15T18:56:39Z)
- Variable selection with missing data in both covariates and outcomes: Imputation and machine learning [1.0333430439241666]
The missing data issue is ubiquitous in health studies.
Machine learning methods relax parametric assumptions.
XGBoost and BART have the overall best performance across various settings.
arXiv Detail & Related papers (2021-04-06T20:18:29Z)
- Minimax Active Learning [61.729667575374606]
Active learning aims to develop label-efficient algorithms by querying the most representative samples to be labeled by a human annotator.
Current active learning techniques either rely on model uncertainty to select the most uncertain samples or use clustering or reconstruction to choose the most diverse set of unlabeled examples.
We develop a semi-supervised minimax entropy-based active learning algorithm that leverages both uncertainty and diversity in an adversarial manner.
arXiv Detail & Related papers (2020-12-18T19:03:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.