Feature Selection from High-Dimensional Data with Very Low Sample Size: A Cautionary Tale
- URL: http://arxiv.org/abs/2008.12025v1
- Date: Thu, 27 Aug 2020 10:00:58 GMT
- Title: Feature Selection from High-Dimensional Data with Very Low Sample Size: A Cautionary Tale
- Authors: Ludmila I. Kuncheva, Clare E. Matthews, Álvar Arnaiz-González, Juan J. Rodríguez
- Abstract summary: In classification problems, the purpose of feature selection is to identify a small subset of the original feature set.
This study is a cautionary tale demonstrating why feature selection in such cases may lead to undesirable results.
- Score: 1.491109220586182
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In classification problems, the purpose of feature selection is to identify a
small, highly discriminative subset of the original feature set. In many
applications, the dataset may have thousands of features and only a few dozen
samples (sometimes termed 'wide'). This study is a cautionary tale
demonstrating why feature selection in such cases may lead to undesirable
results. To highlight the sample size issue, we derive the required sample
size for declaring two features different. Using an example, we illustrate the
heavy dependency between the feature set and the classifier, which calls into
question classifier-agnostic feature selection methods. However, the choice of
a good selector-classifier pair is hampered by the low correlation between the
estimated and the true error rate, as illustrated by another example. While
previous studies raising similar issues validate their message mostly with
synthetic data, here we carried out an experiment with 20 real datasets. We
created an exaggerated scenario whereby we cut a very small portion of the
data (10 instances per class) for feature selection and used the rest of the
data for testing. The results reinforce the caution and suggest that it may be
better to refrain from feature selection on very wide datasets than to return
misleading output to the user.
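The required-sample-size point can be illustrated with a generic two-sample power calculation (a minimal sketch, not the paper's own derivation): the number of samples needed to declare two means different grows quadratically as the effect size shrinks.

```python
from scipy.stats import norm

def required_n_per_group(effect_size, alpha=0.05, power=0.8):
    """Approximate per-group sample size for a two-sample test to detect
    a standardized mean difference `effect_size` (normal approximation:
    n = 2 * ((z_{1-alpha/2} + z_{power}) / d) ** 2)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return 2 * ((z_alpha + z_beta) / effect_size) ** 2

for d in (1.0, 0.5, 0.2):
    print(f"effect size {d}: ~{required_n_per_group(d):.0f} samples per group")
# Small effects need hundreds of samples per group -- far more than the
# few dozen instances available in a typical 'wide' dataset.
```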
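The exaggerated scenario (feature selection on 10 instances per class, testing on the rest) can be mimicked in a few lines; the synthetic dataset, selector, and classifier below are illustrative stand-ins, not the paper's experimental setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Stand-in 'wide' data: many features, moderate sample count.
X, y = make_classification(n_samples=400, n_features=2000,
                           n_informative=20, random_state=0)

# Cut 10 instances per class for feature selection + training.
train_idx = np.hstack([rng.choice(np.where(y == c)[0], 10, replace=False)
                       for c in np.unique(y)])
test_idx = np.setdiff1d(np.arange(len(y)), train_idx)

selector = SelectKBest(f_classif, k=20).fit(X[train_idx], y[train_idx])
clf = KNeighborsClassifier().fit(selector.transform(X[train_idx]), y[train_idx])
acc = clf.score(selector.transform(X[test_idx]), y[test_idx])
print(f"held-out accuracy after selecting on 20 instances: {acc:.3f}")
```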
Related papers
- Unsupervised Feature Selection Algorithm Based on Dual Manifold Re-ranking [5.840228332438659]
This paper proposes an unsupervised feature selection algorithm based on dual manifold re-ranking (DMRR).
Different similarity matrices are constructed to depict the manifold structures among samples, between samples and features, and among features themselves.
Experiments comparing DMRR with three original unsupervised feature selection algorithms and two unsupervised feature selection post-processing algorithms confirm that the importance information of different samples and the dual relationship between samples and features are beneficial for achieving better feature selection.
arXiv Detail & Related papers (2024-10-27T09:29:17Z)
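A toy construction of the three similarity matrices described in the DMRR entry above, under common default choices (RBF kernels for sample-sample and feature-feature similarity, and the rescaled data matrix as a sample-feature affinity) that may differ from the paper's exact definitions:

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.preprocessing import minmax_scale

X = np.random.default_rng(0).normal(size=(50, 30))  # 50 samples, 30 features

S_samples = rbf_kernel(X)      # similarity among samples (n x n)
S_features = rbf_kernel(X.T)   # similarity among features (d x d)
# Stand-in for the sample-feature affinity: the data matrix rescaled
# to [0, 1] (a hypothetical choice; the paper's definition may differ).
S_cross = minmax_scale(X)      # n x d
print(S_samples.shape, S_features.shape, S_cross.shape)
```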
- Causal Feature Selection via Transfer Entropy [59.999594949050596]
Causal discovery aims to identify causal relationships between features with observational data.
We introduce a new causal feature selection approach that combines forward and backward feature selection procedures.
We provide theoretical guarantees on the regression and classification errors for both the exact and the finite-sample cases.
arXiv Detail & Related papers (2023-10-17T08:04:45Z)
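The forward selection procedure mentioned in the entry above can be sketched as a greedy loop; transfer entropy itself needs temporal data, so cross-validated accuracy stands in as the candidate score here:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Greedy forward-selection skeleton. The paper scores candidates with
# transfer entropy; CV accuracy is an illustrative stand-in.
X, y = make_classification(n_samples=200, n_features=15, random_state=0)

selected, remaining = [], list(range(X.shape[1]))
for _ in range(5):  # grow the subset to 5 features
    def score(j):
        cols = selected + [j]
        return cross_val_score(KNeighborsClassifier(), X[:, cols], y, cv=3).mean()
    best = max(remaining, key=score)
    selected.append(best)
    remaining.remove(best)
print("selected features:", selected)
```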
- IDEAL: Influence-Driven Selective Annotations Empower In-Context Learners in Large Language Models [66.32043210237768]
This paper introduces an influence-driven selective annotation method.
It aims to minimize annotation costs while improving the quality of in-context examples.
Experiments confirm the superiority of the proposed method on various benchmarks.
arXiv Detail & Related papers (2023-10-16T22:53:54Z)
- Parallel feature selection based on the trace ratio criterion [4.30274561163157]
This work presents a novel parallel feature selection approach for classification, namely Parallel Feature Selection using Trace criterion (PFST).
Our method uses the trace criterion, a measure of class separability used in Fisher's Discriminant Analysis, to evaluate feature usefulness.
The experiments show that our method can produce a small set of features in a fraction of the time required by the other methods under comparison.
arXiv Detail & Related papers (2022-03-03T10:50:33Z)
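A minimal implementation of the trace criterion that PFST builds on (the class-separability measure only, not the parallel selection pipeline), using the standard Fisher scatter definitions:

```python
import numpy as np

def trace_ratio(X, y):
    """Class separability of a feature subset X (n x k):
    trace(between-class scatter) / trace(within-class scatter)."""
    mean_all = X.mean(axis=0)
    tr_b = tr_w = 0.0
    for c in np.unique(y):
        Xc = X[y == c]
        mean_c = Xc.mean(axis=0)
        tr_b += len(Xc) * np.sum((mean_c - mean_all) ** 2)
        tr_w += np.sum((Xc - mean_c) ** 2)
    return tr_b / tr_w

rng = np.random.default_rng(0)
y = np.repeat([0, 1], 50)
X = rng.normal(size=(100, 3)) + y[:, None] * [2.0, 0.0, 0.0]
# Feature 0 separates the classes; feature 1 is pure noise.
print(trace_ratio(X[:, [0]], y), trace_ratio(X[:, [1]], y))
```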
- Selecting the suitable resampling strategy for imbalanced data classification regarding dataset properties [62.997667081978825]
In many application domains, such as medicine, information retrieval, cybersecurity, and social media, datasets used for inducing classification models often have an unequal distribution of instances across classes.
This situation, known as imbalanced data classification, causes low predictive performance for the minority class examples.
Oversampling and undersampling techniques are well-known strategies to deal with this problem by balancing the number of examples of each class.
arXiv Detail & Related papers (2021-12-15T18:56:39Z)
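As a concrete sketch of the oversampling strategy mentioned above, a minimal random oversampler that replicates instances until classes are balanced (libraries such as imbalanced-learn provide production implementations):

```python
import numpy as np

def random_oversample(X, y, seed=0):
    """Balance classes by drawing (with replacement) the same number of
    instances from every class, equal to the majority-class count."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    idx = np.hstack([rng.choice(np.where(y == c)[0], n_max, replace=True)
                     for c in classes])
    return X[idx], y[idx]

# Toy 90/10 imbalanced data: class 1 is the minority.
y = np.array([0] * 90 + [1] * 10)
X = np.arange(100, dtype=float).reshape(-1, 1)
Xb, yb = random_oversample(X, y)
print(np.bincount(yb))  # -> [90 90]
```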
- Learning Debiased and Disentangled Representations for Semantic Segmentation [52.35766945827972]
We propose a model-agnostic training scheme for semantic segmentation.
By randomly eliminating certain class information in each training iteration, we effectively reduce feature dependencies among classes.
Models trained with our approach demonstrate strong results on multiple semantic segmentation benchmarks.
arXiv Detail & Related papers (2021-10-31T16:15:09Z)
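One hypothetical reading of "randomly eliminating certain class information in each training iteration" is to zero the label channels of randomly chosen classes per iteration; the paper's actual scheme may differ:

```python
import numpy as np

def drop_random_classes(onehot_labels, drop_prob=0.3, seed=None):
    """Zero the label channels of randomly chosen classes so the model
    cannot rely on them in this iteration (hypothetical sketch)."""
    rng = np.random.default_rng(seed)
    n_classes = onehot_labels.shape[-1]
    keep = rng.random(n_classes) >= drop_prob  # per-class keep mask
    return onehot_labels * keep                # broadcast over class axis

labels = np.eye(4)[np.random.default_rng(0).integers(0, 4, size=10)]
print(drop_random_classes(labels, seed=1).sum(axis=0))
```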
- Few-shot Learning for Unsupervised Feature Selection [59.75321498170363]
We propose a few-shot learning method for unsupervised feature selection.
The proposed method can select a subset of relevant features in a target task given a few unlabeled target instances.
We experimentally demonstrate that the proposed method outperforms existing feature selection methods.
arXiv Detail & Related papers (2021-07-02T03:52:51Z)
- Probabilistic Value Selection for Space Efficient Model [10.109875612945658]
Two probabilistic methods based on information-theoretic metrics are proposed: PVS and P+VS.
Experimental results show that value selection can achieve a balance between accuracy and model size reduction.
arXiv Detail & Related papers (2020-07-09T08:45:13Z)
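A hypothetical sketch of value selection: score each (feature, value) pair by the mutual information of its indicator with the class label and keep the top-scoring values; the paper's PVS criterion may differ:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(300, 4))  # categorical features
y = (X[:, 0] == 2).astype(int)         # class depends on one value

scores = {}
for j in range(X.shape[1]):
    for v in np.unique(X[:, j]):
        indicator = (X[:, j] == v).astype(float).reshape(-1, 1)
        scores[(j, v)] = mutual_info_classif(
            indicator, y, discrete_features=True, random_state=0)[0]
top = sorted(scores, key=scores.get, reverse=True)[:3]
print("most informative (feature, value) pairs:", top)
```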
- Improving Multi-Turn Response Selection Models with Complementary Last-Utterance Selection by Instance Weighting [84.9716460244444]
We consider utilizing the underlying correlation in the data resource itself to derive different kinds of supervision signals.
We conduct extensive experiments on two public datasets and obtain significant improvements on both.
arXiv Detail & Related papers (2020-02-18T06:29:01Z)