Unsupervised Feature Selection Algorithm Based on Dual Manifold Re-ranking
- URL: http://arxiv.org/abs/2410.20388v1
- Date: Sun, 27 Oct 2024 09:29:17 GMT
- Title: Unsupervised Feature Selection Algorithm Based on Dual Manifold Re-ranking
- Authors: Yunhui Liang, Jianwen Gan, Yan Chen, Peng Zhou, Liang Du
- Abstract summary: This paper proposes an unsupervised feature selection algorithm based on dual manifold re-ranking (DMRR).
Different similarity matrices are constructed to depict the manifold structures among samples, between samples and features, and among features themselves.
By comparing DMRR with three original unsupervised feature selection algorithms and two unsupervised feature selection post-processing algorithms, experimental results confirm that the importance information of different samples and the dual relationship between samples and features are beneficial for achieving better feature selection.
- Score: 5.840228332438659
- Abstract: High-dimensional data is commonly encountered in numerous data analysis tasks. Feature selection techniques aim to identify the most representative features from the original high-dimensional data. Due to the absence of class label information, it is significantly more challenging to select appropriate features in unsupervised learning scenarios compared to supervised ones. Traditional unsupervised feature selection methods typically score the features of samples based on certain criteria, treating samples indiscriminately. However, these approaches fail to fully capture the internal structure of the data. Different samples should carry different importance, and sample weights and feature weights have a dual relationship in which they influence each other. Therefore, an unsupervised feature selection algorithm based on dual manifold re-ranking (DMRR) is proposed in this paper. Different similarity matrices are constructed to depict the manifold structures among samples, between samples and features, and among features themselves. Then, manifold re-ranking is performed by combining the initial scores of samples and features. By comparing DMRR with three original unsupervised feature selection algorithms and two unsupervised feature selection post-processing algorithms, experimental results confirm that the importance information of different samples and the dual relationship between samples and features are beneficial for achieving better feature selection.
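As a rough illustration of the re-ranking step the abstract describes, the following is a minimal sketch. It assumes RBF-kernel similarity graphs, row-normalized propagation matrices, per-feature variance as the initial feature score, and uniform initial sample scores; the graph choices, parameters (alpha, beta, n_iter), and function names are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def dual_manifold_rerank(X, s_init, f_init, alpha=0.6, beta=0.2, n_iter=50):
    """Jointly propagate sample scores s and feature scores f over a
    sample-sample graph, a feature-feature graph, and a sample-feature graph.
    X has shape (n_samples, n_features)."""
    W_ss = rbf_kernel(X)      # sample-sample affinities, (n, n)
    W_ff = rbf_kernel(X.T)    # feature-feature affinities, (d, d)
    W_sf = np.abs(X)          # sample-feature affinities, (n, d) (simple stand-in)

    def row_normalize(W):
        return W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)

    S_ss, S_ff, S_sf = row_normalize(W_ss), row_normalize(W_ff), row_normalize(W_sf)
    S_fs = row_normalize(W_sf.T)

    s, f = s_init.astype(float).copy(), f_init.astype(float).copy()
    for _ in range(n_iter):
        # Each score is re-ranked from its own graph, the dual graph,
        # and its initial value.
        s = alpha * S_ss @ s + beta * S_sf @ f + (1 - alpha - beta) * s_init
        f = alpha * S_ff @ f + beta * S_fs @ s + (1 - alpha - beta) * f_init
    return s, f

# Toy usage: uniform initial sample scores, variance as the initial feature
# score; keep the five features with the highest re-ranked scores.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
s_scores, f_scores = dual_manifold_rerank(X, np.full(100, 0.01), X.var(axis=0))
selected = np.argsort(f_scores)[::-1][:5]
```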
Related papers
- Downstream-Pretext Domain Knowledge Traceback for Active Learning [138.02530777915362]
We propose a downstream-pretext domain knowledge traceback (DOKT) method that traces the data interactions of downstream knowledge and pre-training guidance.
DOKT consists of a traceback diversity indicator and a domain-based uncertainty estimator.
Experiments conducted on ten datasets show that our model outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2024-07-20T01:34:13Z) - Feature Selection as Deep Sequential Generative Learning [50.00973409680637]
We develop a deep variational transformer model trained with a joint objective of sequential reconstruction, variational, and performance-evaluator losses.
Our model can distill feature selection knowledge and learn a continuous embedding space to map feature selection decision sequences into embedding vectors associated with utility scores.
arXiv Detail & Related papers (2024-03-06T16:31:56Z) - Contributing Dimension Structure of Deep Feature for Coreset Selection [26.759457501199822]
Coreset selection seeks to choose a subset of crucial training samples for efficient learning.
Sample selection hinges on two main aspects: how well a sample's representation enhances performance and how sample diversity helps avert overfitting.
Existing methods typically measure both the representation and diversity of data based on similarity metrics.
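To make the similarity-based baseline concrete, here is a minimal greedy sketch that scores candidates by representativeness minus redundancy; the scoring rule and names are illustrative and are not the paper's contributing-dimension-structure method.

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def greedy_coreset(features, k, lam=0.5):
    """Greedy coreset selection on (deep) feature vectors:
    score = representativeness (mean similarity to the full set)
            - lam * redundancy (max similarity to already-chosen samples)."""
    S = cosine_similarity(features)            # (n, n) pairwise similarity
    representativeness = S.mean(axis=1)
    chosen = []
    for _ in range(k):
        redundancy = S[:, chosen].max(axis=1) if chosen else np.zeros(len(features))
        scores = representativeness - lam * redundancy
        scores[chosen] = -np.inf               # never pick a sample twice
        chosen.append(int(np.argmax(scores)))
    return chosen

# Example: pick a 10-sample coreset from 200 random 64-d feature vectors.
feats = np.random.default_rng(1).normal(size=(200, 64))
subset = greedy_coreset(feats, k=10)
```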
arXiv Detail & Related papers (2024-01-29T14:47:26Z) - Unified View Imputation and Feature Selection Learning for Incomplete Multi-view Data [13.079847265195127]
Multi-view unsupervised feature selection (MUFS) is an effective technology for reducing dimensionality in machine learning.
Existing methods cannot directly deal with incomplete multi-view data where some samples are missing in certain views.
UNIFIER explores the local structure of multi-view data by adaptively learning similarity-induced graphs from both the sample and feature spaces.
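The sketch below only shows the two spaces being used: it builds a fixed kNN/RBF affinity graph over samples and another over features. UNIFIER learns such graphs adaptively, so this is a simplified stand-in, with the kernel, neighborhood size, and symmetrization chosen by us.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def similarity_graphs(X, k=10):
    """Build two similarity-induced graphs: one over samples (rows of X)
    and one over features (columns of X), using kNN distances turned into
    RBF-style weights."""
    def knn_affinity(Z):
        G = kneighbors_graph(Z, n_neighbors=k, mode="distance").toarray()
        sigma = G[G > 0].mean()                          # bandwidth heuristic
        W = np.where(G > 0, np.exp(-(G ** 2) / (2 * sigma ** 2)), 0.0)
        return np.maximum(W, W.T)                        # symmetrize
    return knn_affinity(X), knn_affinity(X.T)            # sample graph, feature graph

W_samples, W_features = similarity_graphs(np.random.default_rng(2).normal(size=(80, 30)))
```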
arXiv Detail & Related papers (2024-01-19T08:26:44Z) - Revisiting the Evaluation of Image Synthesis with GANs [55.72247435112475]
This study presents an empirical investigation into the evaluation of synthesis performance, with generative adversarial networks (GANs) as a representative of generative models.
In particular, we make in-depth analyses of various factors, including how to represent a data point in the representation space, how to calculate a fair distance using selected samples, and how many instances to use from each set.
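One concrete instance of the "which representation, which distance" choices analysed here is the Fréchet distance between Gaussians fitted to two feature sets, the quantity behind FID. The sketch uses random vectors as placeholder embeddings; a real evaluation would extract features with a pretrained encoder such as an Inception network.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_real, feats_fake):
    """Fréchet distance between Gaussians fitted to two sets of feature vectors."""
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):          # drop tiny imaginary parts from sqrtm
        covmean = covmean.real
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2 * covmean))

# Placeholder 64-d "embeddings"; in practice these come from a pretrained encoder.
rng = np.random.default_rng(3)
print(frechet_distance(rng.normal(size=(500, 64)), rng.normal(size=(500, 64))))
```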
arXiv Detail & Related papers (2023-04-04T17:54:32Z) - Automated Supervised Feature Selection for Differentiated Patterns of Care [5.3825788156200565]
The pipeline included three types of feature selection techniques (filter, wrapper, and embedded methods) to select the top K features.
The selected features were tested in the existing multi-dimensional subset scanning (MDSS) framework, where the most anomalous subpopulations, most anomalous subsets, propensity scores, and measures of effect were recorded to assess their performance.
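For reference, the three selection families can be instantiated with standard scikit-learn components as below. The synthetic dataset, the choice of estimators, and K are illustrative assumptions; this is not the paper's pipeline or data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif, RFE, SelectFromModel
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=40, n_informative=8, random_state=0)
K = 10

# Filter: rank features by a statistic computed independently of any model.
filter_idx = SelectKBest(mutual_info_classif, k=K).fit(X, y).get_support(indices=True)

# Wrapper: recursively eliminate features using a model's weights.
wrapper_idx = RFE(LogisticRegression(max_iter=1000),
                  n_features_to_select=K).fit(X, y).get_support(indices=True)

# Embedded: selection falls out of a sparsity-inducing (L1-penalized) model.
l1_model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
embedded_idx = SelectFromModel(l1_model, threshold=-np.inf,
                               max_features=K).fit(X, y).get_support(indices=True)
```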
arXiv Detail & Related papers (2021-11-05T13:27:18Z) - Auto-weighted Multi-view Feature Selection with Graph Optimization [90.26124046530319]
We propose a novel unsupervised multi-view feature selection model based on graph learning.
The contributions are threefold: (1) during the feature selection procedure, the consensus similarity graph shared by different views is learned.
Experiments on various datasets demonstrate the superiority of the proposed method compared with the state-of-the-art methods.
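A minimal sketch of learning a shared (consensus) graph follows, using a common auto-weighting heuristic in which each view's weight is inversely proportional to its disagreement with the current consensus; this is not necessarily the paper's exact update rule, and the function and variable names are ours.

```python
import numpy as np

def consensus_graph(view_graphs, n_iter=20):
    """Alternate between (1) forming the consensus graph as a weighted average
    of per-view affinity graphs and (2) re-weighting each view inversely to
    its Frobenius-norm disagreement with the consensus."""
    V = len(view_graphs)
    w = np.ones(V) / V
    for _ in range(n_iter):
        S = sum(wi * Wi for wi, Wi in zip(w, view_graphs)) / w.sum()
        dists = np.array([np.linalg.norm(S - Wi) for Wi in view_graphs])
        w = 1.0 / (2.0 * np.maximum(dists, 1e-12))
    return S / S.max(), w / w.sum()

# Example with three random symmetric "view" affinity matrices.
rng = np.random.default_rng(4)
views = [(lambda A: (A + A.T) / 2)(rng.random((50, 50))) for _ in range(3)]
S, weights = consensus_graph(views)
```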
arXiv Detail & Related papers (2021-04-11T03:25:25Z) - Sparse PCA via $l_{2,p}$-Norm Regularization for Unsupervised Feature Selection [138.97647716793333]
We propose a simple and efficient unsupervised feature selection method by combining reconstruction error with $l_{2,p}$-norm regularization.
We present an efficient optimization algorithm to solve the proposed unsupervised model, and analyse the convergence and computational complexity of the algorithm theoretically.
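For orientation, an objective of this type is commonly written as follows; this is a sketch in our own notation, and the paper's exact formulation, constraints (e.g. orthogonality of $W$), and choice of $p$ may differ:

$$
\min_{W \in \mathbb{R}^{d \times k}} \; \lVert X - X W W^{\top} \rVert_F^2 \; + \; \lambda \, \lVert W \rVert_{2,p},
\qquad
\lVert W \rVert_{2,p} = \Bigl( \sum_{i=1}^{d} \lVert w^{i} \rVert_2^{\,p} \Bigr)^{1/p}, \quad 0 < p \le 1,
$$

where $w^{i}$ denotes the $i$-th row of the projection $W$; the row norms $\lVert w^{i} \rVert_2$ then serve as feature importance scores, and the $l_{2,p}$ penalty drives uninformative rows toward zero.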
arXiv Detail & Related papers (2020-12-29T04:08:38Z) - Supervised Feature Subset Selection and Feature Ranking for Multivariate Time Series without Feature Extraction [78.84356269545157]
We introduce supervised feature ranking and feature subset selection algorithms for MTS classification.
Unlike most existing supervised/unsupervised feature selection algorithms for MTS, our techniques do not require a feature extraction step to generate a one-dimensional feature vector from the time series.
arXiv Detail & Related papers (2020-05-01T07:46:29Z) - Outlier Detection Ensemble with Embedded Feature Selection [42.8338013000469]
We propose an outlier detection ensemble framework with embedded feature selection (ODEFS).
For each random sub-sampling based learning component, ODEFS unifies feature selection and outlier detection into a pairwise ranking formulation.
We adopt the thresholded self-paced learning to simultaneously optimize feature selection and example selection.
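The thresholded self-paced idea itself is simple, as the generic sketch below shows: examples whose current loss is under a threshold participate in the next update, and the threshold grows over rounds to admit harder examples. This illustrates the scheme in isolation, not ODEFS's pairwise-ranking objective, and the loss values and schedule are synthetic.

```python
import numpy as np

def self_paced_weights(losses, threshold):
    """Hard self-paced weighting: weight 1 for examples whose loss is below
    the threshold (they join the next update), 0 for the rest."""
    return (losses <= threshold).astype(float)

# Toy curriculum: a growing threshold admits more examples each round.
rng = np.random.default_rng(5)
losses = rng.exponential(scale=1.0, size=200)        # stand-in per-example losses
for t, thr in enumerate(np.quantile(losses, [0.3, 0.5, 0.7, 0.9])):
    v = self_paced_weights(losses, thr)
    # ... update feature weights / ranking model on examples with v == 1 ...
    print(f"round {t}: {int(v.sum())} examples selected (threshold={thr:.2f})")
```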
arXiv Detail & Related papers (2020-01-15T13:14:10Z)