Joint Learning of Unsupervised Multi-view Feature and Instance Co-selection with Cross-view Imputation
- URL: http://arxiv.org/abs/2512.15574v1
- Date: Wed, 17 Dec 2025 16:29:48 GMT
- Title: Joint Learning of Unsupervised Multi-view Feature and Instance Co-selection with Cross-view Imputation
- Authors: Yuxin Cai, Yanyong Huang, Jinyuan Chang, Dongjie Wang, Tianrui Li, Xiaoyi Jiang
- Abstract summary: We propose a novel co-selection method, termed Joint learning of Unsupervised multI-view feature and instance Co-selection with cross-viEw imputation (JUICE). JUICE first reconstructs incomplete multi-view data using available observations, bringing missing data recovery and feature and instance co-selection together in a unified framework.
- Score: 17.24617460579791
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Feature and instance co-selection, which aims to reduce both feature dimensionality and sample size by identifying the most informative features and instances, has attracted considerable attention in recent years. However, when dealing with unlabeled incomplete multi-view data, where some samples are missing in certain views, existing methods typically first impute the missing data and then concatenate all views into a single dataset for subsequent co-selection. Such a strategy treats co-selection and missing data imputation as two independent processes, overlooking potential interactions between them. The inter-sample relationships gleaned from co-selection can aid imputation, which in turn enhances co-selection performance. Additionally, simply merging multi-view data fails to capture the complementary information among views, ultimately limiting co-selection effectiveness. To address these issues, we propose a novel co-selection method, termed Joint learning of Unsupervised multI-view feature and instance Co-selection with cross-viEw imputation (JUICE). JUICE first reconstructs incomplete multi-view data using available observations, bringing missing data recovery and feature and instance co-selection together in a unified framework. Then, JUICE leverages cross-view neighborhood information to learn inter-sample relationships and further refine the imputation of missing values during reconstruction. This enables the selection of more representative features and instances. Extensive experiments demonstrate that JUICE outperforms state-of-the-art methods.
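The abstract's key mechanism, using cross-view neighborhood information to impute missing values, can be illustrated with a generic sketch (not the authors' implementation): for a sample missing in one view, find its nearest neighbors in a view where it is observed, then fill the missing entry with the neighbors' mean in the incomplete view. The function name and interface below are hypothetical.

```python
import numpy as np

def cross_view_knn_impute(views, masks, k=3):
    """Minimal cross-view kNN imputation sketch.
    views: list of (n, d_v) arrays; masks: list of length-n boolean arrays
    (True = sample observed in that view). Returns imputed copies."""
    n = views[0].shape[0]
    out = [v.copy() for v in views]
    for v, (X, m) in enumerate(zip(views, masks)):
        for i in np.where(~m)[0]:
            # use any other view where sample i is observed to find neighbors
            for u in range(len(views)):
                if u == v or not masks[u][i]:
                    continue
                # candidates must be observed in view u (for distances)
                # and in view v (to supply values)
                cand = np.where(masks[u] & m & (np.arange(n) != i))[0]
                if cand.size == 0:
                    continue
                d = np.linalg.norm(views[u][cand] - views[u][i], axis=1)
                nbrs = cand[np.argsort(d)[:k]]
                out[v][i] = X[nbrs].mean(axis=0)  # mean of neighbors' values
                break
    return out
```

In JUICE itself, imputation and co-selection are coupled in one objective rather than run as a one-shot preprocessing step like this.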
Related papers
- Cross-view Joint Learning for Mixed-Missing Multi-view Unsupervised Feature Selection [24.037106656954666]
We propose CLIM-FS, a novel IMUFS method designed to address the mixed-missing problem. We integrate the imputation of both missing views and variables into a feature selection model based on nonnegative matrix factorization. We fully leverage the consensus cluster structure and cross-view local geometrical structure to enhance the synergistic learning process.
arXiv Detail & Related papers (2025-11-15T15:34:52Z)
- A Semi-supervised Generative Model for Incomplete Multi-view Data Integration with Missing Labels [12.79532395630597]
We propose a semi-supervised generative model that utilizes both labeled and unlabeled samples in a unified framework. Compared to existing approaches, our model achieves better predictive and imputation performance on both image and multi-omics data with missing views and limited labeled samples.
arXiv Detail & Related papers (2025-08-15T03:10:18Z)
- Uncertainty-Aware Global-View Reconstruction for Multi-View Multi-Label Feature Selection [4.176139684578661]
We propose a unified model constructed from the perspective of global-view reconstruction. We incorporate the perception of sample uncertainty during the reconstruction process to enhance trustworthiness. Experimental results demonstrate the superior performance of our method on multi-view datasets.
arXiv Detail & Related papers (2025-03-18T08:35:39Z)
- CONDEN-FI: Consistency and Diversity Learning-based Multi-View Unsupervised Feature and Instance Co-Selection [8.985835077643953]
We propose a CONsistency and DivErsity learNing-based multi-view unsupervised Feature and Instance co-selection method (CONDEN-FI). CONDEN-FI reconstructs multi-view data from both the sample and feature spaces to learn representations that are consistent across views and specific to each view. An efficient algorithm is developed to solve the resultant optimization problem.
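The idea of reconstructing data from both the sample and feature spaces to score what to keep can be sketched with a simple self-representation baseline (not CONDEN-FI's actual model; the function name and ridge formulation are illustrative assumptions): reconstruct X from its own columns (X ≈ XW) to score features, and from its own rows (X ≈ CX) to score instances, then keep those with the largest coefficient-row norms.

```python
import numpy as np

def coselect_scores(X, lam=0.1):
    """Self-representation co-selection sketch.
    Solves ridge-regularized X ≈ XW (feature side) and X ≈ CX (instance
    side); feature j scores ||W[j]||, instance i scores ||C[i]||."""
    n, d = X.shape
    # closed-form ridge solutions of the two self-representation problems
    W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ X)  # (d, d)
    C = np.linalg.solve(X @ X.T + lam * np.eye(n), X @ X.T)  # (n, n)
    feat_scores = np.linalg.norm(W, axis=1)
    inst_scores = np.linalg.norm(C, axis=1)
    return feat_scores, inst_scores
</antml>```

A feature (or instance) that carries little reconstructive weight gets a small score and is a candidate for removal; structured-sparsity norms, as used in the co-selection literature, would replace the plain ridge penalty here.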
arXiv Detail & Related papers (2024-12-09T15:24:11Z)
- Unified View Imputation and Feature Selection Learning for Incomplete Multi-view Data [13.079847265195127]
Multi-view unsupervised feature selection (MUFS) is an effective technology for reducing dimensionality in machine learning.
Existing methods cannot directly deal with incomplete multi-view data where some samples are missing in certain views.
The proposed method, UNIFIER, explores the local structure of multi-view data by adaptively learning similarity-induced graphs from both the sample and feature spaces.
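The similarity-induced graph idea can be illustrated with a standard building block (a plain kNN graph with Gaussian-kernel weights, not UNIFIER's adaptive learning; the function name is hypothetical):

```python
import numpy as np

def knn_similarity_graph(X, k=3, sigma=1.0):
    """X: (n, d) data matrix. Returns a symmetric (n, n) similarity
    matrix with Gaussian weights on each sample's k nearest neighbors."""
    n = X.shape[0]
    # pairwise squared Euclidean distances
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    S = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]  # skip self at position 0
        S[i, nbrs] = np.exp(-d2[i, nbrs] / (2 * sigma ** 2))
    return (S + S.T) / 2  # symmetrize
```

Adaptive variants such as UNIFIER's learn the graph weights jointly with the selection objective instead of fixing a kernel bandwidth up front.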
arXiv Detail & Related papers (2024-01-19T08:26:44Z)
- Multi-View Class Incremental Learning [57.14644913531313]
Multi-view learning (MVL) has gained great success in integrating information from multiple perspectives of a dataset to improve downstream task performance.
This paper investigates a novel paradigm called multi-view class incremental learning (MVCIL), where a single model incrementally classifies new classes from a continual stream of views.
arXiv Detail & Related papers (2023-06-16T08:13:41Z)
- Deep Incomplete Multi-view Clustering with Cross-view Partial Sample and Prototype Alignment [50.82982601256481]
We propose a Cross-view Partial Sample and Prototype Alignment Network (CPSPAN) for Deep Incomplete Multi-view Clustering.
Unlike existing contrastive-based methods, we adopt pair-observed data alignment as 'proxy supervised signals' to guide instance-to-instance correspondence construction.
arXiv Detail & Related papers (2023-03-28T02:31:57Z)
- Cross-view Graph Contrastive Representation Learning on Partially Aligned Multi-view Data [52.491074276133325]
Multi-view representation learning has developed rapidly over the past decades and has been applied in many fields.
We propose a new cross-view graph contrastive learning framework, which integrates multi-view information to align data and learn latent representations.
Experiments conducted on several real datasets demonstrate the effectiveness of the proposed method on the clustering and classification tasks.
arXiv Detail & Related papers (2022-11-08T09:19:32Z)
- Auto-weighted Multi-view Feature Selection with Graph Optimization [90.26124046530319]
We propose a novel unsupervised multi-view feature selection model based on graph learning.
One key contribution: during the feature selection procedure, the consensus similarity graph shared by different views is learned.
Experiments on various datasets demonstrate the superiority of the proposed method compared with the state-of-the-art methods.
arXiv Detail & Related papers (2021-04-11T03:25:25Z)
- Generative Partial Multi-View Clustering [133.36721417531734]
We propose a generative partial multi-view clustering model, named GP-MVC, to address the incomplete multi-view problem.
First, multi-view encoder networks are trained to learn common low-dimensional representations, followed by a clustering layer to capture the consistent cluster structure across multiple views.
Second, view-specific generative adversarial networks are developed to generate the missing data of one view conditioning on the shared representation given by other views.
arXiv Detail & Related papers (2020-03-29T17:48:27Z)
- Improving Multi-Turn Response Selection Models with Complementary Last-Utterance Selection by Instance Weighting [84.9716460244444]
We consider utilizing the underlying correlation in the data resource itself to derive different kinds of supervision signals.
We conduct extensive experiments in two public datasets and obtain significant improvement in both datasets.
arXiv Detail & Related papers (2020-02-18T06:29:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.