One for all: A novel Dual-space Co-training baseline for Large-scale
Multi-View Clustering
- URL: http://arxiv.org/abs/2401.15691v1
- Date: Sun, 28 Jan 2024 16:30:13 GMT
- Title: One for all: A novel Dual-space Co-training baseline for Large-scale
Multi-View Clustering
- Authors: Zisen Kong, Zhiqiang Fu, Dongxia Chang, Yiming Wang, Yao Zhao
- Abstract summary: We propose a novel multi-view clustering model, named Dual-space Co-training Large-scale Multi-view Clustering (DSCMC).
The main objective of our approach is to enhance clustering performance by leveraging co-training in two distinct spaces.
Our algorithm has approximately linear computational complexity, which makes it practical for large-scale datasets.
- Score: 42.92751228313385
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a novel multi-view clustering model, named
Dual-space Co-training Large-scale Multi-view Clustering (DSCMC). The main
objective of our approach is to enhance the clustering performance by
leveraging co-training in two distinct spaces. In the original space, we learn
a projection matrix to obtain latent consistent anchor graphs from different
views. This process involves capturing the inherent relationships and
structures between data points within each view. Concurrently, we employ a
feature transformation matrix to map samples from various views to a shared
latent space. This transformation facilitates the alignment of information from
multiple views, enabling a comprehensive understanding of the underlying data
distribution. We jointly optimize the construction of the latent consistent
anchor graph and the feature transformation to generate a discriminative anchor
graph. This anchor graph effectively captures the essential characteristics of
the multi-view data and serves as a reliable basis for subsequent clustering
analysis. Moreover, an element-wise fusion method is proposed to mitigate the
impact of divergent information across views. Our algorithm has approximately
linear computational complexity, which makes it practical for large-scale
datasets. Through experimental validation, we demonstrate that our method
significantly reduces computational cost while yielding superior clustering
performance compared to existing approaches.
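To make the pipeline concrete, below is a minimal sketch of anchor-graph-based multi-view clustering in the spirit of DSCMC. It is not the authors' implementation: the inverse-distance anchor weights, the random (rather than learned) feature transformation `W`, and the exact fusion rule are illustrative assumptions.

```python
# Hypothetical sketch of dual-space, anchor-graph-based multi-view clustering.
import numpy as np
from sklearn.cluster import KMeans

def anchor_graph(X, anchors, k=5):
    """Row-stochastic anchor graph Z (n x m): each sample is a convex
    combination of its k nearest anchors, weighted by inverse distance."""
    d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)  # (n, m) sq. dists
    idx = np.argsort(d2, axis=1)[:, :k]                        # k nearest anchors
    rows = np.arange(X.shape[0])[:, None]
    w = 1.0 / (np.sqrt(d2[rows, idx]) + 1e-8)                  # inverse-distance weights
    Z = np.zeros_like(d2)
    Z[rows, idx] = w / w.sum(axis=1, keepdims=True)            # rows sum to 1
    return Z

def dual_space_clustering(views, n_clusters, n_anchors=100, latent_dim=32, seed=0):
    """views: list of (n, d_v) arrays over the same n samples."""
    rng = np.random.default_rng(seed)
    Z_fused = np.zeros((views[0].shape[0], n_anchors))
    for X in views:
        # original space: k-means anchors, then a per-view anchor graph
        A = KMeans(n_anchors, n_init=4, random_state=seed).fit(X).cluster_centers_
        Z_orig = anchor_graph(X, A)
        # latent space: a feature transformation (learned in the paper,
        # random here) maps the view and its anchors to a shared space
        W = rng.standard_normal((X.shape[1], latent_dim)) / np.sqrt(X.shape[1])
        Z_lat = anchor_graph(X @ W, A @ W)
        # element-wise combination: relations must be supported in both spaces
        Z_fused += Z_orig * Z_lat
    # spectral step on the fused n x m anchor graph, then k-means
    U, _, _ = np.linalg.svd(Z_fused, full_matrices=False)
    return KMeans(n_clusters, n_init=10, random_state=seed).fit_predict(U[:, :n_clusters])
```

Because every sample interacts only with m ≪ n anchors, building Z and taking its SVD costs O(n·m²) rather than O(n²), which is the kind of near-linear scaling the abstract claims.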
Related papers
- Fast and Scalable Semi-Supervised Learning for Multi-View Subspace Clustering [13.638434337947302]
FSSMSC addresses the high computational complexity common in existing approaches.
The method generates a consensus anchor graph across all views, representing each data point as a sparse linear combination of chosen landmarks.
The effectiveness and efficiency of FSSMSC are validated through extensive experiments on multiple benchmark datasets of varying scales.
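As a hedged illustration of the landmark idea mentioned above, the sketch below codes each point as a sparse nonnegative combination of its k nearest landmarks; the NNLS solver and the row normalization are assumptions, not FSSMSC's actual solver.

```python
# Hypothetical landmark coding: each point as a sparse nonnegative
# combination of its k nearest landmarks (not FSSMSC's actual solver).
import numpy as np
from scipy.optimize import nnls

def landmark_codes(X, landmarks, k=5):
    n, m = X.shape[0], landmarks.shape[0]
    Z = np.zeros((n, m))
    d2 = ((X[:, None, :] - landmarks[None, :, :]) ** 2).sum(-1)
    for i in range(n):
        idx = np.argsort(d2[i])[:k]              # support: k nearest landmarks
        w, _ = nnls(landmarks[idx].T, X[i])      # nonnegative least-squares fit
        s = w.sum()
        Z[i, idx] = w / s if s > 0 else 1.0 / k  # row-normalize to a convex combination
    return Z
```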
arXiv Detail & Related papers (2024-08-11T06:54:00Z)
- One-step Multi-view Clustering with Diverse Representation [47.41455937479201]
We propose a one-step multi-view clustering method with diverse representation, which incorporates multi-view learning and $k$-means into a unified framework.
We develop an efficient optimization algorithm with proven convergence to solve the resultant problem.
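The unified objective itself is not spelled out in this digest; as a hedged stand-in, the sketch below shows one common "one-step" formulation of multi-view k-means that alternates a shared assignment, per-view centroids, and per-view weights instead of clustering a fused embedding in a separate stage. The weight update with exponent 1/(gamma-1) is an assumption.

```python
# Hypothetical one-step multi-view k-means: alternate a shared assignment,
# per-view centroids, and per-view weights (gamma controls weight sharpness).
import numpy as np

def multiview_kmeans(views, k, iters=20, gamma=2.0, seed=0):
    rng = np.random.default_rng(seed)
    n = views[0].shape[0]
    labels = rng.integers(0, k, n)
    w = np.full(len(views), 1.0 / len(views))
    for _ in range(iters):
        # per-view centroids under the shared assignment (re-seed empty clusters)
        cents = [np.stack([X[labels == c].mean(0) if (labels == c).any()
                           else X[rng.integers(n)] for c in range(k)])
                 for X in views]
        # shared assignment minimizes the weighted sum of squared distances
        D = sum(wv * ((X[:, None, :] - C[None]) ** 2).sum(-1)
                for wv, X, C in zip(w, views, cents))
        labels = D.argmin(1)
        # views with lower within-cluster error receive higher weight
        errs = np.array([((X - C[labels]) ** 2).sum() for X, C in zip(views, cents)])
        w = (1.0 / (errs + 1e-12)) ** (1.0 / (gamma - 1.0))
        w /= w.sum()
    return labels, w
```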
arXiv Detail & Related papers (2023-06-08T02:52:24Z)
- Unified Multi-View Orthonormal Non-Negative Graph Based Clustering Framework [74.25493157757943]
We formulate a novel clustering model, which exploits the non-negative feature property and incorporates the multi-view information into a unified joint learning framework.
We also explore, for the first time, the multi-model non-negative graph-based approach to clustering data based on deep features.
arXiv Detail & Related papers (2022-11-03T08:18:27Z)
- Subspace-Contrastive Multi-View Clustering [0.0]
We propose a novel Subspace-Contrastive Multi-View Clustering (SCMC) approach.
We employ view-specific auto-encoders to map the original multi-view data into compact features perceiving its nonlinear structures.
To demonstrate the effectiveness of the proposed model, we conduct extensive comparative experiments on eight challenging datasets.
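The view-specific auto-encoder ingredient can be sketched as follows; the layer sizes, the plain reconstruction loss, and the `pretrain` routine are illustrative assumptions (SCMC additionally uses subspace and contrastive objectives not shown here).

```python
# Illustrative view-specific auto-encoder; SCMC's subspace and contrastive
# objectives are omitted, and all sizes below are assumptions.
import torch
import torch.nn as nn

class ViewAutoEncoder(nn.Module):
    def __init__(self, in_dim, code_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                 nn.Linear(128, code_dim))
        self.dec = nn.Sequential(nn.Linear(code_dim, 128), nn.ReLU(),
                                 nn.Linear(128, in_dim))

    def forward(self, x):
        z = self.enc(x)        # compact code capturing nonlinear structure
        return z, self.dec(z)  # code and reconstruction

def pretrain(views, epochs=50, lr=1e-3):
    """views: list of float tensors (n, d_v); one auto-encoder per view,
    trained on reconstruction loss alone."""
    models = [ViewAutoEncoder(X.shape[1]) for X in views]
    for X, model in zip(views, models):
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):
            _, recon = model(X)
            loss = nn.functional.mse_loss(recon, X)
            opt.zero_grad(); loss.backward(); opt.step()
    return models
```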
arXiv Detail & Related papers (2022-10-13T07:19:37Z)
- Adaptively-weighted Integral Space for Fast Multiview Clustering [54.177846260063966]
We propose an Adaptively-weighted Integral Space for Fast Multiview Clustering (AIMC) with nearly linear complexity.
Specifically, view generation models are designed to reconstruct the view observations from the latent integral space.
Experiments conducted on several real-world datasets confirm the superiority of the proposed AIMC method.
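One simple reading of "view generation models reconstructing the observations from a latent integral space" is an alternating least-squares factorization X_v ≈ H W_v with a shared H; the sketch below is that reading only, not AIMC's adaptively-weighted objective.

```python
# One reading of "view generation from a latent integral space":
# alternating least squares for X_v ~= H @ W_v with a shared H.
import numpy as np

def integral_space_als(views, dim=16, iters=30, seed=0):
    rng = np.random.default_rng(seed)
    H = rng.standard_normal((views[0].shape[0], dim))  # shared integral space
    for _ in range(iters):
        # update each view's generation model W_v with H fixed
        Ws = [np.linalg.lstsq(H, X, rcond=None)[0] for X in views]
        # update H with all W_v fixed: one stacked least-squares problem
        W = np.hstack(Ws)                  # (dim, sum of d_v)
        X = np.hstack(views)               # (n, sum of d_v)
        H = np.linalg.lstsq(W.T, X.T, rcond=None)[0].T
    return H, Ws
```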
arXiv Detail & Related papers (2022-08-25T05:47:39Z)
- Effective and Efficient Graph Learning for Multi-view Clustering [173.8313827799077]
We propose an effective and efficient graph learning model for multi-view clustering.
Our method exploits the similarity between graphs of different views by minimizing the tensor Schatten p-norm.
The proposed algorithm is time-efficient, obtains stable results, and scales well with data size.
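For reference, one common t-SVD-based definition of the tensor Schatten p-norm (to the p-th power) can be computed as below; normalization conventions vary across papers, so treat the 1/n3 factor as an assumption.

```python
# One common t-SVD-based definition of the tensor Schatten p-norm (p-th power):
# FFT along the third mode, then singular values of each frontal slice.
import numpy as np

def tensor_schatten_p(A, p=0.5):
    Ahat = np.fft.fft(A, axis=2)           # frontal slices in the Fourier domain
    total = 0.0
    for j in range(A.shape[2]):
        s = np.linalg.svd(Ahat[:, :, j], compute_uv=False)
        total += (s ** p).sum()            # sum of singular values to the p-th power
    return total / A.shape[2]              # 1/n3 normalization (convention-dependent)
```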
arXiv Detail & Related papers (2021-08-15T13:14:28Z)
- Unsupervised Multi-view Clustering by Squeezing Hybrid Knowledge from Cross View and Each View [68.88732535086338]
This paper proposes a new multi-view clustering method, low-rank subspace multi-view clustering based on adaptive graph regularization.
Experimental results for five widely used multi-view benchmarks show that our proposed algorithm surpasses other state-of-the-art methods by a clear margin.
arXiv Detail & Related papers (2020-08-23T08:25:06Z)
- Consistent and Complementary Graph Regularized Multi-view Subspace Clustering [31.187031653119025]
This study investigates the problem of multi-view clustering, where multiple views contain consistent information and each view also includes complementary information.
We propose a consistent and complementary graph-regularized multi-view subspace clustering (GRMSC) method.
The objective function is optimized by the augmented Lagrangian multiplier method in order to achieve multi-view clustering.
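The augmented Lagrangian multiplier (ALM/ADMM) pattern this summary refers to follows the generic loop below, shown here on a toy l1-regularized problem rather than GRMSC's actual objective:

```python
# Generic augmented-Lagrangian (scaled ADMM) loop on a toy problem:
# min_Z lam*||Z||_1 + 0.5*||J - C||_F^2  subject to  J = Z.
import numpy as np

def alm_l1(C, lam=0.1, rho=1.0, iters=100):
    Z = np.zeros_like(C); J = np.zeros_like(C); Y = np.zeros_like(C)
    for _ in range(iters):
        # J-subproblem is quadratic: (J - C) + rho*(J - Z + Y) = 0
        J = (C + rho * (Z - Y)) / (1.0 + rho)
        # Z-subproblem is the l1 proximal operator (soft-thresholding)
        V = J + Y
        Z = np.sign(V) * np.maximum(np.abs(V) - lam / rho, 0.0)
        # multiplier (dual) update
        Y = Y + J - Z
    return Z
```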
arXiv Detail & Related papers (2020-04-07T03:48:08Z)