Multi-View Spectral Clustering with High-Order Optimal Neighborhood
Laplacian Matrix
- URL: http://arxiv.org/abs/2008.13539v1
- Date: Mon, 31 Aug 2020 12:28:40 GMT
- Title: Multi-View Spectral Clustering with High-Order Optimal Neighborhood
Laplacian Matrix
- Authors: Weixuan Liang and Sihang Zhou and Jian Xiong and Xinwang Liu and Siwei
Wang and En Zhu and Zhiping Cai and Xin Xu
- Abstract summary: Multi-view spectral clustering can effectively reveal the intrinsic cluster structure among data.
This paper proposes a multi-view spectral clustering algorithm that learns a high-order optimal neighborhood Laplacian matrix.
Our proposed algorithm generates the optimal Laplacian matrix by searching the neighborhood of the linear combination of both the first-order and high-order base Laplacian matrices.
- Score: 57.11971786407279
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-view spectral clustering can effectively reveal the intrinsic cluster
structure among data by performing clustering on the learned optimal embedding
across views. Though demonstrating promising performance in various
applications, most existing methods linearly combine a group of pre-specified
first-order Laplacian matrices to construct the optimal Laplacian matrix, which
may result in limited representation capability and insufficient information
exploitation. Moreover, storing and performing complex operations on the
$n\times n$ Laplacian matrices incurs heavy storage and computation costs. To
address these issues, this paper first proposes a multi-view
spectral clustering algorithm that learns a high-order optimal neighborhood
Laplacian matrix, and then extends it to the late fusion version for accurate
and efficient multi-view clustering. Specifically, our proposed algorithm
generates the optimal Laplacian matrix by searching the neighborhood of the
linear combination of both the first-order and high-order base Laplacian
matrices simultaneously. In this way, the representational capacity of the
learned optimal Laplacian matrix is enhanced, which helps better exploit the
hidden high-order connection information among the data and leads to improved
clustering performance. We design an efficient algorithm with proven
convergence to solve the resultant optimization problem. Extensive experimental
results on nine datasets demonstrate the superiority of our algorithm over
state-of-the-art methods, verifying the effectiveness and advantages of the
proposed algorithm.
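As a rough illustration of the construction described above, the sketch below builds per-view normalized Laplacians, uses their matrix powers as high-order bases, linearly combines them with fixed weights, and runs standard spectral clustering on the result. It is a minimal sketch under simplifying assumptions, not the authors' method: the paper learns the combination weights and searches a neighborhood of the combined matrix jointly with the embedding (and further proposes a late fusion variant), whereas here the weights are hand-picked and the data are random toy matrices. All function names, parameters, and the toy data are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

def normalized_laplacian(W):
    """Symmetric normalized Laplacian L = I - D^{-1/2} W D^{-1/2}."""
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    return np.eye(W.shape[0]) - (W * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]

def combined_high_order_laplacian(similarities, weights, max_order=2):
    """Linearly combine first-order and higher-order (matrix-power) base Laplacians.

    weights[v][t] weights the (t+1)-th power of view v's Laplacian; they are fixed
    here, whereas the paper learns them jointly with the spectral embedding."""
    n = similarities[0].shape[0]
    L = np.zeros((n, n))
    for v, W in enumerate(similarities):
        Lv = normalized_laplacian(W)
        P = np.eye(n)
        for t in range(max_order):
            P = P @ Lv                     # P = Lv^(t+1)
            L += weights[v][t] * P
    return (L + L.T) / 2                   # keep the combination symmetric

def spectral_clustering_from_laplacian(L, k, seed=0):
    """Embed with the eigenvectors of the k smallest eigenvalues, then run k-means."""
    _, vecs = np.linalg.eigh(L)            # eigenvalues returned in ascending order
    U = vecs[:, :k]
    U = U / np.maximum(np.linalg.norm(U, axis=1, keepdims=True), 1e-12)
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(U)

# Toy usage with two random views (placeholder data, not a benchmark).
rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(60, 5)), rng.normal(size=(60, 8))
sims = [np.exp(-np.square(np.linalg.norm(X[:, None] - X[None, :], axis=2)))
        for X in (X1, X2)]
weights = [[0.5, 0.25], [0.5, 0.25]]        # hand-picked; the paper optimizes these
labels = spectral_clustering_from_laplacian(
    combined_high_order_laplacian(sims, weights, max_order=2), k=3)
print(labels[:10])
```

Using powers of the base Laplacians is only one simple way to expose high-order connectivity; substituting powers of the similarity matrices, or actually learning the weights and the neighborhood perturbation as in the paper, would change the result.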
Related papers
- Scalable Co-Clustering for Large-Scale Data through Dynamic Partitioning and Hierarchical Merging [7.106620444966807]
Co-clustering simultaneously clusters rows and columns, revealing more fine-grained groups.
Existing co-clustering methods suffer from poor scalability and cannot handle large-scale data.
This paper presents a novel and scalable co-clustering method designed to uncover intricate patterns in high-dimensional, large-scale datasets.
arXiv Detail & Related papers (2024-10-09T04:47:22Z) - One-Step Late Fusion Multi-view Clustering with Compressed Subspace [29.02032034647922]
We propose an integrated framework named One-Step Late Fusion Multi-view Clustering with Compressed Subspace (OS-LFMVC-CS)
We use the consensus subspace to align the partition matrix while optimizing the partition fusion, and utilize the fused partition matrix to guide the learning of discrete labels.
arXiv Detail & Related papers (2024-01-03T06:18:30Z) - An Efficient Algorithm for Clustered Multi-Task Compressive Sensing [60.70532293880842]
Clustered multi-task compressive sensing is a hierarchical model that solves multiple compressive sensing tasks.
The existing inference algorithm for this model is computationally expensive and does not scale well in high dimensions.
We propose a new algorithm that substantially accelerates model inference by avoiding the need to explicitly compute the covariance matrices.
arXiv Detail & Related papers (2023-09-30T15:57:14Z) - Late Fusion Multi-view Clustering via Global and Local Alignment
Maximization [61.89218392703043]
Multi-view clustering (MVC) optimally integrates complementary information from different views to improve clustering performance.
Most existing approaches directly fuse multiple pre-specified similarities to learn an optimal similarity matrix for clustering.
We propose late fusion MVC via alignment to address these issues.
arXiv Detail & Related papers (2022-08-02T01:49:31Z) - Ensemble Clustering via Co-association Matrix Self-enhancement [16.928049559092454]
Ensemble clustering integrates a set of base clustering results to generate a stronger one.
Existing methods usually rely on a co-association (CA) matrix that measures how many times two samples are grouped into the same cluster (a brief sketch of such a matrix appears after this list).
We propose a simple yet effective CA matrix self-enhancement framework that can improve the CA matrix to achieve better clustering performance.
arXiv Detail & Related papers (2022-05-12T07:54:32Z) - Multi-view Clustering via Deep Matrix Factorization and Partition
Alignment [43.56334737599984]
We propose a novel multi-view clustering algorithm via deep matrix factorization and partition alignment.
An alternating optimization algorithm is developed to solve the optimization problem with proven convergence.
arXiv Detail & Related papers (2021-05-01T15:06:57Z) - Sparse PCA via $l_{2,p}$-Norm Regularization for Unsupervised Feature
Selection [138.97647716793333]
We propose a simple and efficient unsupervised feature selection method, by combining reconstruction error with $l_{2,p}$-norm regularization.
We present an efficient optimization algorithm to solve the proposed unsupervised model, and analyse the convergence and computational complexity of the algorithm theoretically.
arXiv Detail & Related papers (2020-12-29T04:08:38Z) - Clustering Ensemble Meets Low-rank Tensor Approximation [50.21581880045667]
This paper explores the problem of clustering ensemble, which aims to combine multiple base clusterings to produce better performance than any individual one.
We propose a novel low-rank tensor approximation-based method to solve the problem from a global perspective.
Experimental results over 7 benchmark data sets show that the proposed model achieves a breakthrough in clustering performance, compared with 12 state-of-the-art methods.
arXiv Detail & Related papers (2020-12-16T13:01:37Z) - Clustering Binary Data by Application of Combinatorial Optimization
Heuristics [52.77024349608834]
We study clustering methods for binary data, first defining aggregation criteria that measure the compactness of clusters.
Five new and original methods are introduced, using neighborhoods and population behavior optimization metaheuristics.
On a set of 16 data tables generated by a quasi-Monte Carlo experiment, one of the aggregation criteria, under L1 dissimilarity, is compared against hierarchical clustering and a k-means variant, partitioning around medoids (PAM).
arXiv Detail & Related papers (2020-01-06T23:33:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.