Learning idempotent representation for subspace clustering
- URL: http://arxiv.org/abs/2207.14431v1
- Date: Fri, 29 Jul 2022 01:39:25 GMT
- Title: Learning idempotent representation for subspace clustering
- Authors: Lai Wei, Shiteng Liu, Rigui Zhou and Changming Zhu
- Abstract summary: An ideal reconstruction coefficient matrix should have two properties: 1) it is block diagonal with each block indicating a subspace; 2) each block is fully connected.
We devise an idempotent representation (IDR) algorithm to pursue reconstruction coefficient matrices approximating normalized membership matrices.
Experiments conducted on both synthetic and real-world datasets show that IDR is an effective and efficient subspace clustering algorithm.
- Score: 7.6275971668447
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The critical point for the successes of spectral-type subspace clustering
algorithms is to seek reconstruction coefficient matrices which can faithfully
reveal the subspace structures of data sets. An ideal reconstruction
coefficient matrix should have two properties: 1) it is block diagonal with
each block indicating a subspace; 2) each block is fully connected. Though
various spectral-type subspace clustering algorithms have been
proposed, some defects still exist in the reconstruction coefficient matrices
constructed by these algorithms. We find that a normalized membership matrix
naturally satisfies the above two conditions. Therefore, in this paper, we
devise an idempotent representation (IDR) algorithm to pursue reconstruction
coefficient matrices approximating normalized membership matrices. IDR designs
a new idempotent constraint for reconstruction coefficient matrices. By
combining it with doubly stochastic constraints, coefficient matrices close to
normalized membership matrices can be obtained directly. We present an
optimization algorithm for solving the IDR problem and analyze its
computational burden and convergence. Comparisons between IDR and related
algorithms show the superiority of IDR. Extensive experiments conducted on both
synthetic and real-world datasets prove that IDR is an effective and efficient
subspace clustering algorithm.
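To make the target concrete, here is a short check (in our notation, with the membership matrix M and cluster sizes n_j introduced for illustration, not taken verbatim from the paper) that a normalized membership matrix has both desired properties:

% M \in \{0,1\}^{n \times k}: membership matrix assigning n points to k
% subspaces, with n_j points in subspace j. The normalized membership matrix is
    Y = M (M^\top M)^{-1} M^\top ,
% whose (i,l) entry equals 1/n_j when points i and l share subspace j and 0
% otherwise, so Y is block diagonal with every block fully connected. Moreover,
    Y = Y^\top , \qquad Y \ge 0 , \qquad Y \mathbf{1} = \mathbf{1} ,
    Y^2 = M (M^\top M)^{-1} (M^\top M) (M^\top M)^{-1} M^\top = Y ,
% i.e. Y is symmetric, nonnegative, doubly stochastic, and idempotent, which is
% exactly the combination of constraints IDR places on the coefficient matrix.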
Related papers
- Fitting Multilevel Factor Models [41.38783926370621]
We develop a novel, fast implementation of the expectation-maximization algorithm, tailored for multilevel factor models.
We show that the inverse of an invertible PSD MLR matrix is also an MLR matrix with the same sparsity in factors.
We present an algorithm that computes the Cholesky factorization of an expanded matrix with linear time and space complexities.
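As a rough illustration of the matrix class involved (our reading of "MLR" as a sum of block-diagonal low-rank terms over a hierarchical partition; the paper's fast Cholesky is not reproduced here), a matvec sketch:

# Sketch of a matvec with such a matrix: each level is block diagonal with
# low-rank blocks F F^T, so y = A x costs O(n * rank) per level, not O(n^2).
import numpy as np

def mlr_matvec(levels, x):
    """levels: list of levels; each level is a list of (F, idx) pairs, where F
    is the tall-skinny factor of one block and idx indexes that block's rows."""
    y = np.zeros_like(x)
    for blocks in levels:
        for F, idx in blocks:
            y[idx] += F @ (F.T @ x[idx])
    return y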
arXiv Detail & Related papers (2024-09-18T15:39:12Z)
- An Efficient Algorithm for Clustered Multi-Task Compressive Sensing [60.70532293880842]
Clustered multi-task compressive sensing is a hierarchical model that solves multiple compressive sensing tasks.
The existing inference algorithm for this model is computationally expensive and does not scale well in high dimensions.
We propose a new algorithm that substantially accelerates model inference by avoiding the need to explicitly compute the covariance matrices involved.
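A minimal sketch of the covariance-free idea this summary alludes to (a generic conjugate-gradient construction, not the paper's algorithm; Phi and alpha are illustrative placeholders):

# Sketch: apply a posterior-style covariance Sigma = inv(Phi^T Phi + diag(alpha))
# to a vector by solving the precision system with conjugate gradients, so the
# d x d covariance matrix is never formed.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
m, d = 100, 500
Phi = rng.standard_normal((m, d))    # design matrix of one task
alpha = np.ones(d)                   # per-coefficient precisions

def precision_matvec(x):
    # (Phi^T Phi + diag(alpha)) @ x without forming the d x d matrix
    return Phi.T @ (Phi @ x) + alpha * x

P = LinearOperator((d, d), matvec=precision_matvec, dtype=np.float64)
b = rng.standard_normal(d)
x, info = cg(P, b)                   # x = Sigma @ b, covariance-free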
arXiv Detail & Related papers (2023-09-30T15:57:14Z)
- Mode-wise Principal Subspace Pursuit and Matrix Spiked Covariance Model [13.082805815235975]
We introduce a novel framework called Mode-wise Principal Subspace Pursuit (MOP-UP) to extract hidden variations in both the row and column dimensions for matrix data.
The effectiveness and practical merits of the proposed framework are demonstrated through experiments on both simulated and real datasets.
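A hedged illustration of mode-wise subspace extraction (a generic PCA-style estimator on the two second-moment matrices, not necessarily the MOP-UP procedure):

# Estimate the row and column subspaces of matrix-valued samples X_1..X_N from
# the averages of X X^T and X^T X.
import numpy as np

rng = np.random.default_rng(1)
p, q, N, r = 30, 20, 200, 3
U = np.linalg.qr(rng.standard_normal((p, r)))[0]   # ground-truth row subspace
V = np.linalg.qr(rng.standard_normal((q, r)))[0]   # ground-truth column subspace
samples = [U @ rng.standard_normal((r, r)) @ V.T + 0.1 * rng.standard_normal((p, q))
           for _ in range(N)]

S_row = sum(X @ X.T for X in samples) / N          # mode-1 second moment
S_col = sum(X.T @ X for X in samples) / N          # mode-2 second moment
U_hat = np.linalg.eigh(S_row)[1][:, -r:]           # top-r eigenvectors
V_hat = np.linalg.eigh(S_col)[1][:, -r:]
# cosine of the largest principal angle; near 1 indicates subspace recovery
print(np.linalg.svd(U_hat.T @ U, compute_uv=False).min(),
      np.linalg.svd(V_hat.T @ V, compute_uv=False).min())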
arXiv Detail & Related papers (2023-07-02T13:59:47Z)
- Recovering Simultaneously Structured Data via Non-Convex Iteratively Reweighted Least Squares [0.8702432681310401]
We propose a new algorithm for recovering data that adheres to multiple, heterogeneous low-dimensional structures from linear observations.
We show that the IRLS method is favorable in identifying simultaneously structured data from few measurements.
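For intuition, a toy instance of the IRLS template (the classic sparse-recovery case, not the paper's simultaneously structured variant):

# Toy IRLS for sparse recovery: each iteration solves a weighted least-squares
# problem whose weights come from the previous iterate (eps is annealed).
import numpy as np

rng = np.random.default_rng(2)
m, n, s = 40, 100, 5
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
b = A @ x_true

x = A.T @ np.linalg.solve(A @ A.T, b)   # least-norm start
eps = 1.0
for _ in range(30):
    w_inv = np.sqrt(x**2 + eps**2)      # inverse weights ~ |x_i| for the l1 objective
    AW = A * w_inv                      # A @ diag(w_inv)
    x = w_inv * (A.T @ np.linalg.solve(AW @ A.T, b))
    eps = max(eps / 10, 1e-8)
print(np.linalg.norm(x - x_true))       # small when recovery succeeds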
arXiv Detail & Related papers (2023-06-08T06:35:47Z)
- Semi-Supervised Subspace Clustering via Tensor Low-Rank Representation [64.49871502193477]
We propose a novel semi-supervised subspace clustering method, which is able to simultaneously augment the initial supervisory information and construct a discriminative affinity matrix.
Comprehensive experimental results on six commonly-used benchmark datasets demonstrate the superiority of our method over state-of-the-art methods.
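A generic sketch of how pairwise supervision can be folded into an affinity matrix (our simplification; the paper uses a tensor low-rank model rather than this direct rule):

# Force must-link entries up and cannot-link entries to zero in the affinity
# matrix before spectral clustering.
import numpy as np

def augment_affinity(W, must_link, cannot_link):
    """W: symmetric nonnegative affinity matrix; links: iterables of (i, j) pairs."""
    A = W.copy()
    hi = W.max()
    for i, j in must_link:
        A[i, j] = A[j, i] = hi        # treat linked points as strongest neighbors
    for i, j in cannot_link:
        A[i, j] = A[j, i] = 0.0       # sever affinity across known cluster boundaries
    return A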
arXiv Detail & Related papers (2022-05-21T01:47:17Z)
- High-Dimensional Sparse Bayesian Learning without Covariance Matrices [66.60078365202867]
We introduce a new inference scheme that avoids explicit construction of the covariance matrix.
Our approach couples a little-known diagonal estimation result from numerical linear algebra with the conjugate gradient algorithm.
On several simulations, our method scales better than existing approaches in computation time and memory.
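A minimal sketch combining the two named ingredients (a standard stochastic diagonal estimator with CG solves; the paper's exact scheme may differ):

# Estimate diag(inv(A)) from Rademacher probes z via the average of z * solve(A, z),
# with each solve done matrix-free by conjugate gradients: no inverse, no
# explicit covariance matrix.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(3)
n = 200
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)          # SPD stand-in for a precision matrix
Aop = LinearOperator((n, n), matvec=lambda v: A @ v, dtype=np.float64)

num = np.zeros(n)
probes = 64
for _ in range(probes):
    z = rng.choice([-1.0, 1.0], size=n)
    x, info = cg(Aop, z)             # x = inv(A) @ z
    num += z * x
diag_est = num / probes
print(np.max(np.abs(diag_est - np.diag(np.linalg.inv(A)))))  # estimation error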
arXiv Detail & Related papers (2022-02-25T16:35:26Z)
- Unfolding Projection-free SDP Relaxation of Binary Graph Classifier via GDPA Linearization [59.87663954467815]
Algorithm unfolding creates an interpretable and parsimonious neural network architecture by implementing each iteration of a model-based algorithm as a neural layer.
In this paper, leveraging a recent linear algebraic theorem called Gershgorin disc perfect alignment (GDPA), we unroll a projection-free algorithm for the semi-definite programming relaxation (SDR) of a binary graph classifier.
Experimental results show that our unrolled network outperformed pure model-based graph classifiers and achieved comparable performance to pure data-driven networks while using far fewer parameters.
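A compact sketch of algorithm unfolding in general (an unrolled ISTA with learnable per-layer parameters, not the paper's GDPA-based SDP solver):

# Each layer reproduces one ISTA iteration, with the step size and soft
# threshold left as learnable parameters.
import torch

class UnrolledISTA(torch.nn.Module):
    def __init__(self, A: torch.Tensor, n_layers: int = 10):
        super().__init__()
        self.A = A
        self.step = torch.nn.Parameter(torch.full((n_layers,), 0.1))
        self.thresh = torch.nn.Parameter(torch.full((n_layers,), 0.05))

    def forward(self, b: torch.Tensor) -> torch.Tensor:
        x = torch.zeros(self.A.shape[1])
        for t, lam in zip(self.step, self.thresh):
            x = x - t * self.A.T @ (self.A @ x - b)        # gradient step on ||Ax-b||^2/2
            x = torch.sign(x) * torch.relu(x.abs() - lam)  # soft-thresholding layer
        return x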
arXiv Detail & Related papers (2021-09-10T07:01:15Z)
- Sparse PCA via $l_{2,p}$-Norm Regularization for Unsupervised Feature Selection [138.97647716793333]
We propose a simple and efficient unsupervised feature selection method by combining reconstruction error with $l_{2,p}$-norm regularization.
We present an efficient optimization algorithm to solve the proposed unsupervised model, and analyse the convergence and computational complexity of the algorithm theoretically.
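For reference, the regularizer and one plausible form of the objective (our paraphrase; the paper's exact formulation may differ):

% For W \in \mathbb{R}^{d \times k} with rows w^i, the regularizer is
    \|W\|_{2,p} = \Big( \sum_{i=1}^{d} \|w^i\|_2^{\,p} \Big)^{1/p}, \qquad 0 < p \le 1,
% which drives whole rows of W to zero, so the surviving rows index the
% selected features. One reconstruction-error objective of this type is
    \min_{W} \; \|X - X W W^\top\|_F^2 + \lambda \|W\|_{2,p}^p ,
% coupling feature selection with sparse-PCA-style reconstruction.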
arXiv Detail & Related papers (2020-12-29T04:08:38Z)
- Clustering of Nonnegative Data and an Application to Matrix Completion [0.0]
We analyze the performance of our clustering algorithm for nonnegative data in relation to a certain measure of correlation between the underlying subspaces.
We use our clustering algorithm to develop a matrix completion algorithm which can outperform standard matrix completion algorithms on data matrices satisfying certain natural conditions.
arXiv Detail & Related papers (2020-09-02T18:24:47Z)
- Multi-View Spectral Clustering with High-Order Optimal Neighborhood Laplacian Matrix [57.11971786407279]
Multi-view spectral clustering can effectively reveal the intrinsic cluster structure among data.
This paper proposes a multi-view spectral clustering algorithm that learns a high-order optimal neighborhood Laplacian matrix.
Our proposed algorithm generates the optimal Laplacian matrix by searching the neighborhood of a linear combination of first-order and high-order base Laplacians.
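A simplified reading of the construction (fixed weights here; the paper learns the weights and searches a neighborhood of the combination):

# Mix each view's first-order Laplacian L_v with a second-order base L_v @ L_v,
# then feed the combined matrix to spectral clustering.
import numpy as np

def combined_laplacian(laplacians, alpha, beta):
    """laplacians: per-view normalized Laplacians; alpha, beta: per-view weights."""
    L = np.zeros_like(laplacians[0])
    for a, b, Lv in zip(alpha, beta, laplacians):
        L += a * Lv + b * (Lv @ Lv)   # first-order plus high-order base
    return L / (sum(alpha) + sum(beta))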
arXiv Detail & Related papers (2020-08-31T12:28:40Z)