Convex Subspace Clustering by Adaptive Block Diagonal Representation
- URL: http://arxiv.org/abs/2009.09386v3
- Date: Mon, 9 May 2022 03:56:37 GMT
- Title: Convex Subspace Clustering by Adaptive Block Diagonal Representation
- Authors: Yunxia Lin, Songcan Chen
- Abstract summary: Subspace clustering is a class of extensively studied clustering methods.
Its key first step is to learn a representation coefficient matrix with block diagonal structure.
We propose Adaptive Block Diagonal Representation (ABDR), which explicitly pursues block diagonality without sacrificing the convexity of indirect methods.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Subspace clustering is a class of extensively studied clustering methods,
of which the spectral-type approaches form an important subclass. Its key first
step is to learn a representation coefficient matrix with block diagonal
structure. To realize this step, many methods have been successively proposed
by imposing different structure priors on the coefficient matrix. These
impositions can be roughly divided into two categories, i.e., indirect and
direct. The former introduces priors such as sparsity and low-rankness to
indirectly or implicitly learn the block diagonal structure; however, the
desired block diagonality cannot be guaranteed for noisy data. The latter
directly or explicitly imposes a block diagonal structure prior, such as block
diagonal representation (BDR), to ensure the desired block diagonality even
when the data is noisy, but at the expense of losing the convexity that the
former's objective possesses. To compensate for their respective shortcomings,
in this paper we follow the direct line and propose Adaptive Block Diagonal
Representation (ABDR), which explicitly pursues block diagonality without
sacrificing the convexity of the indirect approaches. Specifically, inspired by
Convex BiClustering, ABDR coercively fuses both columns and rows of the
coefficient matrix via a specially designed convex regularizer, thus naturally
enjoying the merits of both categories and adaptively obtaining the number of
blocks. Finally, experimental results on synthetic and real benchmarks
demonstrate the superiority of ABDR over state-of-the-art methods (SOTAs).
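The abstract does not spell out the regularizer, but the convex-biclustering
inspiration suggests an objective pairing a self-expressive fitting term with
pairwise fusion penalties on the columns and rows of the coefficient matrix.
The Python sketch below illustrates that assumed form only; the weights,
penalty shape, and lack of constraints are placeholders, not the paper's
actual formulation.

    import numpy as np

    def abdr_objective(X, Z, lam, w_col, w_row):
        """Sketch of a convex-biclustering-style objective in the spirit
        of ABDR: self-expressive fit plus convex fusion penalties that
        pull columns and rows of Z together, encouraging block structure.
        This is an assumed form, not the paper's exact objective."""
        n = Z.shape[0]
        fit = 0.5 * np.linalg.norm(X - X @ Z, "fro") ** 2
        col_fuse = sum(w_col[i, j] * np.linalg.norm(Z[:, i] - Z[:, j])
                       for i in range(n) for j in range(i + 1, n))
        row_fuse = sum(w_row[i, j] * np.linalg.norm(Z[i, :] - Z[j, :])
                       for i in range(n) for j in range(i + 1, n))
        return fit + lam * (col_fuse + row_fuse)

Because every term above is convex in Z, any such objective stays convex,
which is the property the abstract emphasizes over non-convex direct methods.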
Related papers
- Synergistic eigenanalysis of covariance and Hessian matrices for enhanced binary classification [72.77513633290056]
We present a novel approach that combines the eigenanalysis of a covariance matrix evaluated on a training set with that of a Hessian matrix evaluated on a deep learning model.
Our method captures intricate patterns and relationships, enhancing classification performance.
arXiv Detail & Related papers (2024-02-14T16:10:42Z) - Fast and Robust Sparsity-Aware Block Diagonal Representation [13.167450470598045]
The block diagonal structure of an affinity matrix represents clusters of feature vectors by non-zero coefficients that are concentrated in blocks.
We propose a Fast and Robust Sparsity-Aware Block Diagonal Representation (FRS-BDR) method, which jointly estimates cluster memberships and the number of blocks.
Experiments on a variety of real-world applications demonstrate the robustness of FRS-BDR in terms of clustering accuracy against corrupted features, as well as its computation time and cluster enumeration performance.
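The estimator itself is not given in the summary, but the block diagonal
property FRS-BDR exploits for cluster enumeration is easy to illustrate: for a
block diagonal affinity matrix, the multiplicity of the graph Laplacian's zero
eigenvalue equals the number of blocks. A minimal check in Python:

    import numpy as np

    # Toy block diagonal affinity: two clusters of sizes 3 and 2.
    A = np.zeros((5, 5))
    A[:3, :3] = 1.0
    A[3:, 3:] = 1.0

    # The zero eigenvalue of the graph Laplacian has multiplicity
    # equal to the number of connected components, i.e., blocks.
    L = np.diag(A.sum(axis=1)) - A
    eigvals = np.linalg.eigvalsh(L)
    print(np.sum(np.isclose(eigvals, 0.0)))  # -> 2 blocks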
arXiv Detail & Related papers (2023-12-02T13:44:27Z) - Learning idempotent representation for subspace clustering [7.6275971668447]
An ideal reconstruction coefficient matrix should have two properties: 1) it is block diagonal with each block indicating a subspace; 2) each block is fully connected.
We devise an idempotent representation (IDR) algorithm to pursue reconstruction coefficient matrices approximating normalized membership matrices.
Experiments conducted on both synthetic and real world datasets prove that IDR is an effective and efficient subspace clustering algorithm.
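A normalized membership matrix, the stated target that IDR approximates, is
both block diagonal and idempotent; a quick numerical check on an assumed
two-cluster toy example:

    import numpy as np

    # Normalized membership matrix for clusters {0,1,2} and {3,4}:
    # U[i, c] = 1/sqrt(n_c) if sample i is in cluster c, else 0.
    U = np.zeros((5, 2))
    U[:3, 0] = 1.0 / np.sqrt(3)
    U[3:, 1] = 1.0 / np.sqrt(2)

    M = U @ U.T  # block diagonal, with each block fully connected
    print(np.allclose(M @ M, M))  # True: M is idempotent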
arXiv Detail & Related papers (2022-07-29T01:39:25Z) - Generalized Leverage Scores: Geometric Interpretation and Applications [15.86621510551207]
We extend the definition of leverage scores to relate the columns of a matrix to arbitrary subsets of singular vectors.
We employ this result to design approximation algorithms with provable guarantees for two well-known problems.
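For context, the classical rank-k leverage scores that this definition
generalizes are the squared row norms of the top-k right singular vectors; a
minimal sketch (the paper's extension to arbitrary subsets of singular vectors
is not reproduced here):

    import numpy as np

    def leverage_scores(A, k):
        """Classical rank-k leverage scores: squared norms of the rows
        of the top-k right singular vectors, one score per column of A."""
        _, _, Vt = np.linalg.svd(A, full_matrices=False)
        return np.sum(Vt[:k, :] ** 2, axis=0)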
arXiv Detail & Related papers (2022-06-16T10:14:08Z) - Semi-Supervised Subspace Clustering via Tensor Low-Rank Representation [64.49871502193477]
We propose a novel semi-supervised subspace clustering method, which is able to simultaneously augment the initial supervisory information and construct a discriminative affinity matrix.
Comprehensive experimental results on six commonly-used benchmark datasets demonstrate the superiority of our method over state-of-the-art methods.
arXiv Detail & Related papers (2022-05-21T01:47:17Z) - High-Dimensional Sparse Bayesian Learning without Covariance Matrices [66.60078365202867]
We introduce a new inference scheme that avoids explicit construction of the covariance matrix.
Our approach couples a little-known diagonal estimation result from numerical linear algebra with the conjugate gradient algorithm.
On several simulations, our method scales better than existing approaches in computation time and memory.
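The summary points to a stochastic diagonal estimator driven purely by
matrix-vector products; whether this matches the paper's exact estimator is an
assumption, but the well-known Bekas-style probe estimator conveys the idea of
never forming the matrix explicitly:

    import numpy as np

    def estimate_diagonal(matvec, n, num_probes=100, rng=None):
        """Estimate diag(A) from matvecs alone, so A is never built.
        Rademacher probes v satisfy E[v * (A @ v)] = diag(A)."""
        rng = np.random.default_rng(rng)
        num = np.zeros(n)
        den = np.zeros(n)
        for _ in range(num_probes):
            v = rng.choice([-1.0, 1.0], size=n)  # Rademacher probe
            num += v * matvec(v)
            den += v * v
        return num / den

    # Example: recover a diagonal through matvecs only.
    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    print(estimate_diagonal(lambda v: A @ v, 2, num_probes=2000))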
arXiv Detail & Related papers (2022-02-25T16:35:26Z) - Orthogonalizing Convolutional Layers with the Cayley Transform [83.73855414030646]
We propose and evaluate an alternative approach to parameterize convolutional layers that are constrained to be orthogonal.
We show that our method indeed preserves orthogonality to a high degree even for large convolutions.
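The core Cayley transform maps a skew-symmetric matrix A to an orthogonal
matrix (I - A)(I + A)^{-1}; the paper applies this to convolutions in the
Fourier domain, while the dense sketch below only demonstrates the transform
itself:

    import numpy as np

    def cayley_orthogonal(W):
        """Map the skew-symmetric part of W to an orthogonal matrix via
        the Cayley transform Q = (I - A)(I + A)^{-1}."""
        A = 0.5 * (W - W.T)  # skew-symmetric parameterization
        I = np.eye(W.shape[0])
        return (I - A) @ np.linalg.inv(I + A)

    Q = cayley_orthogonal(np.random.randn(4, 4))
    print(np.allclose(Q.T @ Q, np.eye(4)))  # True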
arXiv Detail & Related papers (2021-04-14T23:54:55Z) - A Two Stage Generalized Block Orthogonal Matching Pursuit (TSGBOMP)
Algorithm [0.3867363075280543]
Recovery of an unknown sparse signal from a few of its projections is the key objective of compressed sensing.
Existing block sparse recovery algorithms like BOMP make the assumption of uniform block size and known block boundaries.
This paper proposes a two step procedure, where the first stage is a coarse block location identification stage.
The second stage carries out finer localization of a non-zero cluster within the window selected in the first stage.
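TSGBOMP's two-stage windowed selection is not detailed in the summary; as a
reference point, plain orthogonal matching pursuit, the greedy baseline that
BOMP-style block variants extend with block support selection, looks as
follows:

    import numpy as np

    def omp(Phi, y, sparsity):
        """Plain orthogonal matching pursuit: greedily pick the column
        most correlated with the residual, then refit on the support."""
        residual, support = y.copy(), []
        for _ in range(sparsity):
            support.append(int(np.argmax(np.abs(Phi.T @ residual))))
            x_s, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
            residual = y - Phi[:, support] @ x_s
        x = np.zeros(Phi.shape[1])
        x[support] = x_s
        return x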
arXiv Detail & Related papers (2020-08-18T17:00:55Z) - Fused-Lasso Regularized Cholesky Factors of Large Nonstationary
Covariance Matrices of Longitudinal Data [0.0]
The smoothness of the subdiagonals of the Cholesky factor of large covariance matrices is closely related to the degree of nonstationarity of autoregressive models for time series and longitudinal data.
We propose an algorithm for sparse estimation of the Cholesky factor which decouples row-wise.
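As a hedged illustration of the penalty implied by the title, a fused-lasso
term on the Cholesky factor could combine entrywise sparsity with smoothness
along each subdiagonal; the exact weighting in the paper may differ:

    import numpy as np

    def fused_lasso_penalty(T, lam1, lam2):
        """Assumed penalty shape: lam1 * |off-diagonal entries| for
        sparsity, plus lam2 * |differences along each subdiagonal| for
        smoothness of the Cholesky factor T."""
        p = T.shape[0]
        pen = lam1 * np.sum(np.abs(np.tril(T, -1)))
        for k in range(1, p):
            sub = np.diagonal(T, -k)
            pen += lam2 * np.sum(np.abs(np.diff(sub)))
        return pen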
arXiv Detail & Related papers (2020-07-22T02:38:16Z) - Selective Inference for Latent Block Models [50.83356836818667]
This study provides a selective inference method for latent block models.
We construct a statistical test on a set of row and column cluster memberships of a latent block model.
The proposed exact and approximate tests work effectively, compared to the naive test that does not take the selective bias into account.
arXiv Detail & Related papers (2020-05-27T10:44:19Z) - Holistically-Attracted Wireframe Parsing [123.58263152571952]
This paper presents a fast and parsimonious parsing method to detect a vectorized wireframe in an input image with a single forward pass.
The proposed method is end-to-end trainable, consisting of three components: (i) line segment and junction proposal generation, (ii) line segment and junction matching, and (iii) line segment and junction verification.
arXiv Detail & Related papers (2020-03-03T17:43:57Z)