Doubly Stochastic Subspace Clustering
- URL: http://arxiv.org/abs/2011.14859v2
- Date: Mon, 19 Apr 2021 23:50:41 GMT
- Title: Doubly Stochastic Subspace Clustering
- Authors: Derek Lim, René Vidal, Benjamin D. Haeffele
- Abstract summary: Many state-of-the-art subspace clustering methods follow a two-step process by first constructing an affinity matrix between data points and then applying spectral clustering to this affinity.
In this work, we learn both a self-expressive representation of the data and an affinity matrix that is well-normalized for spectral clustering.
Experiments show that our method achieves state-of-the-art subspace clustering performance on many common datasets in computer vision.
- Score: 9.815735805354905
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many state-of-the-art subspace clustering methods follow a two-step process
by first constructing an affinity matrix between data points and then applying
spectral clustering to this affinity. Most of the research into these methods
focuses on the first step of generating the affinity, which often exploits the
self-expressive property of linear subspaces, with little consideration
typically given to the spectral clustering step that produces the final
clustering. Moreover, existing methods often obtain the final affinity that is
used in the spectral clustering step by applying ad-hoc or arbitrarily chosen
postprocessing steps to the affinity generated by a self-expressive clustering
formulation, which can have a significant impact on the overall clustering
performance. In this work, we unify these two steps by learning both a
self-expressive representation of the data and an affinity matrix that is
well-normalized for spectral clustering. In our proposed models, we constrain
the affinity matrix to be doubly stochastic, which results in a principled
method for affinity matrix normalization while also exploiting known benefits
of doubly stochastic normalization in spectral clustering. We develop a general
framework and derive two models: one that jointly learns the self-expressive
representation along with the doubly stochastic affinity, and one that
sequentially solves for one then the other. Furthermore, we leverage sparsity
in the problem to develop a fast active-set method for the sequential solver
that enables efficient computation on large datasets. Experiments show that our
method achieves state-of-the-art subspace clustering performance on many common
datasets in computer vision.
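The core constraint described in the abstract, forcing the affinity matrix to be doubly stochastic (all row and column sums equal to 1), is classically achieved by Sinkhorn-style alternating normalization. The sketch below is a minimal illustration of that normalization on a toy affinity matrix; it is not the paper's joint or sequential optimization, and the function name and toy matrix are invented for illustration:

```python
def sinkhorn_normalize(A, n_iters=500):
    """Alternately rescale rows and columns of a nonnegative matrix so that
    its row and column sums both approach 1 (a doubly stochastic matrix)."""
    M = [row[:] for row in A]
    n = len(M)
    for _ in range(n_iters):
        for i in range(n):                      # normalize each row to sum 1
            s = sum(M[i])
            M[i] = [v / s for v in M[i]]
        for j in range(n):                      # normalize each column to sum 1
            s = sum(M[i][j] for i in range(n))
            for i in range(n):
                M[i][j] /= s
    return M

# Toy symmetric affinity between 4 points forming two clusters
A = [[0.0, 0.9, 0.1, 0.1],
     [0.9, 0.0, 0.1, 0.1],
     [0.1, 0.1, 0.0, 0.9],
     [0.1, 0.1, 0.9, 0.0]]
S = sinkhorn_normalize(A)
print(all(abs(sum(row) - 1) < 1e-6 for row in S))                        # True
print(all(abs(sum(S[i][j] for i in range(4)) - 1) < 1e-6 for j in range(4)))  # True
```

The resulting matrix can then be passed directly to spectral clustering, since a doubly stochastic affinity yields a well-normalized graph Laplacian without the ad-hoc postprocessing the abstract criticizes.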
Related papers
- Uniform tensor clustering by jointly exploring sample affinities of various orders [37.11798745294855]
We propose a unified tensor clustering method (UTC) that characterizes sample proximity using multiple samples' affinity.
UTC is affirmed to enhance clustering by exploiting different order affinities when processing high-dimensional data.
arXiv Detail & Related papers (2023-02-03T06:43:08Z)
- Unified Multi-View Orthonormal Non-Negative Graph Based Clustering Framework [74.25493157757943]
We formulate a novel clustering model, which exploits the non-negative feature property and incorporates the multi-view information into a unified joint learning framework.
We also explore, for the first time, the multi-model non-negative graph-based approach to clustering data based on deep features.
arXiv Detail & Related papers (2022-11-03T08:18:27Z)
- Semi-Supervised Clustering via Dynamic Graph Structure Learning [12.687613487964088]
Most existing semi-supervised graph-based clustering methods exploit the supervisory information by refining the affinity matrix or constraining the low-dimensional representations of data points.
We propose a novel dynamic graph learning method for semi-supervised graph clustering.
arXiv Detail & Related papers (2022-09-06T14:05:31Z)
- Semi-Supervised Subspace Clustering via Tensor Low-Rank Representation [64.49871502193477]
We propose a novel semi-supervised subspace clustering method, which is able to simultaneously augment the initial supervisory information and construct a discriminative affinity matrix.
Comprehensive experimental results on six commonly-used benchmark datasets demonstrate the superiority of our method over state-of-the-art methods.
arXiv Detail & Related papers (2022-05-21T01:47:17Z)
- Spatially Coherent Clustering Based on Orthogonal Nonnegative Matrix Factorization [0.0]
We introduce in this work clustering models based on a total variation (TV) regularization procedure on the cluster membership matrix.
We provide a numerical evaluation of all proposed methods on a hyperspectral dataset obtained from a matrix-assisted laser desorption/ionisation imaging measurement.
arXiv Detail & Related papers (2021-04-25T23:40:41Z)
- Clustering Ensemble Meets Low-rank Tensor Approximation [50.21581880045667]
This paper explores the problem of clustering ensemble, which aims to combine multiple base clusterings to produce better performance than any individual base clustering.
We propose a novel low-rank tensor approximation-based method to solve the problem from a global perspective.
Experimental results over 7 benchmark data sets show that the proposed model achieves a breakthrough in clustering performance, compared with 12 state-of-the-art methods.
arXiv Detail & Related papers (2020-12-16T13:01:37Z)
- Multi-View Spectral Clustering with High-Order Optimal Neighborhood Laplacian Matrix [57.11971786407279]
Multi-view spectral clustering can effectively reveal the intrinsic cluster structure among data.
This paper proposes a multi-view spectral clustering algorithm that learns a high-order optimal neighborhood Laplacian matrix.
Our proposed algorithm generates the optimal Laplacian matrix by searching the neighborhood of the linear combination of both the first-order and high-order base.
arXiv Detail & Related papers (2020-08-31T12:28:40Z)
- Conjoined Dirichlet Process [63.89763375457853]
We develop a novel, non-parametric probabilistic biclustering method based on Dirichlet processes to identify biclusters with strong co-occurrence in both rows and columns.
We apply our method to two different applications, text mining and gene expression analysis, and demonstrate that our method improves bicluster extraction in many settings compared to existing approaches.
arXiv Detail & Related papers (2020-02-08T19:41:23Z)
- Clustering Binary Data by Application of Combinatorial Optimization Heuristics [52.77024349608834]
We study clustering methods for binary data, first defining aggregation criteria that measure the compactness of clusters.
Five new and original methods are introduced, using neighborhoods and population behavior optimization metaheuristics.
On a set of 16 data tables generated by a quasi-Monte Carlo experiment, one of the aggregation criteria (using L1 dissimilarity) is compared against hierarchical clustering and a k-means variant, partitioning around medoids (PAM).
arXiv Detail & Related papers (2020-01-06T23:33:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.