Semi-Supervised Clustering via Dynamic Graph Structure Learning
- URL: http://arxiv.org/abs/2209.02513v1
- Date: Tue, 6 Sep 2022 14:05:31 GMT
- Title: Semi-Supervised Clustering via Dynamic Graph Structure Learning
- Authors: Huaming Ling, Chenglong Bao, Xin Liang, and Zuoqiang Shi
- Abstract summary: Most existing semi-supervised graph-based clustering methods exploit the supervisory information by refining the affinity matrix or constraining the low-dimensional representations of data points.
We propose a novel dynamic graph learning method for semi-supervised graph clustering.
- Score: 12.687613487964088
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most existing semi-supervised graph-based clustering methods exploit the
supervisory information by either refining the affinity matrix or directly
constraining the low-dimensional representations of data points. The affinity
matrix represents the graph structure and is vital to the performance of
semi-supervised graph-based clustering. However, existing methods adopt a
static affinity matrix to learn the low-dimensional representations of data
points and do not optimize the affinity matrix during the learning process. In
this paper, we propose a novel dynamic graph structure learning method for
semi-supervised clustering. In this method, we simultaneously optimize the
affinity matrix and the low-dimensional representations of data points by
leveraging the given pairwise constraints. Moreover, we propose an alternating
minimization approach with proven convergence to solve the proposed nonconvex
model. During the iteration process, our method cyclically updates the
low-dimensional representations of data points and refines the affinity matrix,
leading to a dynamic affinity matrix (graph structure). Specifically, for the
update of the affinity matrix, we enforce the data points with remarkably
different low-dimensional representations to have an affinity value of 0.
Furthermore, we construct the initial affinity matrix by integrating the local
distance and global self-representation among data points. Experimental results
on eight benchmark datasets under different settings show the advantages of the
proposed approach.
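To make the alternating scheme described in the abstract concrete, below is a minimal, illustrative Python sketch, not the authors' implementation: it alternates between a spectral-embedding update of the low-dimensional representations and an affinity update that zeroes the affinities of points with markedly different embeddings and imposes the given must-link / cannot-link constraints. The initial affinity here uses only a local k-NN Gaussian kernel; the paper's global self-representation term is omitted, and all names and thresholds (e.g. `refine_affinity`, `tau`) are assumptions.
```python
# Illustrative sketch (not the authors' code) of dynamic graph structure
# learning for semi-supervised clustering: alternate between embeddings F
# and an affinity matrix W, while enforcing pairwise constraints.
import numpy as np
from sklearn.cluster import KMeans  # used only for the final discretisation


def initial_affinity(X, k=10):
    """Local-distance (k-NN Gaussian) affinity; the paper additionally mixes
    in a global self-representation term, which is omitted here."""
    d2 = np.square(X[:, None, :] - X[None, :, :]).sum(-1)
    W = np.exp(-d2 / (np.median(d2) + 1e-12))
    # keep only each point's k nearest neighbours, then symmetrise
    far = np.argsort(d2, axis=1)[:, k + 1:]
    np.put_along_axis(W, far, 0.0, axis=1)
    np.fill_diagonal(W, 0.0)
    return (W + W.T) / 2


def spectral_embedding(W, dim):
    """Update of the low-dimensional representations F: bottom eigenvectors
    of the normalised graph Laplacian of the current affinity matrix."""
    d = W.sum(1) + 1e-12
    L = np.eye(len(W)) - (W / np.sqrt(d)[:, None]) / np.sqrt(d)[None, :]
    _, vecs = np.linalg.eigh(L)
    return vecs[:, :dim]


def refine_affinity(W, F, must_link, cannot_link, tau=0.95):
    """Update of W: set the affinity of points with remarkably different
    embeddings to 0, then impose the pairwise constraints."""
    d2 = np.square(F[:, None, :] - F[None, :, :]).sum(-1)
    W = W.copy()
    W[d2 > np.quantile(d2, tau)] = 0.0  # "remarkably different" -> affinity 0
    for i, j in must_link:
        W[i, j] = W[j, i] = max(W[i, j], W.max())
    for i, j in cannot_link:
        W[i, j] = W[j, i] = 0.0
    return (W + W.T) / 2


def dynamic_graph_clustering(X, n_clusters, must_link=(), cannot_link=(), iters=10):
    W = initial_affinity(X)
    for _ in range(iters):  # alternating minimisation
        F = spectral_embedding(W, n_clusters)
        W = refine_affinity(W, F, must_link, cannot_link)
    return KMeans(n_clusters, n_init=10).fit_predict(F)
```
In this sketch the graph structure is dynamic in the sense the abstract describes: each pass re-estimates the embeddings from the current affinity matrix and then prunes and reweights that matrix using the new embeddings and the supervisory constraints.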
Related papers
- Accelerated structured matrix factorization [0.0]
Matrix factorization exploits the idea that, in complex high-dimensional data, the actual signal typically lies in lower-dimensional structures.
By exploiting Bayesian shrinkage priors, we devise a computationally convenient approach for high-dimensional matrix factorization.
The dependence between row and column entities is modeled by inducing flexible sparse patterns within factors.
arXiv Detail & Related papers (2022-12-13T11:35:01Z) - Non-Negative Matrix Factorization with Scale Data Structure Preservation [23.31865419578237]
The model described in this paper belongs to the family of non-negative matrix factorization methods designed for data representation and dimension reduction.
The idea is to add, to the NMF cost function, a penalty term to impose a scale relationship between the pairwise similarity matrices of the original and transformed data points.
The proposed clustering algorithm is compared to existing NMF-based algorithms and to manifold-learning-based algorithms on several real-life datasets.
arXiv Detail & Related papers (2022-09-22T09:32:18Z) - Graph Polynomial Convolution Models for Node Classification of
Non-Homophilous Graphs [52.52570805621925]
We investigate efficient learning from higher-order graph convolutions and learning directly from the adjacency matrix for node classification.
We show that the resulting model leads to new graphs and a residual scaling parameter.
We demonstrate that the proposed methods obtain improved accuracy for node classification of non-homophilous graphs.
arXiv Detail & Related papers (2022-09-12T04:46:55Z) - Semi-Supervised Subspace Clustering via Tensor Low-Rank Representation [64.49871502193477]
We propose a novel semi-supervised subspace clustering method, which is able to simultaneously augment the initial supervisory information and construct a discriminative affinity matrix.
Comprehensive experimental results on six commonly-used benchmark datasets demonstrate the superiority of our method over state-of-the-art methods.
arXiv Detail & Related papers (2022-05-21T01:47:17Z) - Effective and Efficient Graph Learning for Multi-view Clustering [173.8313827799077]
We propose an effective and efficient graph learning model for multi-view clustering.
Our method exploits the similarity between the graphs of different views by minimizing the tensor Schatten p-norm.
Our proposed algorithm is time-economical, obtains stable results, and scales well with the data size.
arXiv Detail & Related papers (2021-08-15T13:14:28Z) - Graph Constrained Data Representation Learning for Human Motion
Segmentation [14.611777974037194]
We propose a novel unsupervised model that learns a representation of the data and extracts clustering information from the data itself.
Experimental results on four benchmark datasets for HMS demonstrate that our approach achieves significantly better clustering performance than state-of-the-art methods.
arXiv Detail & Related papers (2021-07-28T13:49:16Z) - Clustering Ensemble Meets Low-rank Tensor Approximation [50.21581880045667]
This paper explores the clustering ensemble problem, which aims to combine multiple base clusterings to produce better performance than that of any individual one.
We propose a novel low-rank tensor approximation-based method to solve the problem from a global perspective.
Experimental results over 7 benchmark data sets show that the proposed model achieves a breakthrough in clustering performance, compared with 12 state-of-the-art methods.
arXiv Detail & Related papers (2020-12-16T13:01:37Z) - Doubly Stochastic Subspace Clustering [9.815735805354905]
Many state-of-the-art subspace clustering methods follow a two-step process by first constructing an affinity matrix between data points and then applying spectral clustering to this affinity.
In this work, we learn both a self-expressive representation of the data and an affinity matrix that is well-normalized for spectral clustering.
Experiments show that our method achieves state-of-the-art subspace clustering performance on many common datasets in computer vision.
arXiv Detail & Related papers (2020-11-30T14:56:54Z) - Multi-View Spectral Clustering with High-Order Optimal Neighborhood
Laplacian Matrix [57.11971786407279]
Multi-view spectral clustering can effectively reveal the intrinsic cluster structure among data.
This paper proposes a multi-view spectral clustering algorithm that learns a high-order optimal neighborhood Laplacian matrix.
Our proposed algorithm generates the optimal Laplacian matrix by searching the neighborhood of the linear combination of both the first-order and high-order base Laplacian matrices.
arXiv Detail & Related papers (2020-08-31T12:28:40Z) - Structured Graph Learning for Clustering and Semi-supervised
Classification [74.35376212789132]
We propose a graph learning framework to preserve both the local and global structure of data.
Our method uses the self-expressiveness of samples to capture the global structure and an adaptive neighbor approach to respect the local structure.
Our model is equivalent to a combination of kernel k-means and k-means methods under certain conditions.
arXiv Detail & Related papers (2020-08-31T08:41:20Z)