Higher-Order Spectral Clustering for Geometric Graphs
- URL: http://arxiv.org/abs/2009.11353v2
- Date: Mon, 15 Mar 2021 16:57:43 GMT
- Title: Higher-Order Spectral Clustering for Geometric Graphs
- Authors: Konstantin Avrachenkov, Andrei Bobu, Maximilien Dreveton
- Abstract summary: We present an effective generalization, which we call higher-order spectral clustering.
It resembles in concept the classical spectral clustering method but uses for partitioning the eigenvector associated with a higher-order eigenvalue.
We show that our method is effective in numerical experiments even for graphs of modest size.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The present paper is devoted to clustering geometric graphs. While the
standard spectral clustering is often not effective for geometric graphs, we
present an effective generalization, which we call higher-order spectral
clustering. It resembles in concept the classical spectral clustering method
but uses for partitioning the eigenvector associated with a higher-order
eigenvalue. We establish the weak consistency of this algorithm for a wide
class of geometric graphs which we call Soft Geometric Block Model. A small
adjustment of the algorithm provides strong consistency. We also show that our
method is effective in numerical experiments even for graphs of modest size.
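The idea above can be sketched in a few lines: compute the graph Laplacian, take the eigenvector for a higher-order (rather than the second-smallest) eigenvalue, and partition nodes by its sign pattern. This is an illustrative sketch only, not the paper's exact algorithm (which targets the Soft Geometric Block Model and includes a consistency-preserving adjustment); the function name and the simple sign-based split are assumptions for demonstration.

```python
import numpy as np

def higher_order_spectral_partition(A, order=1):
    """Bipartition a graph by the sign pattern of the Laplacian eigenvector
    for the (order + 1)-th smallest eigenvalue.  order=1 gives the classical
    Fiedler-vector cut; larger values follow the higher-order idea described
    in the abstract.  Illustrative sketch, not the paper's algorithm."""
    L = np.diag(A.sum(axis=1)) - A      # combinatorial Laplacian D - A
    _, eigvecs = np.linalg.eigh(L)      # eigenvalues in ascending order
    v = eigvecs[:, order]               # index 0 is the trivial constant vector
    return (v >= 0).astype(int)         # partition nodes by sign

# Toy check: two 4-node cliques joined by a single bridge edge; the
# classical (order=1) cut recovers the two cliques.
B = np.kron(np.eye(2, dtype=int), np.ones((4, 4), dtype=int))
A = B - np.diag(np.diag(B))             # two cliques, no self-loops
A[3, 4] = A[4, 3] = 1                   # bridge edge between them
labels = higher_order_spectral_partition(A, order=1)
```

For geometric graphs the paper's point is precisely that `order=1` (the Fiedler cut) can fail, and an eigenvector further up the spectrum recovers the communities; which order to use depends on the model parameters.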
Related papers
- HeNCler: Node Clustering in Heterophilous Graphs through Learned Asymmetric Similarity [55.27586970082595]
HeNCler is a novel approach for Heterophilous Node Clustering.
We show that HeNCler significantly enhances performance in node clustering tasks within heterophilous graph contexts.
arXiv Detail & Related papers (2024-05-27T11:04:05Z)
- LSEnet: Lorentz Structural Entropy Neural Network for Deep Graph Clustering [59.89626219328127]
Graph clustering is a fundamental problem in machine learning.
Deep learning methods have achieved state-of-the-art results in recent years, but they still cannot work without predefined cluster numbers.
We propose to address this problem from a fresh perspective of graph information theory.
arXiv Detail & Related papers (2024-05-20T05:46:41Z)
- MeanCut: A Greedy-Optimized Graph Clustering via Path-based Similarity and Degree Descent Criterion [0.6906005491572401]
Spectral clustering is popular and attractive due to its remarkable performance, easy implementation, and strong adaptability.
We propose MeanCut as the objective function and greedily optimize it in degree descending order for a nondestructive graph partition.
The validity of our algorithm is demonstrated by testing on real-world benchmarks and a face recognition application.
arXiv Detail & Related papers (2023-12-07T06:19:39Z)
- Latent Random Steps as Relaxations of Max-Cut, Min-Cut, and More [30.919536115917726]
We present a probabilistic model based on non-negative matrix factorization which unifies clustering and simplification.
By relaxing hard clustering to soft clustering, our algorithm turns potentially hard clustering problems into tractable ones.
arXiv Detail & Related papers (2023-08-12T02:47:57Z)
- Spectral Clustering on Large Datasets: When Does it Work? Theory from Continuous Clustering and Density Cheeger-Buser [0.17188280334580194]
In modern machine learning, many data sets are modeled as a large number of points drawn from a probability density function.
Our work suggests that Shi-Malik spectral clustering works well on data drawn from mixtures of Laplace distributions.
Our key tool is a new Cheeger-Buser inequality for all probability densities, including discontinuous ones.
arXiv Detail & Related papers (2023-05-11T03:08:49Z)
- Contrastive Graph Clustering in Curvature Spaces [74.03252813800334]
We present a novel end-to-end contrastive graph clustering model named CONGREGATE.
To support geometric clustering, we construct a theoretically grounded Heterogeneous Curvature Space.
We then train the graph clusters by an augmentation-free reweighted contrastive approach.
arXiv Detail & Related papers (2023-05-05T14:04:52Z)
- A parameter-free graph reduction for spectral clustering and SpectralNet [1.5469452301122175]
We introduce a graph reduction method that does not require any parameters.
First, distances from every point $p$ to its neighbors are filtered using an adaptive threshold to only keep neighbors with similar surrounding density.
The edges that survive both steps form the reduced graph, which is then passed to spectral clustering and SpectralNet.
arXiv Detail & Related papers (2023-02-25T21:19:30Z)
- Geometry Contrastive Learning on Heterogeneous Graphs [50.58523799455101]
This paper proposes a novel self-supervised learning method, termed Geometry Contrastive Learning (GCL).
GCL views a heterogeneous graph from Euclidean and hyperbolic perspectives simultaneously, aiming to combine the ability to model rich semantics and complex structures.
Extensive experiments on four benchmark data sets show that the proposed approach outperforms strong baselines.
arXiv Detail & Related papers (2022-06-25T03:54:53Z)
- Template-Based Graph Clustering [5.4352987210173955]
We propose a novel graph clustering method guided by additional information on the underlying structure of the clusters (or communities).
With relevant priors that encode the density of the clusters and their relationships, our method outperforms classical methods, especially for challenging cases.
arXiv Detail & Related papers (2021-07-05T13:13:34Z)
- Multilayer Graph Clustering with Optimized Node Embedding [70.1053472751897]
Multilayer graph clustering aims at dividing the graph nodes into categories or communities.
We propose a clustering-friendly embedding of the layers of a given multilayer graph.
Experiments show that our method leads to a significant improvement.
arXiv Detail & Related papers (2021-03-30T17:36:40Z)
- Structured Graph Learning for Clustering and Semi-supervised Classification [74.35376212789132]
We propose a graph learning framework to preserve both the local and global structure of data.
Our method uses the self-expressiveness of samples to capture the global structure and an adaptive-neighbor approach to respect the local structure.
Our model is equivalent to a combination of kernel k-means and k-means methods under certain conditions.
arXiv Detail & Related papers (2020-08-31T08:41:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.