Information-Theoretic Limits and Strong Consistency on Binary Non-uniform Hypergraph Stochastic Block Models
- URL: http://arxiv.org/abs/2306.06845v2
- Date: Thu, 19 Dec 2024 08:23:57 GMT
- Title: Information-Theoretic Limits and Strong Consistency on Binary Non-uniform Hypergraph Stochastic Block Models
- Authors: Hai-Xiao Wang
- Abstract summary: We study the unsupervised classification problem in random hypergraphs under the non-uniform Hypergraph Stochastic Block Model (HSBM).
In this paper, the information-theoretic limits on the clustering accuracy and the strong consistency threshold are established.
- Abstract: Consider the unsupervised classification problem in random hypergraphs under the non-uniform Hypergraph Stochastic Block Model (HSBM) with two equal-sized communities, where each edge appears independently with some probability depending only on the labels of its vertices. In this paper, the information-theoretic limits on the clustering accuracy and the strong consistency threshold are established, expressed in terms of the generalized Hellinger distance. Below the threshold, it is impossible to assign all vertices to their own communities, and the lower bound of the expected mismatch ratio is derived. On the other hand, the problem space is (sometimes) divided into two disjoint subspaces when above the threshold. When only the contracted adjacency matrix is given, with high probability, one-stage spectral algorithms succeed in assigning every vertex correctly in the subspace far away from the threshold but fail in the other one. Two subsequent refinement algorithms are proposed to improve the clustering accuracy, which attain the lowest possible mismatch ratio, previously derived from the information-theoretical perspective. The failure of spectral algorithms in the second subspace arises from the loss of information induced by tensor contraction. The origin of this loss and possible solutions to minimize the impact are presented. Moreover, different from uniform hypergraphs, strong consistency is achievable by aggregating information from all uniform layers, even if it is impossible when each layer is considered alone.
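To give a sense of how the generalized Hellinger distance enters, here is an illustrative schematic of such a threshold, not the paper's exact statement. Assume the $m$-uniform layer places in- and cross-community hyperedges with probabilities $\alpha_m \log n / \binom{n-1}{m-1}$ and $\beta_m \log n / \binom{n-1}{m-1}$; a strong-consistency condition of Hellinger type then aggregates over layers:

```latex
% Illustrative schematic only: the layer weights c_m and the logarithmic
% parameterization above are assumptions, not the paper's exact statement.
\[
  \sum_{m \ge 2} c_m \left( \sqrt{\alpha_m} - \sqrt{\beta_m} \right)^2 > 1
\]
% For a single graph layer (m = 2, c_2 = 1/2) this recovers the classical
% SBM exact-recovery threshold \tfrac{1}{2}(\sqrt{a} - \sqrt{b})^2 > 1.
```

Because the condition sums over layers, the aggregate can exceed 1 even when every individual summand is below 1, which matches the abstract's closing observation that strong consistency can be achieved by combining all uniform layers even when each layer fails alone.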
Related papers
- One-step Bipartite Graph Cut: A Normalized Formulation and Its Application to Scalable Subspace Clustering
We show how to enforce a one-step normalized cut for bipartite graphs, especially with linear-time complexity.
In this paper, we first characterize a novel one-step bipartite graph cut criterion with normalized constraints, and theoretically prove its equivalence to a trace problem.
We extend this cut criterion to a scalable subspace clustering approach, where adaptive anchor learning, bipartite graph learning, and one-step normalized bipartite graph partitioning are simultaneously modeled.
arXiv Detail & Related papers (2023-05-12T11:27:20Z) - Optimal and exact recovery on general non-uniform Hypergraph Stochastic Block Model
We consider the community detection problem in random hypergraphs under the non-uniform hypergraph stochastic block model (HSBM).
We establish, for the first time in the literature, a sharp threshold for exact recovery under this non-uniform case, subject to minor constraints.
We provide two efficient algorithms which successfully achieve exact recovery when above the threshold, and attain the lowest possible ratio when the exact recovery is impossible.
arXiv Detail & Related papers (2023-04-25T20:30:33Z) - Semi-Supervised Clustering of Sparse Graphs: Crossing the Information-Theoretic Threshold
The stochastic block model is a canonical random graph model for clustering and community detection on network-structured data.
No estimator based on the network topology can perform substantially better than chance on sparse graphs if the model parameter is below a certain threshold.
We prove that, with an arbitrary fraction of revealed labels, recovery is feasible throughout the parameter domain.
arXiv Detail & Related papers (2022-05-24T00:03:25Z) - Semi-Supervised Subspace Clustering via Tensor Low-Rank Representation
We propose a novel semi-supervised subspace clustering method, which is able to simultaneously augment the initial supervisory information and construct a discriminative affinity matrix.
Comprehensive experimental results on six commonly-used benchmark datasets demonstrate the superiority of our method over state-of-the-art methods.
arXiv Detail & Related papers (2022-05-21T01:47:17Z) - Geometric Graph Representation Learning via Maximizing Rate Reduction
Learning node representations benefits various downstream tasks in graph analysis such as community detection and node classification.
We propose Geometric Graph Representation Learning (G2R) to learn node representations in an unsupervised manner.
G2R maps nodes in distinct groups into different subspaces, while each subspace is compact and different subspaces are dispersed.
arXiv Detail & Related papers (2022-02-13T07:46:24Z) - Partial recovery and weak consistency in the non-uniform hypergraph Stochastic Block Model
We consider the community detection problem in random hypergraphs under the non-uniform hypergraph stochastic block model (HSBM).
We provide a spectral algorithm that outputs a partition with at least a $\gamma$ fraction of the vertices classified correctly, where $\gamma$ depends on the signal-to-noise ratio (SNR) of the model.
The theoretical analysis of our algorithm relies on the concentration and regularization of the adjacency matrix for non-uniform random hypergraphs, which can be of independent interest.
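As a toy illustration of this style of spectral method, the sketch below samples a planted 3-uniform hypergraph, contracts it to a weighted adjacency matrix, and bipartitions by the sign of the second eigenvector. The sampling model and all parameters (`n`, `p_in`, `p_out`) are illustrative assumptions, not the paper's algorithm or regime:

```python
import numpy as np
from itertools import combinations

# Hypothetical planted model: two equal communities, 3-uniform hyperedges.
rng = np.random.default_rng(0)
n = 60
labels = np.array([0] * (n // 2) + [1] * (n // 2))
p_in, p_out = 0.15, 0.02  # monochromatic vs. mixed hyperedge probabilities

# Sample hyperedges independently, as in the HSBM setup.
edges = []
for e in combinations(range(n), 3):
    same = labels[e[0]] == labels[e[1]] == labels[e[2]]
    if rng.random() < (p_in if same else p_out):
        edges.append(e)

# Tensor contraction: each hyperedge adds weight to all its vertex pairs,
# yielding the contracted (weighted) adjacency matrix.
A = np.zeros((n, n))
for e in edges:
    for i, j in combinations(e, 2):
        A[i, j] += 1
        A[j, i] += 1

# Spectral step: sign of the eigenvector for the 2nd-largest eigenvalue.
vals, vecs = np.linalg.eigh(A)  # eigenvalues in ascending order
v2 = vecs[:, -2]
guess = (v2 > 0).astype(int)

# Mismatch ratio, minimized over the global label flip.
err = min(np.mean(guess != labels), np.mean(guess == labels))
print(f"mismatch ratio: {err:.3f}")
```

At these (strong-signal) parameters the sign partition recovers the communities almost exactly; weakening `p_in - p_out` pushes the mismatch ratio up, which is the regime the paper's $\gamma$-guarantee quantifies via the SNR.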
arXiv Detail & Related papers (2021-12-22T05:38:33Z) - Exact Recovery in the General Hypergraph Stochastic Block Model
This paper investigates the fundamental limits of exact recovery in the general d-uniform hypergraph stochastic block model (d-HSBM).
We show that there exists a sharp threshold such that exact recovery is achievable above the threshold and impossible below it.
arXiv Detail & Related papers (2021-05-11T03:39:08Z) - Spectral clustering under degree heterogeneity: a case for the random walk Laplacian
This paper shows that graph spectral embedding using the random walk Laplacian produces vector representations which are completely corrected for node degree.
In the special case of a degree-corrected block model, the embedding concentrates about K distinct points, representing communities.
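A minimal sketch of that embedding on a hypothetical degree-corrected two-block model: the random-walk transition matrix $P = D^{-1}A$ is eigendecomposed and the second eigenvector splits the communities despite heterogeneous node degrees. All parameters (`theta`, the block matrix `B`) are illustrative assumptions:

```python
import numpy as np

# Hypothetical degree-corrected two-block model (not the paper's setup).
rng = np.random.default_rng(1)
n = 80
labels = np.array([0] * (n // 2) + [1] * (n // 2))
theta = rng.uniform(0.5, 1.5, size=n)      # per-node degree weights
B = np.array([[0.5, 0.1], [0.1, 0.5]])     # block connection rates

# Edge probabilities p_ij = theta_i * theta_j * B[z_i, z_j], clipped to [0, 1].
P_edge = np.clip(theta[:, None] * theta[None, :] * B[labels][:, labels], 0, 1)
A = (rng.random((n, n)) < P_edge).astype(float)
A = np.triu(A, 1)
A = A + A.T                                # simple undirected graph

deg = A.sum(axis=1)                        # dense enough that deg > 0 w.h.p.
P = A / deg[:, None]                       # random-walk matrix D^{-1} A

# Eigenvalues of D^{-1} A are real (similar to a symmetric matrix);
# the eigenvector for the 2nd-largest one carries the block structure.
vals, vecs = np.linalg.eig(P)
order = np.argsort(-vals.real)
v2 = vecs[:, order[1]].real
guess = (v2 > np.median(v2)).astype(int)

err = min(np.mean(guess != labels), np.mean(guess == labels))
print(f"mismatch ratio: {err:.3f}")
```

The point of the construction: because each row of $A$ is rescaled by its own degree, the embedding coordinates depend on the block label rather than on `theta`, which is the degree-correction property the paper establishes.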
arXiv Detail & Related papers (2021-05-03T16:36:27Z) - Agnostic Learning of Halfspaces with Gradient Descent via Soft Margins
We show that gradient descent finds halfspaces with classification error $\tilde{O}(\mathsf{OPT}^{1/2}) + \varepsilon$ in $\mathrm{poly}(d, 1/\varepsilon)$ time and sample complexity.
arXiv Detail & Related papers (2020-10-01T16:48:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.