AH-UGC: Adaptive and Heterogeneous-Universal Graph Coarsening
- URL: http://arxiv.org/abs/2505.15842v1
- Date: Sun, 18 May 2025 09:57:33 GMT
- Title: AH-UGC: Adaptive and Heterogeneous-Universal Graph Coarsening
- Authors: Mohit Kataria, Shreyash Bhilwade, Sandeep Kumar, Jayadeva
- Abstract summary: We introduce a novel framework that combines Locality Sensitive Hashing (LSH) with Consistent Hashing to enable $\textit{adaptive graph coarsening}$. Our approach is the first unified framework to support both adaptive and heterogeneous coarsening.
- Score: 4.831609704970507
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: $\textbf{Graph Coarsening (GC)}$ is a prominent graph reduction technique that compresses large graphs to enable efficient learning and inference. However, existing GC methods generate only one coarsened graph per run and must recompute from scratch for each new coarsening ratio, resulting in unnecessary overhead. Moreover, most prior approaches are tailored to $\textit{homogeneous}$ graphs and fail to accommodate the semantic constraints of $\textit{heterogeneous}$ graphs, which comprise multiple node and edge types. To overcome these limitations, we introduce a novel framework that combines Locality Sensitive Hashing (LSH) with Consistent Hashing to enable $\textit{adaptive graph coarsening}$. Leveraging hashing techniques, our method is inherently fast and scalable. For heterogeneous graphs, we propose a $\textit{type isolated coarsening}$ strategy that ensures semantic consistency by restricting merges to nodes of the same type. Our approach is the first unified framework to support both adaptive and heterogeneous coarsening. Extensive evaluations on 23 real-world datasets including homophilic, heterophilic, homogeneous, and heterogeneous graphs demonstrate that our method achieves superior scalability while preserving the structural and semantic integrity of the original graph.
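The abstract describes the idea but gives no implementation details, so the following is a minimal, hedged sketch of how LSH keys combined with a consistent-hashing-style ring could serve multiple coarsening ratios from a single hash pass, with type-isolated merging for heterogeneous graphs. The function names (`lsh_project`, `coarsen`), the SimHash-style projection, and the equal-arc ring bucketing are illustrative assumptions, not the paper's actual method.

```python
# Illustrative sketch only (not the authors' code). Nodes are hashed once;
# any coarsening ratio is then obtained by re-cutting the same hash ring.
import numpy as np

def lsh_project(features: np.ndarray, n_bits: int = 16, seed: int = 0) -> np.ndarray:
    """Signed random projections (SimHash-style LSH): nodes with similar
    features tend to agree on hash bits and thus land near each other."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((features.shape[1], n_bits))
    bits = (features @ planes) > 0            # (n_nodes, n_bits) sign pattern
    weights = 2 ** np.arange(n_bits)          # read the bit pattern as an integer key
    return (bits * weights).sum(axis=1)

def coarsen(hash_keys: np.ndarray, ratio: float, node_types=None) -> np.ndarray:
    """Map each node to a super-node id for a given coarsening ratio.
    Nodes keep fixed positions on a hash ring; changing `ratio` only re-cuts
    the ring into a different number of arcs, so no hashing is recomputed."""
    n = len(hash_keys)
    n_super = max(1, int(np.ceil(ratio * n)))               # target super-node count
    ring_pos = hash_keys / (hash_keys.max() + 1.0)          # position in [0, 1)
    buckets = np.minimum((ring_pos * n_super).astype(int), n_super - 1)
    if node_types is not None:
        # type-isolated coarsening: only same-type nodes may share a super-node
        types, type_ids = np.unique(node_types, return_inverse=True)
        buckets = buckets * len(types) + type_ids
    _, super_ids = np.unique(buckets, return_inverse=True)  # consecutive ids
    return super_ids

# Hash once, then read off any number of coarsening ratios cheaply.
X = np.random.rand(1000, 32)                  # toy node features
keys = lsh_project(X)
for r in (0.5, 0.3, 0.1):
    print(r, coarsen(keys, r).max() + 1)      # super-nodes per ratio
```

Because the ring positions are fixed, producing a coarser or finer graph only changes how the ring is partitioned, which mirrors the adaptivity claim in the abstract (no per-ratio recomputation); the type-offsetting of buckets is one simple way to realize the type-isolated constraint.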
Related papers
- Training-free Heterogeneous Graph Condensation via Data Selection [74.06562124781104]
We present the first Training-Free Heterogeneous Graph Condensation method, termed FreeHGC, facilitating both efficient and high-quality generation of heterogeneous condensed graphs. Specifically, we reformulate the heterogeneous graph condensation problem as a data selection issue, offering a new perspective for assessing and condensing representative nodes and edges in the heterogeneous graphs.
arXiv Detail & Related papers (2024-12-20T02:49:32Z) - Graph Sparsification via Mixture of Graphs [67.40204130771967]
We introduce Mixture-of-Graphs (MoG) to dynamically select tailored pruning solutions for each node.
MoG incorporates multiple sparsifier experts, each characterized by unique sparsity levels and pruning criteria, and selects the appropriate experts for each node.
Experiments on four large-scale OGB datasets and two superpixel datasets, equipped with five GNNs, demonstrate that MoG identifies subgraphs at higher sparsity levels.
arXiv Detail & Related papers (2024-05-23T07:40:21Z) - HeteroMILE: a Multi-Level Graph Representation Learning Framework for Heterogeneous Graphs [13.01983932286923]
We propose a Multi-Level Embedding framework for nodes on a heterogeneous graph (HeteroMILE).
HeteroMILE repeatedly coarsens the large graph into a smaller one while preserving its backbone structure before embedding it.
It then refines the coarsened embedding to the original graph using a heterogeneous graph convolution neural network.
arXiv Detail & Related papers (2024-03-31T22:22:10Z) - From Cluster Assumption to Graph Convolution: Graph-based Semi-Supervised Learning Revisited [51.24526202984846]
Graph-based semi-supervised learning (GSSL) has long been a hot research topic. Graph convolutional networks (GCNs) have become the predominant techniques for their promising performance. We propose three simple but powerful graph convolution methods.
arXiv Detail & Related papers (2023-09-24T10:10:21Z) - Fused Gromov-Wasserstein Graph Mixup for Graph-level Classifications [44.15102300886821]
We propose a novel graph mixup algorithm called FGWMixup, which seeks a midpoint of source graphs in the Fused Gromov-Wasserstein metric space.
Experiments conducted on five datasets demonstrate that FGWMixup effectively improves the generalizability and robustness of GNNs.
arXiv Detail & Related papers (2023-06-28T07:00:12Z) - Beyond Homophily: Reconstructing Structure for Graph-agnostic Clustering [15.764819403555512]
A graph cannot be identified as homophilic or heterophilic in advance, which makes it hard to choose a suitable GNN model.
We propose a novel graph clustering method, which contains three key components: graph reconstruction, a mixed filter, and dual graph clustering network.
Our method dominates others on heterophilic graphs.
arXiv Detail & Related papers (2023-05-03T01:49:01Z) - AnchorGAE: General Data Clustering via $O(n)$ Bipartite Graph Convolution [79.44066256794187]
We show how to convert a non-graph dataset into a graph by introducing the generative graph model, which is used to build graph convolution networks (GCNs).
A bipartite graph constructed by anchors is updated dynamically to exploit the high-level information behind data.
We theoretically prove that the simple update will lead to degeneration and a specific strategy is accordingly designed.
arXiv Detail & Related papers (2021-11-12T07:08:13Z) - Simple Truncated SVD based Model for Node Classification on Heterophilic Graphs [0.5309004257911242]
Graph Neural Networks (GNNs) have shown excellent performance on graphs that exhibit strong homophily.
Recent approaches have typically modified aggregation schemes, designed adaptive graph filters, etc. to address this limitation.
We propose a simple alternative method that exploits Truncated Singular Value Decomposition (TSVD) of topological structure and node features.
arXiv Detail & Related papers (2021-06-24T07:48:18Z) - A Robust and Generalized Framework for Adversarial Graph Embedding [73.37228022428663]
We propose a robust framework for adversarial graph embedding, named AGE.
AGE generates fake neighbor nodes as enhanced negative samples from the implicit distribution.
Based on this framework, we propose three models to handle three types of graph data.
arXiv Detail & Related papers (2021-05-22T07:05:48Z) - Structured Graph Learning for Clustering and Semi-supervised Classification [74.35376212789132]
We propose a graph learning framework to preserve both the local and global structure of data.
Our method uses the self-expressiveness of samples to capture the global structure and an adaptive neighbor approach to respect the local structure.
Our model is equivalent to a combination of kernel k-means and k-means methods under certain conditions.
arXiv Detail & Related papers (2020-08-31T08:41:20Z) - Scalable Deep Generative Modeling for Sparse Graphs [105.60961114312686]
Existing deep neural methods require $\Omega(n^2)$ complexity by building up the adjacency matrix.
We develop a novel autoregressive model, named BiGG, that utilizes this sparsity to avoid generating the full adjacency matrix.
During training this autoregressive model can be parallelized with $O(\log n)$ synchronization stages.
arXiv Detail & Related papers (2020-06-28T04:37:57Z)