GraDE: A Graph Diffusion Estimator for Frequent Subgraph Discovery in Neural Architectures
- URL: http://arxiv.org/abs/2602.03257v1
- Date: Tue, 03 Feb 2026 08:41:10 GMT
- Title: GraDE: A Graph Diffusion Estimator for Frequent Subgraph Discovery in Neural Architectures
- Authors: Yikang Yang, Zhengxin Yang, Minghao Luo, Luzhou Peng, Hongxiao Li, Wanling Gao, Lei Wang, Jianfeng Zhan
- Abstract summary: GraDE is a diffusion-guided search framework that ensures both computational feasibility and discovery capability. GraDE is the first to introduce graph diffusion models to identify frequent subgraphs by scoring their typicality within the learned distribution.
- Score: 5.269719189733366
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Finding frequently occurring subgraph patterns, or network motifs, in neural architectures is crucial for optimizing efficiency, accelerating design, and uncovering structural insights. However, as subgraph size increases, enumeration-based methods remain perfectly accurate but become computationally prohibitive, while sampling-based methods stay computationally tractable but suffer a severe decline in discovery capability. To address these challenges, this paper proposes GraDE, a diffusion-guided search framework that ensures both computational feasibility and discovery capability. The key innovation is the Graph Diffusion Estimator (GraDE), the first to introduce graph diffusion models to identify frequent subgraphs by scoring their typicality within the learned distribution. Comprehensive experiments demonstrate that the estimator achieves superior ranking accuracy, with up to 114\% improvement over sampling-based baselines. Building on this estimator, the proposed framework successfully discovers large-scale frequent patterns, achieving up to 30$\times$ higher median frequency than sampling-based methods.
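The paper's actual model and search procedure are not given in this abstract; as a rough illustration of estimator-guided search, the minimal Python sketch below grows candidate subgraphs by beam search and ranks them with a stand-in scoring function. All names (`typicality_score`, `guided_search`) and the density heuristic are illustrative assumptions, not the authors' API.

```python
def typicality_score(subgraph_edges):
    """Stand-in for the learned diffusion estimator: a toy heuristic
    that favors denser candidates. The real GraDE model instead scores
    how typical a subgraph is under a learned graph distribution."""
    nodes = {u for e in subgraph_edges for u in e}
    return len(subgraph_edges) / max(len(nodes), 1)

def guided_search(graph_edges, beam_width=3, steps=3):
    """Estimator-guided beam search: grow subgraphs edge by edge,
    keeping only the candidates the estimator ranks as most typical
    (and hence, by assumption, most likely to be frequent)."""
    edges = {tuple(sorted(e)) for e in graph_edges}
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    beam = [frozenset([e]) for e in sorted(edges)]
    for _ in range(steps):
        candidates = set()
        for sub in beam:
            nodes = {u for e in sub for u in e}
            for n in nodes:
                for m in adj[n]:
                    edge = tuple(sorted((n, m)))
                    if edge not in sub:
                        candidates.add(sub | {edge})
        if not candidates:
            break
        beam = sorted(candidates, key=typicality_score, reverse=True)[:beam_width]
    return beam
```

On a triangle with a pendant edge, the search surfaces the triangle first, since the density heuristic scores it highest among three-edge candidates.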
Related papers
- Test-time scaling of diffusions with flow maps [68.79792714591564]
A common recipe to improve diffusion models at test time is to introduce the gradient of the reward into the dynamics of the diffusion itself. We propose a simple solution by working directly with a flow map. By exploiting a relationship between the flow map and the velocity field governing the instantaneous transport, we construct an algorithm, Flow Map Trajectory Tilting (FMTT), which provably performs better ascent on the reward than standard test-time methods.
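FMTT itself is not specified in this summary; the sketch below illustrates only the "common recipe" the abstract mentions — nudging a sampler's dynamics with a reward gradient — in one dimension with Euler integration. The function name, the guidance scale, and the integration scheme are assumptions for illustration, not the paper's algorithm.

```python
def sample_with_reward_guidance(x0, velocity, reward_grad, scale=0.5, steps=100):
    """Generic test-time guidance: integrate the model's velocity field
    while nudging the trajectory along the reward gradient. The state
    drifts toward a compromise between the model and the reward."""
    x, dt = x0, 1.0 / steps
    for i in range(steps):
        t = i * dt
        x += (velocity(x, t) + scale * reward_grad(x)) * dt
    return x
```

With a velocity pulling toward 0 and a reward gradient pulling toward 2, the trajectory settles between the two attractors, which is exactly the tension that reward-tilting methods try to resolve.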
arXiv Detail & Related papers (2025-11-27T18:44:12Z) - Studying and Improving Graph Neural Network-based Motif Estimation [0.24578723416255746]
Graph Neural Networks (GNNs) are a predominant method for graph representation learning. Their application to network motif significance-profile (SP) prediction remains under-explored, with no established benchmarks in the literature. We propose to address this problem, framing SP estimation as a task independent of subgraph frequency estimation. Our approach shifts from frequency counting to direct SP estimation and formulates the problem as multitarget regression.
arXiv Detail & Related papers (2025-05-30T15:17:23Z) - Out-of-Distribution Detection on Graphs: A Survey [58.47395497985277]
Graph out-of-distribution (GOOD) detection focuses on identifying graph data that deviates from the distribution seen during training. We categorize existing methods into four types: enhancement-based, reconstruction-based, information propagation-based, and classification-based approaches. We discuss practical applications and theoretical foundations, highlighting the unique challenges posed by graph data.
arXiv Detail & Related papers (2025-02-12T04:07:12Z) - Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to $17.0\%$ AUROC improvement over state-of-the-art methods and can serve as a simple yet strong baseline in this under-developed area.
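The abstract does not spell out GNNSafe's score, but energy-based OOD detection commonly derives an energy from a classifier's logits. The toy helper below shows that standard construction (a negative log-sum-exp), not GNNSafe's exact formulation; confident, peaked logits yield lower energy than flat, uncertain ones.

```python
import math

def energy_score(logits):
    """Energy-based OOD score: E(x) = -log sum_k exp(logit_k), computed
    with the max-subtraction trick for numerical stability. Lower (more
    negative) energy suggests the input is more in-distribution."""
    m = max(logits)
    return -(m + math.log(sum(math.exp(l - m) for l in logits)))
```

Thresholding this score is then the usual detection rule: inputs whose energy exceeds a calibrated threshold are flagged as out-of-distribution.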
arXiv Detail & Related papers (2023-02-06T16:38:43Z) - Hub-aware Random Walk Graph Embedding Methods for Classification [44.99833362998488]
We propose two novel graph embedding algorithms based on random walks that are specifically designed for the node classification problem.
The proposed methods are experimentally evaluated by analyzing the classification performance of three classification algorithms trained on embeddings of real-world networks.
arXiv Detail & Related papers (2022-09-15T20:41:18Z) - Subgraph Frequency Distribution Estimation using Graph Neural Networks [17.02487540304784]
We propose GNNS, a novel representational learning framework that utilizes graph neural networks to sample subgraphs efficiently for estimating their frequency distribution.
Our framework includes an inference model and a generative model that learns hierarchical embeddings of nodes, subgraphs, and graph types.
With the learned model and embeddings, subgraphs are sampled in a highly scalable and parallel way and the frequency distribution estimation is then performed based on these sampled subgraphs.
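As a contrast to learned samplers like GNNS, the pure-Python sketch below shows the baseline idea the framework accelerates: estimate a subgraph frequency distribution by repeatedly sampling random connected k-node sets and bucketing them by an isomorphism invariant (here edge count, which suffices to distinguish 3-node connected shapes). The function name and sampling scheme are illustrative, not the paper's method.

```python
import random
from collections import Counter

def sample_frequency_distribution(adj, k=3, n_samples=2000, seed=0):
    """Sampling-based motif frequency estimate: grow a random connected
    k-node set from a random start node, then bucket the induced
    subgraph by its edge count (an isomorphism invariant for k=3)."""
    rng = random.Random(seed)
    nodes = list(adj)
    counts = Counter()
    for _ in range(n_samples):
        grown = {rng.choice(nodes)}
        frontier = set(adj[next(iter(grown))])
        while len(grown) < k and frontier:
            v = rng.choice(sorted(frontier))
            grown.add(v)
            frontier = {u for w in grown for u in adj[w]} - grown
        if len(grown) == k:
            edges = sum(1 for u in grown for v in grown if u < v and v in adj[u])
            counts[edges] += 1
    total = sum(counts.values())
    return {e: c / total for e, c in counts.items()}
```

Note that this naive growth process is biased toward dense regions; correcting such bias (or avoiding it with learned embeddings) is precisely what motivates frameworks like GNNS.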
arXiv Detail & Related papers (2022-07-14T06:23:38Z) - Ordered Subgraph Aggregation Networks [19.18478955240166]
Subgraph-enhanced graph neural networks (GNNs) have emerged, provably boosting the expressive power of standard (message-passing) GNNs.
Here, we introduce a theoretical framework and extend the known expressivity results of subgraph-enhanced GNNs.
We show that increasing subgraph size always increases the expressive power and develop a better understanding of their limitations.
arXiv Detail & Related papers (2022-06-22T15:19:34Z) - Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z) - Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning methods, showing that random augmentations lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space in contrast to existing techniques which embed each node to a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-12-15T01:45:32Z) - Scaling Up Graph Neural Networks Via Graph Coarsening [18.176326897605225]
Scalability of graph neural networks (GNNs) is one of the major challenges in machine learning.
In this paper, we propose to use graph coarsening for scalable training of GNNs.
We show that, by simply applying off-the-shelf coarsening methods, we can reduce the number of nodes by up to a factor of ten without a noticeable drop in classification accuracy.
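One "off-the-shelf" coarsening step can be as simple as greedy edge matching: pair each node with an unmatched neighbor and merge the pair into a supernode, roughly halving the node count per level. The sketch below is a generic illustration of that idea, not the paper's specific coarsening method.

```python
def coarsen_once(adj):
    """One level of greedy edge-matching coarsening: pair each unmatched
    node with its smallest unmatched neighbor, merge each pair into a
    supernode, and rebuild the edge set between supernodes."""
    merged = {}          # original node -> supernode id
    next_id = 0
    for u in sorted(adj):
        if u in merged:
            continue
        partner = next((v for v in sorted(adj[u]) if v not in merged), None)
        merged[u] = next_id
        if partner is not None:
            merged[partner] = next_id
        next_id += 1
    coarse = {}
    for u, nbrs in adj.items():
        for v in nbrs:
            a, b = merged[u], merged[v]
            if a != b:
                coarse.setdefault(a, set()).add(b)
                coarse.setdefault(b, set()).add(a)
    return coarse, merged
```

Applying this level repeatedly shrinks the graph geometrically, which is what makes training a GNN on the coarsened graph tractable.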
arXiv Detail & Related papers (2021-06-09T15:46:17Z) - Block-Approximated Exponential Random Graphs [77.4792558024487]
An important challenge in the field of exponential random graphs (ERGs) is the fitting of non-trivial ERGs on large graphs.
We propose an approximative framework for such non-trivial ERGs that results in dyadic-independence (i.e., edge-independent) distributions.
Our methods are scalable to sparse graphs consisting of millions of nodes.
arXiv Detail & Related papers (2020-02-14T11:42:16Z)
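Dyadic independence, as used in the last paper above, makes sampling trivial: every potential edge is an independent Bernoulli draw with its own probability. The helper below illustrates that property on a small probability matrix; it is a generic sketch of edge-independent sampling, not the paper's fitting procedure.

```python
import random

def sample_edge_independent(p, seed=0):
    """Sample an undirected graph from a dyadic-independence (edge-
    independent) distribution: each edge (i, j), i < j, is included
    with probability p[i][j], independently of all other edges."""
    rng = random.Random(seed)
    n = len(p)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p[i][j]]
```

Because edges are independent, the likelihood factorizes over dyads, which is what makes such approximations scalable to graphs with millions of nodes.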
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.