Fractional Heat Kernel for Semi-Supervised Graph Learning with Small Training Sample Size
- URL: http://arxiv.org/abs/2510.04440v1
- Date: Mon, 06 Oct 2025 02:15:46 GMT
- Title: Fractional Heat Kernel for Semi-Supervised Graph Learning with Small Training Sample Size
- Authors: Farid Bozorgnia, Vyacheslav Kungurtsev, Shirali Kadyrov, Mohsen Yousefnezhad
- Abstract summary: We introduce novel algorithms for label propagation and self-training using fractional heat kernel dynamics with a source term. We integrate the fractional heat kernel into Graph Neural Network architectures such as Graph Convolutional Networks and Graph Attention Networks, enhancing their expressiveness through adaptive, multi-hop diffusion. We demonstrate the effectiveness of this approach on standard datasets.
- Score: 4.067682699655706
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we introduce novel algorithms for label propagation and self-training using fractional heat kernel dynamics with a source term. We motivate the methodology through the classical correspondence of information theory with the physics of parabolic evolution equations. We integrate the fractional heat kernel into Graph Neural Network architectures such as Graph Convolutional Networks and Graph Attention Networks, enhancing their expressiveness through adaptive, multi-hop diffusion. Chebyshev polynomial approximations make the method computationally feasible on large graphs. Motivating variational formulations demonstrate that extending the classical diffusion model to fractional powers of the Laplacian yields nonlocal interactions that diffuse labels more globally. The balance between supervision of known labels and diffusion across the graph is particularly advantageous when only a small number of labeled training examples is present. We demonstrate the effectiveness of this approach on standard datasets.
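The dynamics described in the abstract amount to propagating a label matrix u by the fractional heat equation du/dt = -L^alpha u + f, where the source term f keeps re-injecting the known labels. Below is a minimal sketch, not the authors' implementation, of the key computational step: applying exp(-t L^alpha) to a label matrix via a Chebyshev polynomial approximation. It assumes a symmetric normalized graph Laplacian with spectrum in [0, 2], and all names (fractional_heat_filter, cheb_coeffs, t, alpha, order) are illustrative choices, not taken from the paper.

```python
import numpy as np
import scipy.sparse as sp

def cheb_coeffs(f, order, lam_max):
    """Chebyshev coefficients of f on [0, lam_max] (interval mapped to [-1, 1])."""
    N = order + 1
    theta = np.pi * (np.arange(N) + 0.5) / N           # Chebyshev node angles
    fv = f(lam_max * (np.cos(theta) + 1.0) / 2.0)      # f at nodes mapped to [0, lam_max]
    c = np.array([2.0 / N * np.dot(fv, np.cos(k * theta)) for k in range(N)])
    c[0] /= 2.0                                        # T_0 coefficient is halved
    return c

def fractional_heat_filter(L, Y, t=1.0, alpha=0.5, order=30, lam_max=2.0):
    """Approximate exp(-t L^alpha) @ Y without any eigendecomposition.

    L: sparse symmetric normalized Laplacian, spectrum assumed in [0, lam_max].
    Y: (n, c) matrix, e.g. one-hot rows for labeled nodes and zeros elsewhere.
    """
    c = cheb_coeffs(lambda lam: np.exp(-t * lam ** alpha), order, lam_max)
    Lt = (2.0 / lam_max) * L - sp.identity(L.shape[0])  # rescale spectrum to [-1, 1]
    T_prev, T_curr = Y, Lt @ Y                          # T_0(Lt) @ Y and T_1(Lt) @ Y
    out = c[0] * T_prev + c[1] * T_curr
    for k in range(2, order + 1):                       # recurrence: T_{k+1} = 2*Lt*T_k - T_{k-1}
        T_prev, T_curr = T_curr, 2.0 * (Lt @ T_curr) - T_prev
        out = out + c[k] * T_curr
    return out
```

In a self-training loop one would alternate applying this filter, re-clamping the labeled rows of Y (the source term), and assigning pseudo-labels by row-wise argmax; each application costs on the order of order * nnz(L) operations, which is what makes large graphs feasible.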
Related papers
- Generator-based Graph Generation via Heat Diffusion [9.143285110847138]
We propose a novel framework for generating graphs by adapting the Generator Matching paradigm to graph-structured data.
We leverage the graph Laplacian and its associated heat kernel to define a continuous-time diffusion on each graph.
A neural network is trained to match this generator by minimising a Bregman divergence between the true generator and a learnable surrogate.
arXiv Detail & Related papers (2026-02-03T15:04:58Z) - Port-Hamiltonian Architectural Bias for Long-Range Propagation in Deep Graph Networks [55.227976642410766]
The dynamics of information diffusion within graphs is a critical open issue that heavily influences graph representation learning.
Motivated by this, we introduce (port-)Hamiltonian Deep Graph Networks.
We reconcile under a single theoretical and practical framework both non-dissipative long-range propagation and non-conservative behaviors.
arXiv Detail & Related papers (2024-05-27T13:36:50Z) - Revealing Decurve Flows for Generalized Graph Propagation [108.80758541147418]
This study addresses the limitations of the traditional analysis of message-passing, central to graph learning, by defining generalized propagation with directed and weighted graphs.
We include a preliminary exploration of learned propagation patterns in datasets, a first in the field.
arXiv Detail & Related papers (2024-02-13T14:13:17Z) - Supercharging Graph Transformers with Advective Diffusion [28.40109111316014]
This paper proposes the Advective Diffusion Transformer (AdvDIFFormer), a physics-inspired graph Transformer model designed to generalize under topological distribution shifts.
We show that AdvDIFFormer has provable capability for controlling the generalization error under topological shifts.
Empirically, the model demonstrates superiority in various predictive tasks across information networks, molecular screening and protein interactions.
arXiv Detail & Related papers (2023-10-10T08:40:47Z) - A Fractional Graph Laplacian Approach to Oversmoothing [15.795926248847026]
We generalize the concept of oversmoothing from undirected to directed graphs.
We propose fractional graph Laplacian neural ODEs, which describe non-local dynamics; a minimal spectral sketch of such fractional operators appears after this list.
Our method is more flexible with respect to the convergence of the graph's Dirichlet energy, thereby mitigating oversmoothing.
arXiv Detail & Related papers (2023-05-22T14:52:33Z) - Conditional Diffusion Based on Discrete Graph Structures for Molecular Graph Generation [32.66694406638287]
We propose a Conditional Diffusion model based on discrete Graph Structures (CDGS) for molecular graph generation.
Specifically, we construct a forward graph diffusion process on both graph structures and inherent features through stochastic differential equations (SDEs).
We present a specialized hybrid graph noise prediction model that extracts the global context and the local node-edge dependency from intermediate graph states.
arXiv Detail & Related papers (2023-01-01T15:24:15Z) - Transductive Kernels for Gaussian Processes on Graphs [7.542220697870243]
We present a novel kernel for graphs with node feature data for semi-supervised learning.
The kernel is derived from a regularization framework by treating the graph and feature data as two spaces.
We show how numerous kernel-based models on graphs are instances of our design.
arXiv Detail & Related papers (2022-11-28T14:00:50Z) - DiGress: Discrete Denoising diffusion for graph generation [79.13904438217592]
DiGress is a discrete denoising diffusion model for generating graphs with categorical node and edge attributes.
It achieves state-of-the-art performance on molecular and non-molecular datasets, with up to 3x validity improvement.
It is also the first model to scale to the large GuacaMol dataset containing 1.3M drug-like molecules.
arXiv Detail & Related papers (2022-09-29T12:55:03Z) - Capturing Graphs with Hypo-Elliptic Diffusions [7.704064306361941]
We show that the distribution of random walks evolves according to a diffusion equation defined using the graph Laplacian.
This results in a novel tensor-valued graph operator, which we call the hypo-elliptic graph Laplacian.
We show that this method competes with graph transformers on datasets requiring long-range reasoning but scales only linearly in the number of edges.
arXiv Detail & Related papers (2022-05-27T16:47:34Z) - Score-based Generative Modeling of Graphs via the System of Stochastic Differential Equations [57.15855198512551]
We propose a novel score-based generative model for graphs with a continuous-time framework.
We show that our method is able to generate molecules that lie close to the training distribution yet do not violate the chemical valency rule.
arXiv Detail & Related papers (2022-02-05T08:21:04Z) - Non-separable Spatio-temporal Graph Kernels via SPDEs [69.4678086015418]
A lack of justified graph kernels for principled spatio-temporal modelling has held back their use in graph problems.
We leverage a link between stochastic partial differential equations (SPDEs) and non-separable spatio-temporal graphs to introduce a framework for deriving graph kernels via SPDEs.
We show that by providing novel tools for GP modelling on graphs, we outperform pre-existing graph kernels in real-world applications.
arXiv Detail & Related papers (2021-11-16T14:53:19Z) - Hyperbolic Graph Embedding with Enhanced Semi-Implicit Variational Inference [48.63194907060615]
We build on semi-implicit graph variational auto-encoders to capture higher-order statistics in a low-dimensional graph latent representation.
We incorporate hyperbolic geometry in the latent space through a Poincaré embedding to efficiently represent graphs exhibiting hierarchical structure.
arXiv Detail & Related papers (2020-10-31T05:48:34Z) - Multilayer Clustered Graph Learning [66.94201299553336]
We use contrastive loss as a data fidelity term, in order to properly aggregate the observed layers into a representative graph.
Experiments show that our method leads to a representative graph with well-separated clusters.
We use the learned graph for solving clustering problems.
arXiv Detail & Related papers (2020-10-29T09:58:02Z)
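Several entries above, like the main paper, hinge on fractional powers of the graph Laplacian. For reference, the exact operator is defined spectrally: if L = V diag(w) V^T, then L^alpha = V diag(w^alpha) V^T. The following dense sketch for small graphs is illustrative only, and the name fractional_laplacian is mine, not drawn from any of the papers.

```python
import numpy as np

def fractional_laplacian(L, alpha=0.5):
    """Exact L^alpha for a small symmetric graph Laplacian (dense, O(n^3))."""
    w, V = np.linalg.eigh(L)
    w = np.clip(w, 0.0, None)    # guard tiny negative eigenvalues from round-off
    return (V * w**alpha) @ V.T  # equals V @ diag(w**alpha) @ V.T
```

The result is generally dense even when L is sparse; this nonlocality is precisely what the fractional approaches above exploit, and why Chebyshev or ODE-based approximations are preferred at scale.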
This list is automatically generated from the titles and abstracts of the papers on this site.