Transfer Learning on Edge Connecting Probability Estimation under Graphon Model
- URL: http://arxiv.org/abs/2510.05527v2
- Date: Sat, 25 Oct 2025 03:44:16 GMT
- Title: Transfer Learning on Edge Connecting Probability Estimation under Graphon Model
- Authors: Yuyao Wang, Yu-Hung Cheng, Debarghya Mukherjee, Huimin Cheng,
- Abstract summary: We propose a transfer learning framework that integrates neighborhood smoothing and Gromov-Wasserstein optimal transport to align and transfer structural patterns between graphs. GTRANS includes an adaptive debiasing mechanism that identifies and corrects for target-specific deviations via residual smoothing. These improvements translate directly to enhanced performance in downstream applications, such as graph classification and link prediction.
- Score: 7.805468525082696
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graphon models provide a flexible nonparametric framework for estimating latent connectivity probabilities in networks, enabling a range of downstream applications such as link prediction and data augmentation. However, accurate graphon estimation typically requires a large graph, whereas in practice, one often only observes a small-sized network. One approach to addressing this issue is to adopt a transfer learning framework, which aims to improve estimation in a small target graph by leveraging structural information from a larger, related source graph. In this paper, we propose a novel method, namely GTRANS, a transfer learning framework that integrates neighborhood smoothing and Gromov-Wasserstein optimal transport to align and transfer structural patterns between graphs. To prevent negative transfer, GTRANS includes an adaptive debiasing mechanism that identifies and corrects for target-specific deviations via residual smoothing. We provide theoretical guarantees on the stability of the estimated alignment matrix and demonstrate the effectiveness of GTRANS in improving the accuracy of target graph estimation through extensive synthetic and real data experiments. These improvements translate directly to enhanced performance in downstream applications, such as the graph classification task and the link prediction task.
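To make the abstract's pipeline concrete, here is a minimal sketch of its two main ingredients: a simplified neighborhood-smoothing estimator for the edge probability matrix, and Gromov-Wasserstein optimal transport to couple source and target nodes. It uses the POT library; the `neighborhood_smoothing` helper, the barycentric projection, and the final blend are illustrative stand-ins, not the authors' GTRANS implementation (in particular, the adaptive debiasing step is replaced here by a naive average).

```python
import numpy as np
import ot  # POT: Python Optimal Transport (pip install pot)

def neighborhood_smoothing(A):
    """Simplified neighborhood smoothing (in the spirit of Zhang et al., 2017):
    estimate P = E[A] by averaging each node's adjacency row over its most
    structurally similar nodes."""
    n = A.shape[0]
    h = np.sqrt(np.log(n) / n)                    # common bandwidth choice
    S = A @ A / n                                 # similarity via common neighbors
    D = np.abs(S[:, None, :] - S[None, :, :]).max(axis=2)  # row dissimilarity
    P = np.zeros_like(A, dtype=float)
    for i in range(n):
        nbrs = D[i] <= np.quantile(D[i], h)       # structurally similar nodes
        P[i] = A[nbrs].mean(axis=0)
    return (P + P.T) / 2                          # symmetrize

# Toy source (large) and target (small) random graphs
rng = np.random.default_rng(0)
def sym_graph(n, p):
    A = np.triu((rng.random((n, n)) < p).astype(float), 1)
    return A + A.T
A_s, A_t = sym_graph(200, 0.3), sym_graph(60, 0.3)

P_s, P_t = neighborhood_smoothing(A_s), neighborhood_smoothing(A_t)

# Gromov-Wasserstein coupling between the two estimated structures
p, q = ot.unif(200), ot.unif(60)
T = ot.gromov.gromov_wasserstein(P_s, P_t, p, q, 'square_loss')

# Barycentric projection of the source estimate onto the target node set,
# then a naive blend in place of GTRANS's adaptive debiasing step
P_transferred = (60 ** 2) * T.T @ P_s @ T
P_final = 0.5 * P_transferred + 0.5 * P_t
```

The paper's residual-smoothing debiasing replaces the fixed 0.5/0.5 blend with a data-driven correction; the fixed blend above is only a placeholder.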
Related papers
- Regularized Adaptive Graph Learning for Large-Scale Traffic Forecasting [5.212619320601785]
We propose a Regularized Adaptive Graph Learning (RAGL) model for traffic prediction. Experiments on four large-scale real-world traffic datasets show that RAGL consistently outperforms state-of-the-art methods in prediction accuracy and exhibits competitive computational efficiency.
arXiv Detail & Related papers (2025-06-08T14:58:27Z)
- HGOT: Self-supervised Heterogeneous Graph Neural Network with Optimal Transport [29.705206754426953]
We propose a novel self-supervised Heterogeneous graph neural network with Optimal Transport (HGOT) method. HGOT employs the optimal transport mechanism to relieve the laborious sampling process of positive and negative samples. In the node classification task, HGOT achieves an average improvement of more than 6% in accuracy compared with state-of-the-art methods.
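As a rough illustration of how an optimal transport plan can replace hand-crafted positive/negative sampling, the sketch below matches nodes across two embedding views with entropic OT; the embeddings, sizes, and regularization are assumptions, not HGOT's actual setup.

```python
import numpy as np
import ot  # POT library

rng = np.random.default_rng(0)
z1 = rng.normal(size=(50, 16))   # view-1 node embeddings (assumed given)
z2 = rng.normal(size=(50, 16))   # view-2 node embeddings (assumed given)

M = ot.dist(z1, z2)                           # pairwise squared-Euclidean cost
a = b = ot.unif(50)
T = ot.sinkhorn(a, b, M / M.max(), reg=0.05)  # entropic OT plan

# Each row of T is a soft assignment; its argmax nominates a view-2
# "positive" partner for every view-1 node, with no manual sampling.
positives = T.argmax(axis=1)
```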
arXiv Detail & Related papers (2025-06-03T08:35:29Z)
- Smoothness Really Matters: A Simple Yet Effective Approach for Unsupervised Graph Domain Adaptation [28.214010408550394]
Unsupervised Graph Domain Adaptation (UGDA) seeks to bridge distribution shifts between domains by transferring knowledge from labeled source graphs to unlabeled target graphs. We introduce a novel approach for UGDA called Target-Domain Structural Smoothing (TDSS). TDSS is a simple and effective method designed to perform structural smoothing directly on the target graph.
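As a rough picture of what "structural smoothing directly on the target graph" can look like, the sketch below propagates node features through a symmetrically normalized adjacency with self-loops; TDSS's actual smoothing operator may differ, so treat the function and its parameters as assumptions.

```python
import numpy as np

def smooth_features(A, X, alpha=0.5, hops=2):
    """Lazy feature propagation over the target graph: nearby nodes end up
    with similar representations, which is the generic smoothing effect."""
    n = A.shape[0]
    A_hat = A + np.eye(n)                     # add self-loops
    d = A_hat.sum(axis=1)
    S = A_hat / np.sqrt(np.outer(d, d))       # D^{-1/2} (A + I) D^{-1/2}
    for _ in range(hops):
        X = (1 - alpha) * X + alpha * S @ X   # blend with propagated features
    return X
```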
arXiv Detail & Related papers (2024-12-16T10:56:58Z)
- Deep Prompt Tuning for Graph Transformers [55.2480439325792]
Fine-tuning is resource-intensive and requires storing multiple copies of large models.
We propose a novel approach called deep graph prompt tuning as an alternative to fine-tuning.
By freezing the pre-trained parameters and only updating the added tokens, our approach reduces the number of free parameters and eliminates the need for multiple model copies.
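The mechanism described here (freeze the pre-trained parameters, train only the added tokens) is easy to show in miniature. The backbone below is a stand-in `nn.TransformerEncoder` rather than the paper's graph transformer, and the class name is hypothetical.

```python
import torch
import torch.nn as nn

class PromptedGraphTransformer(nn.Module):
    """Illustrative prompt tuning: only self.prompts receives gradients."""
    def __init__(self, backbone: nn.Module, d_model: int, n_prompts: int = 8):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():   # freeze all pretrained weights
            p.requires_grad = False
        self.prompts = nn.Parameter(torch.randn(n_prompts, d_model) * 0.02)

    def forward(self, node_tokens):            # (batch, n_nodes, d_model)
        b = node_tokens.size(0)
        prompts = self.prompts.unsqueeze(0).expand(b, -1, -1)
        return self.backbone(torch.cat([prompts, node_tokens], dim=1))

backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(64, 4, batch_first=True), num_layers=2)
model = PromptedGraphTransformer(backbone, d_model=64)
trainable = [p for p in model.parameters() if p.requires_grad]
optim = torch.optim.Adam(trainable, lr=1e-3)   # only the prompt tokens train
```

Because the frozen backbone is shared, each downstream task needs to store only its small prompt tensor instead of a full model copy.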
arXiv Detail & Related papers (2023-09-18T20:12:17Z)
- CONVERT: Contrastive Graph Clustering with Reliable Augmentation [110.46658439733106]
We propose a novel CONtrastiVe Graph ClustEring network with Reliable AugmenTation (CONVERT).
In our method, the data augmentations are processed by the proposed reversible perturb-recover network.
To further guarantee the reliability of semantics, a novel semantic loss is presented to constrain the network.
arXiv Detail & Related papers (2023-08-17T13:07:09Z)
- You Only Transfer What You Share: Intersection-Induced Graph Transfer Learning for Link Prediction [79.15394378571132]
We investigate a previously overlooked phenomenon: in many cases, a densely connected, complementary graph can be found for the original graph.
The denser graph may share nodes with the original graph, which offers a natural bridge for transferring selective, meaningful knowledge.
We identify this setting as Graph Intersection-induced Transfer Learning (GITL), which is motivated by practical applications in e-commerce or academic co-authorship predictions.
arXiv Detail & Related papers (2023-02-27T22:56:06Z)
- Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
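A heavily simplified sketch of the two ideas in this entry: a bi-level scheme that alternates between model weights (inner problem) and structure parameters (outer problem), with the learned graph kept in low-rank form. The paper's actual formulation is more careful; everything below is an assumed toy setup.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n, d, r, c = 100, 16, 8, 3
X = torch.randn(n, d)                         # toy node features
y = torch.randint(0, c, (n,))                 # toy node labels

U = nn.Parameter(0.1 * torch.randn(n, r))     # low-rank structure factor
W = nn.Parameter(0.1 * torch.randn(d, c))     # one-layer "GNN" weights
opt_inner = torch.optim.Adam([W], lr=1e-2)    # inner: fit the model
opt_outer = torch.optim.Adam([U], lr=1e-3)    # outer: fit the structure
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    S = torch.sigmoid(U @ U.T)                # learned graph, rank <= r
    S = S / S.sum(dim=1, keepdim=True)        # row-stochastic propagation
    loss = loss_fn(S @ X @ W, y)
    opt_inner.zero_grad(); opt_outer.zero_grad()
    loss.backward()
    (opt_inner if step % 2 else opt_outer).step()  # alternate the two levels
```

The rank-r factor U is where the time/memory savings come from: S never needs to be stored or optimized as a free n-by-n matrix.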
arXiv Detail & Related papers (2022-05-06T03:37:00Z)
- A Graph Data Augmentation Strategy with Entropy Preserving [11.886325179121226]
We introduce a novel graph entropy definition as a quantitative index of the feature information carried by a graph.
To preserve graph entropy, we propose an effective strategy that generates training data through a perturbation mechanism.
Our proposed approach significantly enhances the robustness and generalization ability of GCNs during the training process.
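The paper defines its own graph entropy; as a stand-in, the sketch below uses the Shannon entropy of the degree distribution and accepts an edge flip only when that entropy barely moves, which captures the "perturb while preserving entropy" recipe in spirit only.

```python
import numpy as np

def degree_entropy(A):
    """Shannon entropy of the (normalized) degree distribution."""
    deg = A.sum(axis=1)
    p = deg / deg.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def entropy_preserving_perturb(A, n_flips=10, tol=1e-2, seed=0):
    rng = np.random.default_rng(seed)
    A = A.copy()
    base = degree_entropy(A)
    done, attempts = 0, 0
    while done < n_flips and attempts < 100 * n_flips:
        attempts += 1
        i, j = rng.integers(0, A.shape[0], size=2)
        if i == j:
            continue
        A[i, j] = A[j, i] = 1 - A[i, j]           # flip edge (i, j)
        if abs(degree_entropy(A) - base) > tol:
            A[i, j] = A[j, i] = 1 - A[i, j]       # reject: entropy drifted
        else:
            done += 1                             # accept the perturbation
    return A
```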
arXiv Detail & Related papers (2021-07-13T12:58:32Z)
- Training Robust Graph Neural Networks with Topology Adaptive Edge Dropping [116.26579152942162]
Graph neural networks (GNNs) are processing architectures that exploit graph structural information to model representations from network data.
Despite their success, GNNs suffer from sub-optimal generalization performance given limited training data.
This paper proposes Topology Adaptive Edge Dropping to improve generalization performance and learn robust GNN models.
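To illustrate what "topology adaptive" edge dropping means operationally, the sketch below drops each edge with a probability scaled by a degree-based redundancy proxy; the paper derives its drop weights differently, so this heuristic is an assumption for demonstration only.

```python
import numpy as np

def topology_adaptive_drop(A, base_rate=0.2, seed=0):
    """Drop edges between high-degree nodes more often, on the heuristic
    that such edges are more redundant for message passing."""
    rng = np.random.default_rng(seed)
    deg = A.sum(axis=1)
    i, j = np.triu_indices_from(A, k=1)
    mask = A[i, j] > 0                            # existing undirected edges
    i, j = i[mask], j[mask]
    score = deg[i] * deg[j]                       # redundancy proxy
    p_drop = np.clip(base_rate * score / score.mean(), 0.0, 0.9)
    drop = rng.random(len(i)) < p_drop            # per-edge adaptive rate
    A2 = A.copy()
    A2[i[drop], j[drop]] = A2[j[drop], i[drop]] = 0
    return A2
```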
arXiv Detail & Related papers (2021-06-05T13:20:36Z)
- Block-Approximated Exponential Random Graphs [77.4792558024487]
An important challenge in the field of exponential random graphs (ERGs) is the fitting of non-trivial ERGs on large graphs.
We propose an approximation framework for such non-trivial ERGs that yields dyadic-independence (i.e., edge-independent) distributions.
Our methods are scalable to sparse graphs consisting of millions of nodes.
arXiv Detail & Related papers (2020-02-14T11:42:16Z)
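Dyadic independence is what makes these fits scalable: once the model reduces to independent per-edge Bernoulli probabilities, sampling a graph is a single vectorized draw. A dense toy version with an assumed fitted block matrix `B` (for millions of nodes one would instead sample sparsely block by block rather than materialize `P`):

```python
import numpy as np

rng = np.random.default_rng(0)
B = np.array([[0.20, 0.02],                   # fitted block probabilities
              [0.02, 0.15]])                  # (assumed, for illustration)
z = rng.integers(0, 2, size=1000)             # node-to-block assignments

P = B[z][:, z]                                # per-edge probability matrix
A = (rng.random(P.shape) < P).astype(int)     # independent edge draws
A = np.triu(A, 1); A = A + A.T                # undirected, no self-loops
```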