Universal Link Predictor By In-Context Learning on Graphs
- URL: http://arxiv.org/abs/2402.07738v2
- Date: Thu, 15 Feb 2024 15:19:30 GMT
- Title: Universal Link Predictor By In-Context Learning on Graphs
- Authors: Kaiwen Dong, Haitao Mao, Zhichun Guo, Nitesh V. Chawla
- Abstract summary: We introduce the Universal Link Predictor (UniLP), a novel model that combines the generalizability of heuristic approaches with the pattern-learning capabilities of parametric models.
UniLP is designed to autonomously identify connectivity patterns across diverse graphs, ready for immediate application to any unseen graph dataset without targeted training.
- Score: 27.394215950768643
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Link prediction is a crucial task in graph machine learning, where the goal
is to infer missing or future links within a graph. Traditional approaches
leverage heuristic methods based on widely observed connectivity patterns,
offering broad applicability and generalizability without the need for model
training. Despite their utility, these methods are limited by their reliance on
human-derived heuristics and lack the adaptability of data-driven approaches.
Conversely, parametric link predictors excel at automatically learning
connectivity patterns from data and achieve state-of-the-art performance, but
they fall short of transferring directly across different graphs; instead, they
require costly training and hyperparameter optimization to adapt to each target
graph. In this work, we introduce the Universal Link Predictor (UniLP), a novel
model that combines the generalizability of heuristic approaches with the
pattern learning capabilities of parametric models. UniLP is designed to
autonomously identify connectivity patterns across diverse graphs, ready for
immediate application to any unseen graph dataset without targeted training. We
address the challenge of conflicting connectivity patterns, which arise from
the unique distributions of different graphs, through In-context Learning
(ICL). This approach allows UniLP to dynamically adjust to
various target graphs based on contextual demonstrations, thereby avoiding
negative transfer. Through rigorous experimentation, we demonstrate UniLP's
effectiveness in adapting to new, unseen graphs at test time, showing that it
performs comparably to, or even outperforms, parametric models that have been
fine-tuned for specific datasets. Our findings highlight UniLP's potential
to set a new standard in link prediction, combining the strengths of heuristic
and parametric methods in a single, versatile framework.
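To make the contrast between heuristic and in-context prediction concrete, the following Python sketch places a classic training-free heuristic (common neighbors) next to a hypothetical in-context scoring interface. The UniLPSketch class, its encode_pair encoder, and the similarity-weighted aggregation over demonstration links are assumptions for illustration; the abstract does not specify UniLP's actual architecture.

```python
# Minimal sketch (not the paper's implementation) contrasting a training-free
# heuristic link predictor with the kind of in-context interface described in
# the abstract. UniLPSketch, encode_pair, and the similarity-weighted scoring
# are illustrative assumptions only.

import networkx as nx


def common_neighbors_score(G: nx.Graph, u, v) -> int:
    """Classic heuristic: score a candidate link (u, v) by shared neighbors."""
    return len(set(G.neighbors(u)) & set(G.neighbors(v)))


class UniLPSketch:
    """Hypothetical in-context link predictor.

    Instead of being fine-tuned on the target graph, it conditions on a few
    labeled demonstration links drawn from that graph (the "context") and
    scores a query link by its similarity to those demonstrations.
    """

    def __init__(self, encode_pair):
        # encode_pair(G, u, v) -> list[float]; stands in for a pretrained
        # subgraph/link encoder (assumed, not specified by the abstract).
        self.encode_pair = encode_pair

    def predict(self, G, query, demonstrations):
        """Score query = (u, v) given [((u, v), label)] with label in {+1, -1}."""
        q = self.encode_pair(G, *query)
        score = 0.0
        for (u, v), label in demonstrations:
            d = self.encode_pair(G, u, v)
            score += label * sum(a * b for a, b in zip(q, d))  # dot-product sim
        return score


if __name__ == "__main__":
    G = nx.karate_club_graph()
    print(common_neighbors_score(G, 0, 33))  # heuristic score, no training
```

The point of the sketch is the interface: adaptation happens through the demonstration set passed at prediction time rather than through gradient updates on the target graph, which is what allows immediate application to unseen graphs.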
Related papers
- Toward Personalized Federated Node Classification in One-shot Communication [27.325478113745206]
We propose a one-shot personalized Federated Graph Learning method for node classification.
Our method estimates and aggregates class-wise feature distribution statistics to construct a global pseudo-graph on the server.
Our method significantly outperforms state-of-the-art baselines across various settings.
arXiv Detail & Related papers (2024-11-18T05:59:29Z) - GALA: Graph Diffusion-based Alignment with Jigsaw for Source-free Domain Adaptation [13.317620250521124]
Source-free domain adaptation is a crucial machine learning topic, as it contains numerous applications in the real world.
Recent graph neural network (GNN) approaches can suffer from serious performance decline due to domain shift and label scarcity.
We propose a novel method named Graph Diffusion-based Alignment with Jigsaw (GALA), tailored for source-free graph domain adaptation.
arXiv Detail & Related papers (2024-10-22T01:32:46Z) - Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the data issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z) - GraphControl: Adding Conditional Control to Universal Graph Pre-trained
Models for Graph Domain Transfer Learning [28.04023419006392]
Graph self-supervised algorithms have achieved significant success in acquiring generic knowledge from abundant unlabeled graph data.
Different graphs, even across seemingly similar domains, can differ significantly in terms of attribute semantics.
We introduce an innovative deployment module coined as GraphControl, motivated by ControlNet, to realize better graph domain transfer learning.
arXiv Detail & Related papers (2023-10-11T10:30:49Z) - A Graph-Enhanced Click Model for Web Search [67.27218481132185]
We propose a novel graph-enhanced click model (GraphCM) for web search.
We exploit both intra-session and inter-session information to address the sparsity and cold-start problems.
arXiv Detail & Related papers (2022-06-17T08:32:43Z) - Data-heterogeneity-aware Mixing for Decentralized Learning [63.83913592085953]
We characterize the dependence of convergence on the relationship between the mixing weights of the graph and the data heterogeneity across nodes.
We propose a metric that quantifies the ability of a graph to mix the current gradients.
Motivated by our analysis, we propose an approach that periodically and efficiently optimizes the metric.
arXiv Detail & Related papers (2022-04-13T15:54:35Z) - From Canonical Correlation Analysis to Self-supervised Graph Neural
Networks [99.44881722969046]
We introduce a conceptually simple yet effective model for self-supervised representation learning with graph data.
We optimize an innovative feature-level objective inspired by classical Canonical Correlation Analysis.
Our method performs competitively on seven public graph datasets.
arXiv Detail & Related papers (2021-06-23T15:55:47Z) - Deepened Graph Auto-Encoders Help Stabilize and Enhance Link Prediction [11.927046591097623]
Link prediction is a relatively under-studied graph learning task, with current state-of-the-art models based on one or two layers of shallow graph auto-encoder (GAE) architectures.
In this paper, we focus on addressing a limitation of current methods for link prediction, which can only use shallow GAEs and variational GAEs.
Our proposed methods innovatively incorporate standard auto-encoders (AEs) into the architectures of GAEs, where standard AEs are leveraged to learn essential, low-dimensional representations by seamlessly integrating adjacency information and node features.
arXiv Detail & Related papers (2021-03-21T14:43:10Z) - Tensor Graph Convolutional Networks for Multi-relational and Robust
Learning [74.05478502080658]
This paper introduces a tensor-graph convolutional network (TGCN) for scalable semi-supervised learning (SSL) from data associated with a collection of graphs, which are represented by a tensor.
The proposed architecture achieves markedly improved performance relative to standard GCNs, copes with state-of-the-art adversarial attacks, and leads to remarkable SSL performance over protein-to-protein interaction networks.
arXiv Detail & Related papers (2020-03-15T02:33:21Z) - Graph Ordering: Towards the Optimal by Learning [69.72656588714155]
Graph representation learning has achieved a remarkable success in many graph-based applications, such as node classification, prediction, and community detection.
However, for some kinds of graph applications, such as graph compression and edge partitioning, it is very hard to reduce them to graph representation learning tasks.
In this paper, we propose to attack the graph ordering problem behind such applications by a novel learning approach.
arXiv Detail & Related papers (2020-01-18T09:14:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.