A Condensed Transition Graph Framework for Zero-shot Link Prediction
with Large Language Models
- URL: http://arxiv.org/abs/2402.10779v1
- Date: Fri, 16 Feb 2024 16:02:33 GMT
- Title: A Condensed Transition Graph Framework for Zero-shot Link Prediction
with Large Language Models
- Authors: Mingchen Li, Chen Ling, Rui Zhang, Liang Zhao
- Abstract summary: We introduce a Condensed Transition Graph Framework for Zero-Shot Link Prediction (CTLP).
CTLP encodes all the paths' information in linear time complexity to predict unseen relations between entities.
Our proposed CTLP method achieves state-of-the-art performance on three standard ZSLP datasets.
- Score: 22.089751438495956
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Zero-shot link prediction (ZSLP) on knowledge graphs aims at automatically
identifying relations between given entities. Existing methods primarily employ
auxiliary information to predict tail entity given head entity and its
relation, yet face challenges due to the occasional unavailability of such
detailed information and the inherent simplicity of predicting tail entities
based on semantic similarities. Even though Large Language Models (LLMs) offer
a promising solution to predict unobserved relations between the head and tail
entity in a zero-shot manner, their performance is still restricted due to the
inability to leverage all the (exponentially many) paths' information between
two entities, which are critical in collectively indicating their relation
types. To address this, in this work, we introduce a Condensed Transition Graph
Framework for Zero-Shot Link Prediction (CTLP), which encodes all the paths'
information in linear time complexity to predict unseen relations between
entities, attaining both efficiency and information preservation. Specifically,
we design a condensed transition graph encoder with theoretical guarantees on
its coverage, expressiveness, and efficiency. It is learned by a transition
graph contrastive learning strategy. Subsequently, we design a soft instruction
tuning scheme to learn and map the all-path embedding to the input space of
LLMs. Experimental results show that our proposed CTLP method achieves
state-of-the-art performance on three standard ZSLP datasets.
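The abstract's central claim, that all (exponentially many) paths between two entities can be encoded in linear time, rests on a standard transition-matrix recurrence: summing discounted powers of the adjacency matrix aggregates every path without ever enumerating them. The sketch below illustrates that trick only; it is not the paper's actual encoder, and the toy graph, discount factor `beta`, and horizon `K` are all assumptions.

```python
import numpy as np

# Toy adjacency matrix of a 4-node graph (1 = directed edge present).
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=float)

def all_path_scores(A, beta=0.5, K=10):
    """Accumulate discounted path counts up to length K.

    Computes S = sum_{k=1}^{K} beta^k * A^k with a single running
    matrix product, so the cost is linear in K even though the number
    of paths can grow exponentially.
    """
    S = np.zeros_like(A)
    P = np.eye(A.shape[0])      # P holds (beta * A)^k as k grows
    for _ in range(K):
        P = beta * P @ A
        S += P
    return S

S = all_path_scores(A)
# S[0, 3] aggregates every path from node 0 to node 3, weighted by length:
# 0->1->3 contributes beta^2 and 0->1->2->3 contributes beta^3.
```

Entry (i, j) of the result summarizes all i-to-j paths at once, which is the kind of all-path signal the abstract argues is critical for inferring the relation type between two entities.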
Related papers
- S^2Former-OR: Single-Stage Bi-Modal Transformer for Scene Graph Generation in OR [50.435592120607815]
Scene graph generation (SGG) of surgical procedures is crucial to enhancing holistic cognitive intelligence in the operating room (OR).
Previous works have primarily relied on multi-stage learning, where the generated semantic scene graphs depend on intermediate processes with pose estimation and object detection.
In this study, we introduce a novel single-stage bi-modal transformer framework for SGG in the OR, termed S2Former-OR.
arXiv Detail & Related papers (2024-02-22T11:40:49Z)
- Universal Link Predictor By In-Context Learning on Graphs [27.394215950768643]
We introduce the Universal Link Predictor (UniLP), a novel model that combines the generalizability of heuristic approaches with the pattern learning capabilities of parametric models.
UniLP is designed to autonomously identify connectivity patterns across diverse graphs, ready for immediate application to any unseen graph dataset without targeted training.
arXiv Detail & Related papers (2024-02-12T15:52:27Z)
- RLIPv2: Fast Scaling of Relational Language-Image Pre-training [53.21796397618875]
We propose RLIPv2, a fast converging model that enables the relational scaling of pre-training to large-scale pseudo-labelled scene graph data.
Asymmetric Language-Image Fusion (ALIF) facilitates earlier and deeper gated cross-modal fusion with sparsified language encoding.
RLIPv2 shows state-of-the-art performance on three benchmarks under fully-finetuning, few-shot and zero-shot settings.
arXiv Detail & Related papers (2023-08-18T07:17:09Z)
- Normalizing Flow-based Neural Process for Few-Shot Knowledge Graph Completion [69.55700751102376]
Few-shot knowledge graph completion (FKGC) aims to predict missing facts for unseen relations with few-shot associated facts.
Existing FKGC methods are based on metric learning or meta-learning, which often suffer from the out-of-distribution and overfitting problems.
In this paper, we propose a normalizing flow-based neural process for few-shot knowledge graph completion (NP-FKGC).
arXiv Detail & Related papers (2023-04-17T11:42:28Z)
- Joint embedding in Hierarchical distance and semantic representation learning for link prediction [4.18621837986466]
We propose a novel knowledge graph embedding model for the link prediction task, namely, HIE.
HIE models each triplet (h, r, t) in a distance measurement space and a semantic measurement space simultaneously.
HIE is introduced into hierarchical-aware space to leverage rich hierarchical information of entities and relations for better representation learning.
arXiv Detail & Related papers (2023-03-28T00:42:29Z)
- Localized Contrastive Learning on Graphs [110.54606263711385]
We introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL).
In spite of its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs with various scales and properties.
arXiv Detail & Related papers (2022-12-08T23:36:00Z)
- SMiLE: Schema-augmented Multi-level Contrastive Learning for Knowledge Graph Link Prediction [28.87290783250351]
Link prediction is the task of inferring missing links between entities in knowledge graphs.
We propose a novel Schema-augmented Multi-level contrastive LEarning framework (SMiLE) to conduct knowledge graph link prediction.
arXiv Detail & Related papers (2022-10-10T17:40:19Z)
- ConstGCN: Constrained Transmission-based Graph Convolutional Networks for Document-level Relation Extraction [24.970508961370548]
Document-level relation extraction with graph neural networks faces a fundamental graph construction gap between training and inference.
We propose ConstGCN, a novel graph convolutional network which performs knowledge-based information propagation between entities.
Experimental results show that our method outperforms the previous state-of-the-art (SOTA) approaches on the DocRE dataset.
arXiv Detail & Related papers (2022-10-08T07:36:04Z)
- TranS: Transition-based Knowledge Graph Embedding with Synthetic Relation Representation [14.759663752868487]
We propose a novel transition-based method, TranS, for knowledge graph embedding.
TranS replaces the single relation vector of traditional scoring patterns with a synthetic relation representation, which addresses the limitations of single-vector relations effectively and efficiently.
Experiments on a large knowledge graph dataset, ogbl-wikikg2, show that our model achieves state-of-the-art results.
arXiv Detail & Related papers (2022-04-18T16:55:25Z)
- R-VGAE: Relational-Variational Graph Autoencoder for Unsupervised Prerequisite Chain Learning [83.13634692459486]
We propose a model called Relational-Variational Graph AutoEncoder (R-VGAE) to predict concept relations within a graph consisting of concept and resource nodes.
Results show that our unsupervised approach outperforms graph-based semi-supervised methods and other baseline methods by up to 9.77% and 10.47% in terms of prerequisite relation prediction accuracy and F1 score.
Our method is notably the first graph-based model that attempts to make use of deep learning representations for the task of unsupervised prerequisite learning.
arXiv Detail & Related papers (2020-04-22T14:48:03Z)
- Graph Representation Learning via Graphical Mutual Information Maximization [86.32278001019854]
We propose a novel concept, Graphical Mutual Information (GMI), to measure the correlation between input graphs and high-level hidden representations.
We develop an unsupervised learning model trained by maximizing GMI between the input and output of a graph neural encoder.
arXiv Detail & Related papers (2020-02-04T08:33:49Z)
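As a rough illustration of the last entry's idea, training an encoder by maximizing mutual information between a graph's inputs and its hidden representations, the sketch below computes a Jensen-Shannon MI lower bound with a bilinear critic, in the spirit of the GMI/DGI family of objectives. This is an assumption-laden toy, not that paper's actual estimator: the critic matrix `W`, the shuffle-based negative sampling, and the random data are all made up.

```python
import numpy as np

def jsd_mi_estimate(x, h, W):
    """Jensen-Shannon MI lower bound with a bilinear critic f(x, h) = x W h^T.

    Positive pairs (x[i], h[i]) come from the same node; negatives pair each
    x[i] with another node's representation via a cyclic shift of h.
    """
    def softplus(v):
        return np.logaddexp(0.0, v)          # log(1 + exp(v)), numerically stable
    pos = np.sum((x @ W) * h, axis=1)        # critic scores for true pairs
    neg = np.sum((x @ W) * np.roll(h, 1, axis=0), axis=1)  # corrupted pairs
    return np.mean(-softplus(-pos)) - np.mean(softplus(neg))

rng = np.random.default_rng(0)
x = rng.normal(size=(64, 8))                 # input node features
W = np.eye(8)                                # fixed critic, for the demo only
mi_faithful = jsd_mi_estimate(x, x, W)                       # encoder preserves input
mi_random = jsd_mi_estimate(x, rng.normal(size=(64, 8)), W)  # encoder ignores input
# A representation that preserves its input scores a higher MI estimate,
# which is exactly the signal the maximization objective rewards.
```

In the actual GMI setting the critic and encoder are trained jointly with gradient ascent on this bound; the demo fixes both so the comparison between a faithful and a random "encoder" is visible in a few lines.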
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.