Anchoring Path for Inductive Relation Prediction in Knowledge Graphs
- URL: http://arxiv.org/abs/2312.13596v1
- Date: Thu, 21 Dec 2023 06:02:25 GMT
- Title: Anchoring Path for Inductive Relation Prediction in Knowledge Graphs
- Authors: Zhixiang Su, Di Wang, Chunyan Miao and Lizhen Cui
- Abstract summary: APST takes both APs and CPs as the inputs of a unified Sentence Transformer architecture.
We evaluate APST on three public datasets and achieve state-of-the-art (SOTA) performance in 30 of 36 transductive, inductive, and few-shot experimental settings.
- Score: 69.81600732388182
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Aiming to accurately predict missing edges representing relations between
entities, which are pervasive in real-world Knowledge Graphs (KGs), relation
prediction plays a critical role in enhancing the comprehensiveness and utility
of KGs. Recent research focuses on path-based methods due to their inductive
and explainable properties. However, these methods face a great challenge when
many reasoning paths do not form Closed Paths (CPs) in the KG. To address
this challenge, we propose Anchoring Path Sentence Transformer (APST) by
introducing Anchoring Paths (APs) to alleviate the reliance on CPs.
Specifically, we develop a search-based description retrieval method to enrich
entity descriptions and an assessment mechanism to evaluate the rationality of
APs. APST takes both APs and CPs as the inputs of a unified Sentence
Transformer architecture, enabling comprehensive predictions and high-quality
explanations. We evaluate APST on three public datasets and achieve
state-of-the-art (SOTA) performance in 30 of 36 transductive, inductive, and
few-shot experimental settings.
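As a rough illustration of the idea of feeding paths to a Sentence Transformer architecture, the sketch below verbalizes a relation path into a single natural-language sentence that a sentence encoder could consume. The entities, relations, and sentence template here are invented for illustration; they are not taken from the paper, and the actual APST verbalization scheme may differ.

```python
# Hypothetical sketch: turning a relation path into a sentence for a
# sentence-encoder input. All names and the template are assumptions.

def verbalize_path(head, path, tail):
    """Render a path of (head, relation, tail) hops as one sentence."""
    hops = " and ".join(f"{h} {r} {t}" for h, r, t in path)
    return f"{head} is connected to {tail} because {hops}."

# A toy Closed Path: every hop lies between the query head and tail.
cp = [("Paris", "is capital of", "France"),
      ("France", "is located in", "Europe")]
sentence = verbalize_path("Paris", cp, "Europe")
print(sentence)
```

In a full pipeline, sentences like this one would be embedded by a sentence encoder and scored against the candidate relation; the point of the sketch is only the path-to-sentence step.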
Related papers
- Few-shot Knowledge Graph Relational Reasoning via Subgraph Adaptation [51.47994645529258]
Few-shot Knowledge Graph (KG) Reasoning aims to predict unseen triplets (i.e., query triplets) for rare relations in KGs.
We propose SAFER (Subgraph Adaptation for Few-shot Reasoning), a novel approach that effectively adapts the information in contextualized graphs to various subgraphs.
arXiv Detail & Related papers (2024-06-19T21:40:35Z)
- Query-Enhanced Adaptive Semantic Path Reasoning for Inductive Knowledge Graph Completion [45.9995456784049]
This paper proposes the Query-Enhanced Adaptive Semantic Path Reasoning (QASPR) framework.
QASPR captures both the structural and semantic information of KGs to enhance the inductive KGC task.
experimental results demonstrate that QASPR achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-06-04T11:02:15Z)
- KGExplainer: Towards Exploring Connected Subgraph Explanations for Knowledge Graph Completion [18.497296711526268]
We present KGExplainer, a model-agnostic method that identifies connected subgraphs and distills an evaluator to assess them quantitatively.
Experiments on benchmark datasets demonstrate that KGExplainer achieves promising improvements and an optimal ratio of 83.3% in human evaluation.
arXiv Detail & Related papers (2024-04-05T05:02:12Z)
- Ladder-of-Thought: Using Knowledge as Steps to Elevate Stance Detection [73.31406286956535]
We introduce the Ladder-of-Thought (LoT) for the stance detection task.
LoT directs the small LMs to assimilate high-quality external knowledge, refining the intermediate rationales produced.
Our empirical evaluations underscore LoT's efficacy, marking a 16% improvement over GPT-3.5 and a 10% enhancement compared to GPT-3.5 with CoT on the stance detection task.
arXiv Detail & Related papers (2023-08-31T14:31:48Z)
- Exploring & Exploiting High-Order Graph Structure for Sparse Knowledge Graph Completion [20.45256490854869]
We present a novel framework, LR-GCN, that is able to automatically capture valuable long-range dependency among entities.
The proposed approach comprises two main components: a GNN-based predictor and a reasoning path distiller.
arXiv Detail & Related papers (2023-06-29T15:35:34Z)
- Robust Saliency-Aware Distillation for Few-shot Fine-grained Visual Recognition [57.08108545219043]
Recognizing novel sub-categories with scarce samples is an essential and challenging research topic in computer vision.
Existing literature addresses this challenge by employing local-based representation approaches.
This article proposes a novel model, Robust Saliency-aware Distillation (RSaD), for few-shot fine-grained visual recognition.
arXiv Detail & Related papers (2023-05-12T00:13:17Z)
- Inductive Relation Prediction from Relational Paths and Context with Hierarchical Transformers [23.07740200588382]
This paper proposes a novel method that captures both connections between entities and the intrinsic nature of entities.
REPORT relies solely on relation semantics and can naturally generalize to the fully-inductive setting.
In the experiments, REPORT performs consistently better than all baselines on almost all eight version subsets of the two fully-inductive datasets.
arXiv Detail & Related papers (2023-04-01T03:49:47Z)
- Multi-Aspect Explainable Inductive Relation Prediction by Sentence Transformer [60.75757851637566]
We introduce the concepts of relation path coverage and relation path confidence to filter out unreliable paths prior to model training to elevate the model performance.
We propose Knowledge Reasoning Sentence Transformer (KRST) to predict inductive relations in knowledge graphs.
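The coverage/confidence filtering idea above can be sketched in a toy form. The exact definitions used by KRST are not given here, so the sketch assumes a common rule-mining reading: "coverage" as the fraction of positive pairs for the target relation that a path pattern connects, and "confidence" as the fraction of pairs the path connects that actually hold the relation. All path names, pairs, and thresholds below are invented.

```python
# Toy sketch of path filtering by coverage and confidence (assumed
# definitions; data and thresholds are invented for illustration).

# (head, tail) pairs connected by each candidate relation path.
path_pairs = {
    "born_in -> capital_of": {("alice", "france"), ("bob", "spain"),
                              ("eve", "mars")},
    "works_at -> located_in": {("alice", "france")},
}
# Pairs for which the target relation actually holds.
positives = {("alice", "france"), ("bob", "spain")}

def coverage(pairs):
    """Fraction of positive pairs this path connects."""
    return len(pairs & positives) / len(positives)

def confidence(pairs):
    """Fraction of this path's pairs that are actually positive."""
    return len(pairs & positives) / len(pairs)

# Keep only paths that are both broad enough and precise enough.
reliable = [p for p, pairs in path_pairs.items()
            if coverage(pairs) >= 0.6 and confidence(pairs) >= 0.5]
```

Here the first path covers both positives (coverage 1.0) with two of three connected pairs positive (confidence ~0.67), so it survives, while the second covers only half the positives and is dropped; unreliable paths are filtered out before the model ever trains on them.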
arXiv Detail & Related papers (2023-01-04T15:33:49Z)
- Supporting Vision-Language Model Inference with Confounder-pruning Knowledge Prompt [71.77504700496004]
Vision-language models are pre-trained by aligning image-text pairs in a common space to deal with open-set visual concepts.
To boost the transferability of the pre-trained models, recent works adopt fixed or learnable prompts.
However, how and what prompts can improve inference performance remains unclear.
arXiv Detail & Related papers (2022-05-23T07:51:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.